Practice Exam - 3 Flashcards

1
Q

1. A company wants to run an application on AWS. The company plans to provision its application in Docker containers running in an Amazon ECS cluster. The application requires a MySQL database and the company plans to use Amazon RDS. What is the MOST cost-effective solution to meet these requirements?

  1. Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
  2. Create an ECS cluster using On-Demand Instances. Provision the database using On-Demand Instances.
  3. Create an ECS cluster using On-Demand Instances. Provision the database using Spot Instances.
  4. Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database using On-Demand Instances.
A
  1. Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
  2. Create an ECS cluster using On-Demand Instances. Provision the database using On-Demand Instances.
  3. Create an ECS cluster using On-Demand Instances. Provision the database using Spot Instances.
  4. Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database using On-Demand Instances.
2
Q

A company has a requirement to store documents that will be accessed by a serverless application. The documents will be accessed frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years. What is the MOST cost-effective solution to meet these requirements?

  1. Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
  2. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
  3. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
  4. Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years.
A
  1. Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
  2. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
  3. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
  4. Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years.
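The lifecycle-based options can be implemented entirely with an S3 lifecycle configuration. Below is a minimal boto3 sketch, assuming a hypothetical bucket named document-archive; 90 days approximates the 3-month frequent-access window and 2,555 days approximates 7 years.

    import boto3

    s3 = boto3.client("s3")

    # Transition objects to Glacier after ~3 months and expire them after ~7 years.
    # The bucket name and rule ID are placeholders for illustration only.
    s3.put_bucket_lifecycle_configuration(
        Bucket="document-archive",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Filter": {"Prefix": ""},          # apply to all objects
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 90, "StorageClass": "GLACIER"}
                    ],
                    "Expiration": {"Days": 2555},      # roughly 7 years
                }
            ]
        },
    )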
3
Q

A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements:

· Centrally manage the creation of infrastructure services using a central AWS account.

· Distribute infrastructure services to multiple accounts in AWS Organizations.

· Follow the principle of least privilege to limit end users’ permissions for launching and managing applications.

Which combination of actions using AWS services will meet these requirements? (Select TWO.)

  1. Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.
  2. Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers and add all users to the group. Apply launch constraints.
  3. Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organization’s SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
  4. Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers and add all users to the group. Apply launch constraints.
  5. Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
A
  1. Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.
  2. Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers and add all users to the group. Apply launch constraints.
  3. Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organization’s SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
  4. Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers and add all users to the group. Apply launch constraints.
  5. Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
4
Q

A database for an eCommerce website was deployed on an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The database performed well for several weeks until a peak shopping period, when customers experienced slow performance and timeouts. Amazon CloudWatch metrics indicate that reads and writes to the DB instance were experiencing long response times. The metrics also show that CPU utilization is below 50%, plenty of memory is available, and there is sufficient free storage space. There is no evidence of database connectivity issues in the application server logs.

What could be the root cause of database performance issues?

  1. The increased load resulted in the maximum number of allowed connections to the database instance.
  2. A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
  3. The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
  4. A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instances.
A
  1. The increased load resulted in the maximum number of allowed connections to the database instance.
  2. A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
  3. The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
  4. A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instances.
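If burst credit exhaustion on General Purpose SSD (gp2) storage is suspected, the BurstBalance CloudWatch metric for the DB instance confirms it quickly. A hedged boto3 sketch follows, with a placeholder DB instance identifier.

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Average BurstBalance (%) over the last 24 hours for a hypothetical DB instance.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="BurstBalance",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "ecommerce-db"}],
        StartTime=datetime.utcnow() - timedelta(hours=24),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 2))  # 0.0 means credits exhausted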
5
Q

A company is using multiple AWS accounts. The company’s DNS records are stored in a private Amazon Route 53 hosted zone in the management account and their applications are running in a production account.

A Solutions Architect is attempting to deploy an application into the production account. The application must resolve a CNAME record set for an Amazon RDS endpoint. The CNAME record set was created in a private hosted zone in the management account.

The deployment failed to start and the Solutions Architect has discovered that the CNAME record is not resolvable on the application EC2 instance despite being correctly created in Route 53.

Which combination of steps should the Solutions Architect take to resolve this issue? (Select TWO.)

  1. Create a private hosted zone for the record set in the production account. Configure Route 53 replication between AWS accounts.
  2. Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account.
  3. Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
  4. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
  5. Hardcode the DNS name and IP address of the RDS database instance into the /etc/resolv.conf file on the application server.
A
  1. Create a private hosted zone for the record set in the production account. Configure Route 53 replication between AWS accounts.
  2. Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account.
  3. Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
  4. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
  5. Hardcode the DNS name and IP address of the RDS database instance into the /etc/resolv.conf file on the application server.
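The authorize-and-associate workflow described in the options spans both accounts. A rough boto3 sketch is below, assuming hypothetical named CLI profiles (management and production) and placeholder hosted zone and VPC values.

    import boto3

    ZONE_ID = "Z1EXAMPLE"        # private hosted zone in the management account (placeholder)
    VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"}  # production VPC (placeholder)

    # Step 1 - in the MANAGEMENT account: authorize the production VPC.
    mgmt_r53 = boto3.Session(profile_name="management").client("route53")
    mgmt_r53.create_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)

    # Step 2 - in the PRODUCTION account: associate the VPC with the hosted zone.
    prod_r53 = boto3.Session(profile_name="production").client("route53")
    prod_r53.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=VPC)

    # Step 3 - back in the MANAGEMENT account: remove the authorization once associated.
    mgmt_r53.delete_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)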
6
Q

6. A new AWS Lambda function has been created to replicate objects that are received in an Amazon S3 bucket to several other S3 buckets in various AWS accounts. The Lambda function is triggered when an object creation event occurs in the main S3 bucket. A Solutions Architect is concerned that the function may impact other critical functions due to Lambda’s regional concurrency limit.

How can the solutions architect ensure the new Lambda function will not impact other critical Lambda functions?

  1. Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
  2. Configure Amazon S3 event notifications to publish events to an Amazon SQS queue in a different account. Create the Lambda function in the same account as the SQS queue and trigger the function when messages are published to the queue.
  3. Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
  4. Modify the execution timeout for the Lambda function to the maximum allowable value. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
A
  1. Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
  2. Configure Amazon S3 event notifications to publish events to an Amazon SQS queue in a different account. Create the Lambda function in the same account as the SQS queue and trigger the function when messages are published to the queue.
  3. Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric. (Correct)
  4. Modify the execution timeout for the Lambda function to the maximum allowable value. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
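Reserved concurrency, used in the option marked correct, carves a dedicated slice out of the Regional concurrency pool and caps the function at it, which is what protects the other critical functions. A minimal boto3 sketch with a hypothetical function name and limit:

    import boto3

    lambda_client = boto3.client("lambda")

    # Reserve (and cap) 100 concurrent executions for the replication function,
    # leaving the remainder of the Regional limit for other critical functions.
    lambda_client.put_function_concurrency(
        FunctionName="s3-object-replicator",   # placeholder function name
        ReservedConcurrentExecutions=100,
    )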
7
Q

A company has a mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application is write intensive and costs have recently increased significantly. The biggest increase in cost has been for the AWS Lambda functions. Application utilization is unpredictable but has been increasing steadily each month.

A Solutions Architect has noticed that the Lambda function execution time averages over 4 minutes. This is due to wait time for a high-latency network call to an on-premises MySQL database. A VPN is used to connect to the VPC.

How can the Solutions Architect reduce the cost of the current architecture?

  1. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
     - Enable API caching on API Gateway to reduce the number of Lambda function invocations.
     - Enable Auto Scaling in DynamoDB.
  2. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
     - Enable local caching in the mobile application to reduce the Lambda function invocation calls.
     - Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
  3. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
     - Cache the API Gateway results to Amazon CloudFront.
     - Use Amazon EC2 Reserved Instances instead of Lambda.
     - Enable Auto Scaling on EC2 and use Spot Instances during peak times.
     - Enable DynamoDB Auto Scaling to manage target utilization.
  4. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
     - Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
     - Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.
A
  1. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
     - Enable API caching on API Gateway to reduce the number of Lambda function invocations.
     - Enable Auto Scaling in DynamoDB.
  2. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
     - Enable local caching in the mobile application to reduce the Lambda function invocation calls.
     - Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
  3. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
     - Cache the API Gateway results to Amazon CloudFront.
     - Use Amazon EC2 Reserved Instances instead of Lambda.
     - Enable Auto Scaling on EC2 and use Spot Instances during peak times.
     - Enable DynamoDB Auto Scaling to manage target utilization.
  4. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
     - Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
     - Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.
8
Q

A company has deployed an application that uses an Amazon DynamoDB table and the user base has increased significantly. Users have reported poor response times during busy periods but no error pages have been generated. The application uses Amazon DynamoDB in read-only mode. The operations team has determined that the issue relates to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read operations.

A Solutions Architect has been tasked with improving application performance. Which solutions will meet these requirements whilst MINIMIZING changes to the application? (Select TWO.)

  1. Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
  2. Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load.
  3. Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests.
  4. Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions.
  5. Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
A
  1. Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
  2. Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load.
  3. Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests.
  4. Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions.
  5. Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
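The DynamoDB Auto Scaling option relies on Application Auto Scaling target tracking under the hood. A hedged boto3 sketch for read capacity follows, assuming provisioned capacity mode and placeholder table name and limits.

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the table's read capacity as a scalable target with upper/lower limits.
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/UserSessions",                     # placeholder table
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=50,
        MaxCapacity=2000,
    )

    # Target-tracking policy: keep consumed reads at ~70% of provisioned capacity.
    autoscaling.put_scaling_policy(
        PolicyName="reads-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/UserSessions",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )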
9
Q

A company requires that only the master account in AWS Organizations is able to purchase Amazon EC2 Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved Instances.

Which solution will meet these requirements?

  1. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization.
  2. Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.
  3. Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.
  4. Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.
A
  1. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization. (Correct)
  2. Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.
  3. Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.
  4. Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.
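The deny SCP at the organization root, as in the option marked correct, applies to every current and future member account while the management (master) account remains unaffected, because SCPs do not apply to it. A sketch of the policy and its attachment follows, with placeholder names and IDs.

    import json
    import boto3

    org = boto3.client("organizations")

    # Deny Reserved Instance purchases everywhere the SCP applies.
    scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyRIPurchases",
                "Effect": "Deny",
                "Action": "ec2:PurchaseReservedInstancesOffering",
                "Resource": "*",
            }
        ],
    }

    policy = org.create_policy(
        Name="deny-ri-purchases",                                   # placeholder name
        Description="Block Reserved Instance purchases in member accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    # Attach at the organization root (r-abcd is a placeholder root ID).
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="r-abcd")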
10
Q

A company has deployed a SAML 2.0 federated identity solution with their on-premises identity provider (IdP) to authenticate users’ access to the AWS environment. A Solutions Architect ran authentication tests through the federated identity web portal and access to the AWS environment was granted. When a test user attempts to authenticate through the federated identity web portal, they are not able to access the AWS environment.

Which items should the solutions architect check to ensure identity federation is properly configured? (Select THREE.)

  1. The IAM users’ permissions policy has the sts:AssumeRoleWithSAML API action allowed.
  2. The AWS STS service has the on-premises IdP configured as an event source for authentication requests.
  3. The IAM users are providing the time-based one-time password (TOTP) codes required for authenticated access.
  4. The IAM roles created for the federated users or federated groups have a trust policy that sets the SAML provider as the principal.
  5. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
  6. The company’s IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
A
  1. The IAM users’ permissions policy has the sts:AssumeRoleWithSAML API action allowed.
  2. The AWS STS service has the on-premises IdP configured as an event source for authentication requests.
  3. The IAM users are providing the time-based one-time password (TOTP) codes required for authenticated access.
  4. The IAM roles created for the federated users or federated groups have a trust policy that sets the SAML provider as the principal.
  5. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
  6. The company’s IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
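The AssumeRoleWithSAML call referenced in option 5 is made by the federation portal after the IdP returns a SAML response. A hedged sketch follows, with placeholder ARNs and a stand-in variable for the base64-encoded assertion.

    import boto3

    sts = boto3.client("sts")

    saml_assertion = "..."  # base64-encoded SAML response returned by the IdP (placeholder)

    # Exchange the SAML assertion for temporary credentials for the mapped IAM role.
    response = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::111122223333:role/FederatedAdmins",        # placeholder
        PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpIdP",  # placeholder
        SAMLAssertion=saml_assertion,
        DurationSeconds=3600,
    )

    creds = response["Credentials"]
    print(creds["AccessKeyId"], creds["Expiration"])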
11
Q

A company is migrating its on-premises systems to AWS. The systems consist of a combination of Windows and Linux virtual machines and physical servers. The company wants to be able to identify dependencies between on-premises systems and group systems together into applications to build migration plans. The company also needs to understand the performance requirements for systems so they can be right-sized.

How can these requirements be met?

  1. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Connector to collect data for one week.
  2. Extract system information from an on-premises configuration management database (CMDB). Import the data directly into the Application Discovery Service.
  3. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
  4. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
A
  1. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Connector to collect data for one week.
  2. Extract system information from an on-premises configuration management database (CMDB). Import the data directly into the Application Discovery Service.
  3. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
  4. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
12
Q

A Solutions Architect is developing a mechanism to gain security approval for Amazon EC2 images (AMIs) so that they can be used by developers. The AMIs must go through an automated assessment process (CVE assessment) and be marked as approved before developers can use them. The approved images must be scanned every 30 days to ensure compliance.

Which combination of steps should the Solutions Architect take to meet these requirements while following best practices? (Select TWO.)

  1. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances and use AWS Systems Manager Automation documents for remediation.
  2. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the approved AMIs.
  3. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
  4. Use Amazon Inspector to mount the CVE assessment package on the EC2 instances launched from the approved AMIs.
  5. Use Amazon GuardDuty to run the CVE assessment package on the EC2 instances launched from the approved AMIs.
A
  1. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances and use AWS Systems Manager Automation documents for remediation.
  2. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the approved AMIs.
  3. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
  4. Use Amazon Inspector to mount the CVE assessment package on the EC2 instances launched from the approved AMIs.
  5. Use Amazon GuardDuty to run the CVE assessment package on the EC2 instances launched from the approved AMIs.
13
Q

A company is designing an application that will require cross-Region disaster recovery with an RTO of less than 5 minutes and an RPO of less than 1 minute. The application tier DR solution has already been designed and a Solutions Architect must design the data recovery solution for the MySQL database tier.

How should the database tier be configured to meet the data recovery requirements?

  1. Use an Amazon RDS for MySQL instance with a Multi-AZ deployment.
  2. Create an Amazon RDS instance in the active Region and use a MySQL standby database on an Amazon EC2 instance in the failover Region.
  3. Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
  4. Use an Amazon RDS for MySQL instance with a cross-Region read replica in the failover Region.
A
  1. Use an Amazon RDS for MySQL instance with a Multi-AZ deployment.
  2. Create an Amazon RDS instance in the active Region and use a MySQL standby database on an Amazon EC2 instance in the failover Region.
  3. Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
  4. Use an Amazon RDS for MySQL instance with a cross-Region read replica in the failover Region.
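The Aurora global database option works by wrapping the primary cluster in a global cluster and adding a secondary cluster in the failover Region; cross-Region replication lag is typically under a second. A hedged boto3 sketch with placeholder identifiers follows (a DB instance and a matching engine version would also be needed in the secondary cluster).

    import boto3

    # Primary Region: wrap the existing Aurora MySQL cluster in a global cluster.
    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_primary.create_global_cluster(
        GlobalClusterIdentifier="orders-global",   # placeholder
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:orders",
    )

    # Secondary (failover) Region: add a read-only secondary cluster to the global cluster.
    rds_secondary = boto3.client("rds", region_name="us-west-2")
    rds_secondary.create_db_cluster(
        DBClusterIdentifier="orders-secondary",    # placeholder
        Engine="aurora-mysql",
        GlobalClusterIdentifier="orders-global",
    )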
14
Q

A company runs hundreds of applications across several data centers and office locations. The applications include Windows and Linux operating systems, physical installations as well as virtualized servers, and MySQL and Oracle databases. There is no central configuration management database (CMDB) and existing documentation is incomplete and outdated. A Solutions Architect needs to understand the current environment and estimate the cloud resource costs after the migration.

Which tools or services should the Solutions Architect use to plan the cloud migration? (Select THREE.)

  1. AWS Cloud Adoption Readiness Tool (CART)
  2. AWS Migration Hub
  3. AWS Application Discovery Service
  4. AWS Config
  5. Amazon CloudWatch Logs
  6. AWS Server Migration Service
A
  1. AWS Cloud Adoption Readiness Tool (CART)
  2. AWS Migration Hub
  3. AWS Application Discovery Service
  4. AWS Config
  5. Amazon CloudWatch Logs
  6. AWS Server Migration Service
15
Q

An eCommerce company is running a promotional campaign and expects a large volume of user sign-ups on a web page that collects user information and preferences. The website runs on Amazon EC2 instances and uses an Amazon RDS for PostgreSQL DB instance. The volume of traffic is expected to be high and may be unpredictable with several spikes in activity. The traffic will result in a large number of database writes.

A solutions architect needs to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to the database.

Which solution meets these requirements?

  1. Create an Amazon ElastiCache for Memcached cluster in front of the existing database instance to increase write performance.
  2. Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
  3. Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
  4. Use scheduled scaling to scale up the existing DB instance immediately before the event and then automatically scale down afterwards.
A
  1. Create an Amazon ElastiCache for Memcached cluster in front of the existing database instance to increase write performance.
  2. Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
  3. Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
  4. Use scheduled scaling to scale up the existing DB instance immediately before the event and then automatically scale down afterwards.
16
Q

A financial services company receives a data feed from a credit card service provider. The feed consists of approximately 2,500 records that are sent every 10 minutes in plaintext and delivered over HTTPS to an encrypted S3 bucket. The data includes credit card data that must be automatically masked before sending the data to another S3 bucket for additional internal processing. There is also a requirement to remove and merge specific fields, and then transform the record into JSON format.

Which solutions will meet these requirements?

  1. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
  2. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
  3. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
  4. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate task.
A
  1. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
  2. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing. (Correct)
  3. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
  4. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate task.
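The Glue-based option marked correct typically pairs the ETL job with a small Lambda function that starts the job when the feed file lands in S3. A hedged sketch follows, assuming a hypothetical Glue job named mask-and-transform already exists.

    import boto3

    glue = boto3.client("glue")

    def lambda_handler(event, context):
        """Start the Glue ETL job for each newly delivered feed file."""
        for record in event["Records"]:                 # S3 event notification records
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            glue.start_job_run(
                JobName="mask-and-transform",           # placeholder Glue job name
                Arguments={                             # passed to the job as job parameters
                    "--source_bucket": bucket,
                    "--source_key": key,
                },
            )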
17
Q

A solution is required for updating user metadata and will be initiated by a fleet of front-end web servers. The solution must be capable of scaling rapidly from hundreds to tens of thousands of jobs in less than a minute. The solution must be asynchronous and minimize costs.

Which solution should a Solutions Architect use to meet these requirements?

  1. Create an AWS CloudFormation stack that is updated by an AWS Lambda function. Configure the Lambda function to update the metadata.
  2. Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update the web application to initiate Step Functions for every job.
  3. Create an Amazon EC2 Auto Scaling group of EC2 instances that pull messages from an Amazon SQS queue and process the user metadata updates. Configure the web application to send jobs to the queue.
  4. Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
A
  1. Create an AWS CloudFormation stack that is updated by an AWS Lambda function. Configure the Lambda function to update the metadata.
  2. Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update the web application to initiate Step Functions for every job.
  3. Create an Amazon EC2 Auto Scaling group of EC2 instances that pull messages from an Amazon SQS queue and process the user metadata updates. Configure the web application to send jobs to the queue.
  4. Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
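When SQS is configured as a Lambda event source, the Lambda service polls the queue and scales the function automatically, which is what makes the queue-plus-Lambda option asynchronous and able to absorb sudden spikes without idle cost. A minimal boto3 sketch with placeholder names and ARNs:

    import boto3

    lambda_client = boto3.client("lambda")

    # Wire a hypothetical jobs queue to a hypothetical metadata-update function.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:111122223333:user-metadata-jobs",
        FunctionName="update-user-metadata",
        BatchSize=10,            # up to 10 messages per invocation
        Enabled=True,
    )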
18
Q

A company uses AWS Organizations. The company recently acquired a new business unit and invited the new unit’s existing account to the company’s organization. The organization uses a deny list SCP in the root of the organization and all accounts are members of a single OU named Production.

The administrators of the new business unit discovered that they are unable to access AWS Database Migration Service (DMS) to complete an in-progress migration.

Which option will temporarily allow administrators to access AWS DMS and complete the migration project?

  1. Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization’s deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
  2. Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization root that allows AWS DMS actions for principals only in the new account.
  3. Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the new account to the Production OU when the migration project is complete.
  4. Remove the organization’s root SCPs that limit access to AWS DMS. Create an SCP that allows AWS DMS actions and apply the SCP to the Production OU.
A
  1. Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization’s deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
  2. Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization root that allows AWS DMS actions for principals only in the new account.
  3. Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the new account to the Production OU when the migration project is complete.
  4. Remove the organization’s root SCPs that limit access to AWS DMS. Create an SCP that allows AWS DMS actions and apply the SCP to the Production OU.
19
Q

A company is testing an application that collects data from sensors fitted to vehicles. The application collects usage statistics data every 4 minutes. The data is sent to Amazon API Gateway, it is then processed by an AWS Lambda function and the results are stored in an Amazon DynamoDB table.

As the sensors have been fitted to more vehicles, and as more metrics have been configured for collection, the Lambda function execution time has increased from a few seconds to over 2 minutes. There are also many TooManyRequestsException errors being generated by Lambda.

Which combination of changes will resolve these issues? (Select TWO.)

  1. Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.
  2. Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
  3. Increase the CPU units assigned to the Lambda functions.
  4. Use Amazon EC2 instead of Lambda to process the data.
  5. Increase the memory available to the Lambda functions.
A
  1. Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.
  2. Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
  3. Increase the CPU units assigned to the Lambda functions.
  4. Use Amazon EC2 instead of Lambda to process the data.
  5. Increase the memory available to the Lambda functions.
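Streaming the sensor data into Kinesis and letting Lambda consume it in batches reduces the number of invocations and the per-record overhead. A rough sketch of a batched event source mapping follows, with placeholder stream and function names.

    import boto3

    lambda_client = boto3.client("lambda")

    # Process the sensor stream in large batches instead of one record per request.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/vehicle-telemetry",
        FunctionName="process-telemetry-batch",
        StartingPosition="LATEST",
        BatchSize=500,                          # records per invocation
        MaximumBatchingWindowInSeconds=30,      # or wait up to 30s to fill a batch
    )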
20
Q

A Solutions Architect is designing a web application that will serve static content in an Amazon S3 bucket and dynamic content hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application will use Amazon CloudFront and the solution should require that the content is available through CloudFront only.

Which combination of steps should the Solutions Architect take to restrict direct content access to CloudFront? (Select THREE.)

  1. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only.
  2. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution.
  3. Configure CloudFront to add a custom header to requests that it sends to the origin.
  4. Configure the ALB to add a custom header to HTTP requests that are sent to the EC2 instances.
  5. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
  6. Configure an S3 bucket policy to allow access from the CloudFront IP addresses only.
A
  1. Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only. (Correct)
  2. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution.
  3. Configure CloudFront to add a custom header to requests that it sends to the origin.
  4. Configure the ALB to add a custom header to HTTP requests that are sent to the EC2 instances.
  5. Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
  6. Configure an S3 bucket policy to allow access from the CloudFront IP addresses only.
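For the static content, the OAI approach marked correct scopes the bucket policy to the distribution’s origin access identity so that only CloudFront can fetch objects. A sketch of such a policy with placeholder bucket and OAI values follows (newer designs use Origin Access Control, but the question describes an OAI).

    import json
    import boto3

    s3 = boto3.client("s3")

    # Allow only the CloudFront OAI (placeholder ID E2EXAMPLE) to read objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCloudFrontOAIReadOnly",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::static-content-bucket/*",   # placeholder bucket
            }
        ],
    }

    s3.put_bucket_policy(Bucket="static-content-bucket", Policy=json.dumps(policy))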
21
Q

A company runs a data processing application on-premises and plans to move it to the AWS Cloud. Files are uploaded by users to a web application which then stores the files on an NFS-based storage system and places a message on a queue. The files are then processed from the queue and the results are returned to the user (and stored in long-term storage). This process can take up to 30 minutes. The processing times vary significantly and can be much higher during business hours.

What is the MOST cost-effective migration recommendation?

  1. Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in an Amazon S3 bucket.
  2. Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS.
  3. Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
  4. Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Launch an Amazon EC2 instance from a preconfigured AMI to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS. Terminate the EC2 instance after the task is complete.
A
  1. Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in an Amazon S3 bucket.
  2. Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS.
  3. Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket. (Correct)
  4. Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Launch an Amazon EC2 instance from a preconfigured AMI to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS. Terminate the EC2 instance after the task is complete.
22
Q

A new application that provides fitness and training advice has become extremely popular with thousands of new users from around the world. The web application is hosted on a fleet of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The content consists of static media files and different resources must be loaded depending on the client operating system.

Users have reported increasing latency for loading web pages and Amazon CloudWatch is showing high utilization of the EC2 instances.

Which set of actions should a Solutions Architect take to improve response times?

  1. Create a separate ALB for each client operating system. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
  2. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
  3. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
  4. Create separate Auto Scaling groups based on client operating systems. Switch to a Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
A
  1. Create a separate ALB for each client operating system. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
  2. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
  3. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
  4. Create separate Auto Scaling groups based on client operating systems. Switch to a Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
23
Q

A company includes several business units that each use a separate AWS account and a parent company AWS account. The company requires a single AWS bill across all AWS accounts with costs broken out for each business unit. The company also requires that services and features be restricted in the business unit accounts and this must be governed centrally.

Which combination of steps should a Solutions Architect take to meet these requirements? (Select TWO.)

  1. Use permissions boundaries applied to each business unit’s AWS account to define the maximum permissions available for services and features.
  2. Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization.
  3. Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.
  4. Enable consolidated billing in the parent account’s billing console and link the business unit AWS accounts.
  5. Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
A
  1. Use permissions boundaries applied to each business unit’s AWS account to define the maximum permissions available for services and features.
  2. Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization.
  3. Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.
  4. Enable consolidated billing in the parent account’s billing console and link the business unit AWS accounts.
  5. Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
24
Q

A company is migrating an order processing application to the AWS Cloud. The usage patterns vary significantly but the application must be available at all times. Orders must be processed immediately and in the order that they are received. Which actions should a Solutions Architect take to meet these requirements?

  1. Use Amazon SQS with FIFO to queue messages in the correct order. Use Spot Instances in multiple Availability Zones for processing.
  2. Use Amazon SNS with FIFO to send orders in the correct order. Use Spot Instances in multiple Availability Zones for processing.
  3. Use Amazon SQS with FIFO to queue messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
  4. Use Amazon SNS with FIFO to send orders in the correct order. Use a single large Reserved Instance for processing.
A
  1. Use Amazon SQS with FIFO to queue messages in the correct order. Use Spot Instances in multiple Availability Zones for processing.
  2. Use Amazon SNS with FIFO to send orders in the correct order. Use Spot Instances in multiple Availability Zones for processing.
  3. Use Amazon SQS with FIFO to queue messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
  4. Use Amazon SNS with FIFO to send orders in the correct order. Use a single large Reserved Instance for processing.
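SQS FIFO queues preserve ordering within a message group and deduplicate messages, which is the property the strict-ordering options depend on. A minimal boto3 sketch with a placeholder queue name:

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queue names must end in ".fifo"; ordering is guaranteed per MessageGroupId.
    queue = sqs.create_queue(
        QueueName="order-processing.fifo",         # placeholder name
        Attributes={
            "FifoQueue": "true",
            "ContentBasedDeduplication": "true",   # dedupe on a hash of the message body
        },
    )

    # Producers send orders with the same group ID to guarantee strict ordering.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody='{"orderId": "12345"}',
        MessageGroupId="orders",
    )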
25
Q

An application consists of three tiers within a single Region. A Solutions Architect is designing a disaster recovery strategy that includes an RTO of 30 minutes and an RPO of 5 minutes for the data tier. Application tiers use Amazon EC2 instances and are stateless. The data tier consists of a 30TB Amazon Aurora database.

Which combination of steps satisfies the RTO and RPO requirements while optimizing costs? (Select TWO.)

  1. Create a cross-Region Aurora Replica of the database
  2. Deploy a hot standby of the application tiers to another Region
  3. Use AWS DMS to replicate the Aurora DB to an RDS database in another Region.
  4. Create snapshots of the Aurora database every 5 minutes.
  5. Create daily snapshots of the EC2 instances and replicate them to another Region.
A
  1. Create a cross-Region Aurora Replica of the database
  2. Deploy a hot standby of the application tiers to another Region
  3. Use AWS DMS to replicate the Aurora DB to an RDS database in another Region.
  4. Create snapshots of the Aurora database every 5 minutes.
  5. Create daily snapshots of the EC2 instances and replicate them to another Region.
26
Q

A company is running a custom Java application on-premises and plans to migrate the application to the AWS Cloud. The application uses a MySQL database and the application servers maintain users’ sessions locally. Which combination of architecture changes will be required to create a highly available solution on AWS? (Select THREE.)

  1. Put the application instances in an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
  2. Move the Java content to an Amazon S3 bucket configured for static website hosting. Configure cross-Region replication for the S3 bucket contents.
  3. Migrate the database to Amazon RDS for MySQL. Configure the RDS instance to use a Multi-AZ deployment.
  4. Configure the application to store the user’s session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
  5. Configure the application to run in multiple Regions. Use an Application Load Balancer to distribute the load between application instances.
  6. Migrate the database to Amazon EC2 instances in multiple Availability Zones. Configure Multi-AZ to synchronize the changes.
A
  1. Put the application instances in an Amazon EC2 Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
  2. Move the Java content to an Amazon S3 bucket configured for static website hosting. Configure cross-Region replication for the S3 bucket contents.
  3. Migrate the database to Amazon RDS for MySQL. Configure the RDS instance to use a Multi-AZ deployment.
  4. Configure the application to store the user’s session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
  5. Configure the application to run in multiple Regions. Use an Application Load Balancer to distribute the load between application instances.
  6. Migrate the database to Amazon EC2 instances in multiple Availability Zones. Configure Multi-AZ to synchronize the changes.
27
Q

A company has an NFS file server on-premises with 50 TB of data that is being migrated to Amazon S3. The data is made up of many millions of small files and a Snowball Edge device is being used for the migration. A shell script is being used to copy data using the file interface of the Snowball Edge device. Data transfer times are very slow and the Solutions Architect suspects this may be related to the overhead of encrypting all the small files and copying them over the network.

What change should be made to improve data transfer times?

  1. Modify the shell script to ensure that individual files are being copied rather than directories.
  2. Connect directly to the USB interface on the Snowball Edge device and copy the files locally.
  3. Cluster two Snowball Edge devices together to increase the throughput of the devices.
  4. Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client.
A
  1. Modify the shell script to ensure that individual files are being copied rather than directories.
  2. Connect directly to the USB interface on the Snowball Edge device and copy the files locally.
  3. Cluster two Snowball Edge devices together to increase the throughput of the devices.
  4. Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client. (Correct)
28
Q

A Solutions Architect needs to design the architecture for an application that requires high availability within and across AWS Regions. The design must support failover to the second Region within 1 minute and must minimize the impact on the user experience. The application will include three tiers, the web tier, application tier and NoSQL data tier.

Which combination of steps will meet these requirements? (Select THREE.)

  1. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
  2. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
  3. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
  4. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.
  5. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
  6. Use an Amazon Route 53 weighted routing policy set to 100/0 across the two selected Regions. Set Time to Live (TTL) to 30 minutes.
A
  1. Use Amazon DynamoDB with a global table across both Regions so reads and writes can occur in either location.
  2. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use zonal Reserved Instances for the minimum number of servers and On-Demand Instances for any additional resources.
  3. Use an Amazon Route 53 failover routing policy for failover from the primary Region to the disaster recovery Region. Set Time to Live (TTL) to 30 seconds.
  4. Use an Amazon Aurora global database across both Regions so reads and writes can occur in either location.
  5. Run the web and application tiers in both Regions in an active/active configuration. Use Auto Scaling groups for the web and application layers across multiple Availability Zones in the Regions. Use Spot Instances for the required resources.
  6. Use an Amazon Route 53 weighted routing policy set to 100% across the two selected Regions. Set Time to Live (TTL) to 30 minutes.
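For reference, a minimal boto3 sketch of the failover routing idea mentioned in option 3, using hypothetical hosted zone, health check, and record values; a real multi-Region design would typically alias these records to the load balancers rather than use plain A records.

    import boto3

    route53 = boto3.client("route53")

    # Hypothetical hosted zone ID, health check ID and endpoint addresses.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "A",
                        "SetIdentifier": "primary-region",
                        "Failover": "PRIMARY",
                        "TTL": 30,  # short TTL so clients re-resolve quickly after failover
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "A",
                        "SetIdentifier": "dr-region",
                        "Failover": "SECONDARY",
                        "TTL": 30,
                        "ResourceRecords": [{"Value": "198.51.100.10"}],
                    },
                },
            ]
        },
    )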
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

A company is using AWS CloudFormation templates for infrastructure provisioning. The templates are hosted in the company’s private GitHub repository. The company has experienced several issues with updates to the templates that have caused errors when executing the updates and creating the environment. A Solutions Architect must resolve these issues and implement automated testing of the CloudFormation template updates.

How can the Solutions Architect accomplish these requirements?

  1. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeDeploy to create and execute a change set. Configure CodeDeploy to test the environment using testing scripts run by AWS CodeBuild.
  2. Use AWS CodePipeline to create and execute a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeDeploy. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
  3. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeBuild to create and execute a change set from the templates in GitHub. Configure CodeBuild to test the deployment with testing scripts.
  4. Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
A
  1. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeDeploy to create and execute a change set. Configure CodeDeploy to test the environment using testing scripts run by AWS CodeBuild.
  2. Use AWS CodePipeline to create and execute a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeDeploy. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
  3. Use AWS Lambda to synchronize the contents of the GitHub repository to AWS CodeCommit. Use AWS CodeBuild to create and execute a change set from the templates in GitHub. Configure CodeBuild to test the deployment with testing scripts.
  4. Use AWS CodePipeline to create a change set when updates are made to the CloudFormation templates in GitHub. Include a CodePipeline action to test the deployment with testing scripts run using AWS CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production.
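For context, a minimal boto3 sketch of the CloudFormation change set workflow the options refer to, using a hypothetical stack name and template location; in a pipeline these calls are normally driven by CodePipeline's CloudFormation action rather than hand-written code.

    import boto3

    cfn = boto3.client("cloudformation")

    STACK = "app-stack"                       # hypothetical stack name
    CHANGE_SET = "pipeline-proposed-changes"  # hypothetical change set name

    # Create a change set from the updated template instead of updating the stack directly.
    cfn.create_change_set(
        StackName=STACK,
        ChangeSetName=CHANGE_SET,
        TemplateURL="https://s3.amazonaws.com/templates-bucket/app-template.yaml",
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Wait for creation, review the proposed changes, then execute the change set.
    cfn.get_waiter("change_set_create_complete").wait(StackName=STACK, ChangeSetName=CHANGE_SET)
    for change in cfn.describe_change_set(StackName=STACK, ChangeSetName=CHANGE_SET)["Changes"]:
        print(change["ResourceChange"]["Action"], change["ResourceChange"]["LogicalResourceId"])
    cfn.execute_change_set(StackName=STACK, ChangeSetName=CHANGE_SET)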
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

A Solutions Architect used the AWS Application Discovery Service to gather information about some on-premises database servers. The tool discovered an Oracle data warehouse and several MySQL databases. The company plans to migrate to AWS and the Solutions Architect must determine the best migration pattern for each database.

Which combination of migration patterns will reduce licensing costs and operational overhead? (Select TWO.)

  1. Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS.
  2. Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
  3. Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
  4. Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball.
  5. Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball.
A
  1. Migrate the Oracle data warehouse to an Amazon ElastiCache for Redis cluster using AWS DMS.
  2. Migrate the MySQL databases to Amazon RDS for MySQL using AWS DMS.
  3. Migrate the Oracle data warehouse to Amazon Redshift using AWS SCT and AWS DMS.
  4. Lift and shift the Oracle data warehouse to Amazon EC2 using AWS Snowball.
  5. Lift and shift the MySQL databases to Amazon EC2 using AWS Snowball.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

A developer is attempting to access an Amazon S3 bucket in a member account in AWS Organizations. The developer is logged in to the account with user credentials and has received an access denied error with no bucket listed. The developer should have read-only access to all buckets in the account.

A Solutions Architect has reviewed the permissions and found that the developer’s IAM user has been granted read-only access to all S3 buckets in the account.

Which additional steps should the Solutions Architect take to troubleshoot the issue? (Select TWO.)

  1. Check the ACLs for all S3 buckets.
  2. Check the bucket policies for all S3 buckets.
  3. Check for the permissions boundaries set for the IAM user.
  4. Check if an appropriate IAM role is attached to the IAM user.
  5. Check the SCPs set at the organizational units (OUs).
A
  1. Check the ACLs for all S3 buckets.
  2. Check the bucket policies for all S3 buckets.
  3. Check for the permissions boundaries set for the IAM user.
  4. Check if an appropriate IAM role is attached to the IAM user.
  5. Check the SCPs set at the organizational units (OUs).
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

A company is moving their IT infrastructure to the AWS Cloud and will have several Amazon VPCs across multiple Regions. The company requires centralized and controlled egress-only internet access. The solution must be highly available and horizontally scalable. The company is expecting to grow the number of VPCs to more than fifty.

A Solutions Architect is designing the network for the new cloud deployment. Which design pattern will meet the stated requirements?

  1. Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach the transit gateway.
  2. Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances.
  3. Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
  4. Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC.
A
  1. Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and attach the transit gateway.
  2. Attach each VPC to a centralized transit VPC with a VPN connection to each standalone VPC. Outbound internet traffic will be controlled by firewall appliances.
  3. Attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP.
  4. Attach each VPC to a shared centralized VPC. Configure VPC peering between each VPC and the centralized VPC. Configure a NAT gateway in two AZs within the centralized VPC.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

A company provides a service that allows users to upload high-resolution product images using an app on their phones for a price matching service. The service currently uses Amazon S3 in the us-west-1 Region. The company has expanded to Europe and users in European countries are experiencing significant delays when uploading images.

Which combination of changes can a Solutions Architect make to improve the upload times for the images? (Select TWO.)

  1. Redeploy the application to use Amazon S3 multipart upload.
  2. Create an Amazon CloudFront distribution with the S3 bucket as an origin.
  3. Modify the Amazon S3 bucket to use Intelligent Tiering.
  4. Configure the client application to use byte-range fetches.
  5. Configure the S3 bucket to use S3 Transfer Acceleration.
A
  1. Redeploy the application to use Amazon S3 multipart upload.
  2. Create an Amazon CloudFront distribution with the S3 bucket as an origin.
  3. Modify the Amazon S3 bucket to use Intelligent Tiering.
  4. Configure the client application to use byte-range fetches.
  5. Configure the S3 bucket to use S3 Transfer Acceleration.
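As a rough illustration of the upload-side features mentioned in the options, the boto3 sketch below enables S3 Transfer Acceleration on a hypothetical bucket and then uploads a file through the accelerate endpoint with multipart settings; bucket, key, and file names are placeholders.

    import boto3
    from botocore.config import Config
    from boto3.s3.transfer import TransferConfig

    BUCKET = "product-images-bucket"  # hypothetical bucket name

    # One-time change: enable Transfer Acceleration on the bucket.
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket=BUCKET,
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Client that uses the accelerate endpoint, plus multipart settings for large objects.
    s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    transfer_config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=8)

    s3.upload_file(
        Filename="photo.jpg",
        Bucket=BUCKET,
        Key="uploads/photo.jpg",
        Config=transfer_config,
    )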
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

A company plans to build a gaming application in the AWS Cloud that will be used by Internet-based users. The application will run on a single instance and connections from users will be made over the UDP protocol. The company has requested that the service is implemented with a high level of security. A Solutions Architect has been asked to design a solution for the application on AWS.

Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)

  1. Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB's internet-facing fully qualified domain name (FQDN).
  2. Enable AWS Shield Advanced on all public-facing resources.
  3. Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address.
  4. Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer.
  5. Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
  6. Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint.
A
  1. Use an Application Load Balancer (ALB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB's internet-facing fully qualified domain name (FQDN).
  2. Enable AWS Shield Advanced on all public-facing resources.
  3. Use a Network Load Balancer (NLB) in front of the application instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB's Elastic IP address.
  4. Define an AWS WAF rule to explicitly drop non-UDP traffic and associate the rule with the load balancer.
  5. Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
  6. Use AWS Global Accelerator with an Elastic Load Balancer as an endpoint.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

A company has a large photo library stored on Amazon S3. They use AWS Lambda to extract metadata from the files according to various processing rules for different categories of photo. The output is then stored in an Amazon DynamoDB table.

The extraction process is performed whenever customer requests are submitted and can take up to 60 minutes to complete. The company wants to reduce the time taken to extract the metadata and has split the single Lambda function into separate Lambda functions for each category of photo.

Which additional steps should the Solutions Architect take to meet the requirements?

  1. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of files and write each item to the job queue.
  2. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
  3. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of files and executes a metadata extraction workflow for each one.
  4. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
A
  1. Create an AWS Batch compute environment for each Lambda function. Configure an AWS Batch job queue for the compute environment. Create a Lambda function to retrieve a list of files and write each item to the job queue.
  2. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Subscribe the metadata extraction Lambda functions to the SQS queue with a large batch size.
  3. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create another Step Functions workflow that retrieves a list of files and executes a metadata extraction workflow for each one.
  4. Create an AWS Step Functions workflow to run the Lambda functions in parallel. Create a Lambda function to retrieve a list of files and write each item to an Amazon SQS queue. Configure the SQS queue as an input to the Step Functions workflow.
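A simplified boto3 sketch of the queue-based fan-out idea referenced in the options: one function enumerates objects in a hypothetical photo bucket and enqueues each key, and an event source mapping subscribes an extraction function to the queue with a batch size. Bucket, queue, and function names are placeholders.

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")
    lambda_client = boto3.client("lambda")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/photo-keys"  # hypothetical

    def enqueue_photo_keys(event, context):
        """List photos in the library and write one SQS message per object key."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket="photo-library-bucket"):
            for obj in page.get("Contents", []):
                sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=obj["Key"])

    # Subscribe one of the per-category extraction functions to the queue in batches.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:111122223333:photo-keys",
        FunctionName="extract-metadata-landscape",
        BatchSize=10,
    )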
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

A company has deployed two Microsoft Active Directory Domain Controllers into an Amazon VPC with a default configuration. The DHCP options set associated with the VPC has been configured to assign the IP addresses of the Domain Controllers as DNS servers. A VPC interface endpoint has been created but EC2 instances within the VPC are unable to resolve the private endpoint addresses.

Which strategies could a Solutions Architect use to resolve the issue? (Select TWO.)

  1. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
  2. Define an inbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
  3. Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
  4. Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
  5. Configure the DNS service on the EC2 instances in the VPC to use the VPC resolver server as the secondary DNS server.
A
  1. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
  2. Define an inbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
  3. Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
  4. Define an outbound Amazon Route 53 Resolver. Set a conditional forwarding rule for the Active Directory domain to the Active Directory servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
  5. Configure the DNS service on the EC2 instances in the VPC to use the VPC resolver server as the secondary DNS server.
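To make the Route 53 Resolver mechanics in the options concrete, here is a minimal boto3 sketch that creates an outbound resolver endpoint and a conditional forwarding rule for a hypothetical Active Directory domain, then associates the rule with the VPC; subnet, security group, and VPC IDs and DNS server addresses are placeholders.

    import boto3

    resolver = boto3.client("route53resolver")

    # Outbound endpoint in two subnets so the VPC resolver can forward queries out.
    endpoint = resolver.create_resolver_endpoint(
        CreatorRequestId="ad-outbound-endpoint-001",
        Name="ad-outbound",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        Direction="OUTBOUND",
        IpAddresses=[{"SubnetId": "subnet-0aaa111bbb222ccc3"}, {"SubnetId": "subnet-0ddd444eee555fff6"}],
    )

    # Conditional forwarding rule: send queries for the AD domain to the Domain Controllers.
    rule = resolver.create_resolver_rule(
        CreatorRequestId="ad-forward-rule-001",
        Name="forward-corp-domain",
        RuleType="FORWARD",
        DomainName="corp.example.com",
        TargetIps=[{"Ip": "10.0.1.10", "Port": 53}, {"Ip": "10.0.2.10", "Port": 53}],
        ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    )

    resolver.associate_resolver_rule(
        ResolverRuleId=rule["ResolverRule"]["Id"],
        VPCId="vpc-0a1b2c3d4e5f67890",
    )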
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

A company uses Amazon Redshift for analytics. Several teams deploy and manage their own Redshift clusters and management has requested that the costs for these clusters are better managed. The management team has set budgets and once the budgetary thresholds have been reached a notification should be sent to a distribution list for managers. Teams should be able to view their Redshift cluster’s expenses to date. A Solutions Architect needs to create a solution that ensures the policy is centrally enforced in a multi-account environment.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

  1. Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.
  2. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
  3. Install the unified CloudWatch agent on the Redshift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.
  4. Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
  5. Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
A
  1. Create an AWS CloudTrail trail that tracks data events. Configure Amazon CloudWatch to monitor the trail and trigger an alarm when billing metrics exceed a certain threshold.
  2. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
  3. Install the unified CloudWatch agent on the Redshift cluster hosts. Track the billing metric data in CloudWatch and trigger an alarm when a threshold is reached.
  4. Create an AWS Service Catalog portfolio for each team. Add each team’s Amazon Redshift cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product.
  5. Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property.
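For reference, a minimal boto3 sketch of a budget with the NotificationsWithSubscribers property mentioned in option 5, using a hypothetical account ID, budget amount, and distribution list address; the same structure can also be declared as an AWS::Budgets::Budget resource in a CloudFormation template.

    import boto3

    budgets = boto3.client("budgets")

    budgets.create_budget(
        AccountId="111122223333",  # hypothetical account ID
        Budget={
            "BudgetName": "redshift-team-a",
            "BudgetLimit": {"Amount": "2000", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,          # percent of the budget limit
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": "redshift-managers@example.com"}
                ],
            }
        ],
    )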
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

A company has deployed a new application into an Amazon VPC that does not have Internet access. The company has connected an AWS Direct Connect (DX) private VIF to the VPC and all communications will be over the DX connection. A new requirement states that all data in transit must be encrypted between users and the VPC.

Which strategy should a Solutions Architect use to maintain consistent network performance while meeting this new requirement?

  1. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
  2. Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet.
  3. Create a new Site-to-Site VPN that connects to the VPC over the internet.
  4. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
A
  1. Create a new private virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX private virtual interface.
  2. Create a client VPN endpoint and configure the users’ computers to use an AWS client VPN to connect to the VPC over the Internet.
  3. Create a new Site-to-Site VPN that connects to the VPC over the internet.
  4. Create a new public virtual interface for the existing DX connection, and create a new VPN that connects to the VPC over the DX public virtual interface.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

An application runs on an Amazon EC2 instance with an attached Amazon EBS Provisioned IOPS (PIOPS) volume. The volume is configured at 200-GB in size and has 3,000 IOPS provisioned. The application requires low latency and random access to the data. A Solutions Architect has been asked to consider options for lowering the cost of the storage without impacting performance and durability.

What should the Solutions Architect recommend?

  1. Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system.
  2. Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume.
  3. Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system.
  4. Change the PIOPS volume for a 1-TB Throughput Optimized HDD (st1) volume.
A
  1. Create an Amazon EFS file system with the throughput mode set to Provisioned. Mount the EFS file system to the EC2 operating system.
  2. Change the PIOPS volume for a 1-TB EBS General Purpose SSD (gp2) volume.
  3. Create an Amazon EFS file system with the performance mode set to Max I/O. Mount the EFS file system to the EC2 operating system.
  4. Change the PIOPS volume for a 1-TB Throughput Optimized HDD (st1) volume.
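For context, gp2 volumes deliver a baseline of 3 IOPS per GiB, so a 1-TiB gp2 volume provides roughly 3,000 IOPS, which is the reasoning behind the gp2 option. A minimal boto3 sketch of modifying the volume in place, with a hypothetical volume ID:

    import boto3

    ec2 = boto3.client("ec2")

    VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume ID

    # Change the volume type to gp2 and grow it to 1 TiB (1,024 GiB) without detaching it.
    ec2.modify_volume(VolumeId=VOLUME_ID, VolumeType="gp2", Size=1024)

    # Track progress of the modification (modifying -> optimizing -> completed).
    for mod in ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])["VolumesModifications"]:
        print(mod["ModificationState"], mod.get("Progress"))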
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

A company is deploying a web service that will provide read and write access to structured data. The company expects there to be variable usage patterns with some short but significant spikes. The service must dynamically scale and must be fault tolerant across multiple AWS Regions.

Which actions should a Solutions Architect take to meet these requirements?

  1. Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy.
  2. Store the data in Amazon S3 buckets in two Regions and configure cross Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API.
  3. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
  4. Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy.
A
  1. Store the data in Amazon DocumentDB in two Regions. Use AWS DMS to synchronize data between databases. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a failover routing policy.
  2. Store the data in Amazon S3 buckets in two Regions and configure cross Region replication. Create an Amazon CloudFront distribution that points to multiple origins. Use Amazon API Gateway and AWS Lambda for the web frontend and configure Amazon Route 53 with an alias record pointing to the REST API.
  3. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. Run the web service in both Regions as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record and a latency-based routing policy with health checks to distribute traffic between the two ALBs.
  4. Store the data in Amazon Aurora global databases. Add Auto Scaling replicas to both Regions. Run the web service on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer in each Region. In Amazon Route 53, configure an alias record and a multi-value routing policy.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

A company recently noticed an increase in costs associated with Amazon EC2 instances and Amazon RDS databases. The company needs to be able to track the costs. The company uses AWS Organizations for all of their accounts. AWS CloudFormation is used for deploying infrastructure and all resources are tagged. The management team has requested that cost center numbers and project ID numbers are added to all future EC2 instances and RDS databases.

What is the MOST efficient strategy a Solutions Architect should follow to meet these requirements?

  1. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.
  2. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
  3. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.
  4. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.
A
  1. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate.
  2. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
  3. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to activate. Use permissions boundaries to restrict the creation of resources that do not have the cost center and project ID tags specified.
  4. Use an AWS Config rule to check for untagged resources. Create a centralized AWS Lambda based solution to tag untagged EC2 instances and RDS databases every hour using a cross-account role.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

A company is planning to build a high-performance computing (HPC) solution in the AWS Cloud. The solution will include a 10-node cluster running Linux. High speed and low latency inter-instance connectivity is required to optimize the performance of the cluster.

Which combination of steps will meet these requirements? (Choose two.)

  1. Deploy instances across at least three Availability Zones.
  2. Deploy Amazon EC2 instances in a cluster placement group.
  3. Use Amazon EC2 instances that support burstable performance.
  4. Use Amazon EC2 instance types and AMIs that support EFA.
  5. Deploy Amazon EC2 instances in a partition placement group.
A
  1. Deploy instances across at least three Availability Zones.
  2. Deploy Amazon EC2 instances in a cluster placement group.
  3. Use Amazon EC2 instances that support burstable performance.
  4. Use Amazon EC2 instance types and AMIs that support EFA.
  5. Deploy Amazon EC2 instances in a partition placement group.
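As a rough sketch of the options that mention a cluster placement group and EFA, the boto3 calls below create a cluster placement group and launch instances with an EFA network interface; the AMI, subnet, security group, and instance type are hypothetical (the type must be EFA-capable).

    import boto3

    ec2 = boto3.client("ec2")

    # Cluster strategy packs instances close together for low-latency, high-throughput networking.
    ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical EFA-supported AMI
        InstanceType="c5n.18xlarge",       # example of an EFA-capable instance type
        MinCount=10,
        MaxCount=10,
        Placement={"GroupName": "hpc-cluster-pg"},
        NetworkInterfaces=[
            {
                "DeviceIndex": 0,
                "SubnetId": "subnet-0123456789abcdef0",
                "Groups": ["sg-0123456789abcdef0"],
                "InterfaceType": "efa",    # requests an Elastic Fabric Adapter
            }
        ],
    )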
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

A company runs an application that generates user activity reports and stores them in an Amazon S3 bucket. Users are able to download the reports using the application which generates a signed URL. A user recently reported that the reports of other users can be accessed directly from the S3 bucket. A Solutions Architect reviewed the bucket permissions and discovered that public access is currently enabled.

How can the documents be protected from unauthorized access without modifying the application workflow?

  1. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
  2. Configure server access logging and monitor the log files to check for unauthorized access.
  3. Modify the settings on the S3 bucket to enable default encryption for all objects.
  4. Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket.
A
  1. Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
  2. Configure server access logging and monitor the log files to check for unauthorized access.
  3. Modify the settings on the S3 bucket to enable default encryption for all objects.
  4. Use the Block Public Access feature in Amazon S3 to set the BlockPublicPolicy option to TRUE on the bucket.
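For context, a minimal boto3 sketch of the S3 Block Public Access settings referenced in the options, applied to a hypothetical bucket; pre-signed URLs keep working because they are authorized by the signing IAM principal rather than by public access.

    import boto3

    s3 = boto3.client("s3")

    s3.put_public_access_block(
        Bucket="user-activity-reports",  # hypothetical bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,   # rejects bucket policies that grant public access
            "RestrictPublicBuckets": True,
        },
    )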
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

A company has created a service that they would like a customer to access. The service runs in the company’s AWS account and the customer has a separate AWS account. The company would like to enable the customer to establish least privilege security access using an API or command line tool to the customer account.

What is the MOST secure way to enable the customer to access the service?

  1. The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN) when requesting access to perform the required tasks.
  2. The company should provide the customer with their AWS account access keys to log in and perform the required tasks.
  3. The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN), including the external ID in the IAM role’s trust policy, when requesting access to perform the required tasks.
  4. The company should create an IAM user and assign the required permissions to the IAM user. The company should then provide the credentials to the customer to log in and perform the required tasks.
A
  1. The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN) when requesting access to perform the required tasks.
  2. The company should provide the customer with their AWS account access keys to log in and perform the required tasks.
  3. The company should create an IAM role and assign the required permissions to the IAM role. The customer should then use the IAM role’s Amazon Resource Name (ARN), including the external ID in the IAM role’s trust policy, when requesting access to perform the required tasks.
  4. The company should create an IAM user and assign the required permissions to the IAM user. The company should then provide the credentials to the customer to log in and perform the required tasks.
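To make the cross-account role mechanics concrete, here is a hedged boto3 sketch: the service owner creates a role whose trust policy requires an sts:ExternalId condition, and the customer assumes the role by its ARN while supplying that external ID. Account IDs, role names, and the external ID value are all hypothetical.

    import json
    import boto3

    # Service owner's account: create the role the customer account can assume.
    iam = boto3.client("iam")
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # customer account
                "Action": "sts:AssumeRole",
                "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
            }
        ],
    }
    iam.create_role(
        RoleName="customer-access-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Customer's account: assume the role by ARN, passing the agreed external ID.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/customer-access-role",
        RoleSessionName="customer-session",
        ExternalId="example-external-id",
    )["Credentials"]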
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

A company currently manages a fleet of Amazon EC2 instances running Windows and Linux in public and private subnets. The operations team currently connects over the Internet to manage the instances as there is no connection to the corporate network.

Security groups have been updated to allow the RDP and SSH protocols from any source IPv4 address. There have been reports of malicious attempts to access the resources, and the company wishes to implement the most secure solution for managing the instances.

Which strategy should a Solutions Architect recommend?

  1. Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.
  2. Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
  3. Deploy a server on the corporate network that can be used for managing EC2 instances. Update the security groups to allow connections over SSH and RDP from the on-premises management server only.
  4. Configure an IPSec Virtual Private Network (VPN) connecting the corporate network to the Amazon VPC. Update security groups to allow connections over SSH and RDP from the corporate network only.
A
  1. Deploy the AWS Systems Manager Agent on the EC2 instances. Access the EC2 instances using Session Manager restricting access to users with permission to manage the instances.
  2. Deploy a Linux bastion host with an Elastic IP address in the public subnet. Allow access to the bastion host from 0.0.0.0/0.
  3. Deploy a server on the corporate network that can be used for managing EC2 instances. Update the security groups to allow connections over SSH and RDP from the on-premises management server only.
  4. Configure an IPSec Virtual Private Network (VPN) connecting the corporate network to the Amazon VPC. Update security groups to allow connections over SSH and RDP from the corporate network only.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

A Solutions Architect is migrating an application to AWS Fargate. The task runs in a private subnet and does not have direct connectivity to the internet. When the Fargate task is launched, it fails with the following error:

“CannotPullContainerError: API error (500): Get https://111122223333.dkr.ecr.us-east-1.amazonaws.com/v2/: net/http: request canceled while waiting for connection”

What should the Solutions Architect do to correct the error?

  1. Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
  2. Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc.
  3. Specify ENABLED for Auto-assign public IP when launching the task.
  4. Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet.
A
  1. Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a public subnet to route requests to the internet.
  2. Enable dual-stack in the Amazon ECS account settings and configure the network for the task to use awsvpc.
  3. Specify ENABLED for Auto-assign public IP when launching the task.
  4. Specify DISABLED for Auto-assign public IP when launching the task and configure a NAT gateway in a private subnet to route requests to the internet.
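A minimal boto3 sketch of launching the Fargate task with the public IP disabled, assuming hypothetical cluster, task definition, subnet, and security group identifiers; the private subnets' route table is assumed to send internet-bound traffic to a NAT gateway in a public subnet so the task can still reach Amazon ECR.

    import boto3

    ecs = boto3.client("ecs")

    ecs.run_task(
        cluster="app-cluster",        # hypothetical cluster name
        taskDefinition="app-task:1",  # hypothetical task definition
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcde01", "subnet-0123456789abcde02"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",  # traffic egresses via the NAT gateway instead
            }
        },
    )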
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

A Solutions Architect has deployed an application on Amazon EC2 instances in a private subnet behind a Network Load Balancer (NLB) in a public subnet. Customers have attempted to connect from their office location and are unable to access the application. Those targets were registered by instance-id and are all healthy in the associated target group.

What step should the Solutions Architect take to resolve the issue and enable access for the customers?

  1. Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets.
  2. Check the security group for the NLB to ensure it allows egress to the private subnet.
  3. Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
  4. Check the security group for the NLB to ensure it allows ingress from the customer office.
A
  1. Check the security group for the EC2 instances to ensure it allows ingress from the NLB subnets.
  2. Check the security group for the NLB to ensure it allows egress to the private subnet.
  3. Check the security group for the EC2 instances to ensure it allows ingress from the customer office.
  4. Check the security group for the NLB to ensure it allows ingress from the customer office.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

A serverless application is using AWS Lambda and Amazon DynamoDB and developers have finalized an update to the Lambda function code. AWS CodeDeploy will be used to deploy new versions of the function. Updates to the Lambda function should be delivered to a subset of users before deploying the changes to all users. The update process should also be easy to abort and rollback if necessary.

Which CodeDeploy configuration should the solutions architect use?

  1. A linear deployment
  2. A canary deployment
  3. An all-at-once deployment
  4. A blue/green deployment
A
  1. A linear deployment
  2. A canary deployment
  3. An all-at-once deployment
  4. A blue/green deployment
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

A company is planning to migrate an application from an on-premises data center to the AWS Cloud. The application consists of stateful servers and a separate MySQL database. The application is expected to receive significant traffic and must scale seamlessly. The solution design on AWS includes an Amazon Aurora MySQL database, Amazon EC2 Auto Scaling and Elastic Load Balancing.

A Solutions Architect needs to finalize the design for the solution. Which of the following configurations will ensure a consistent user experience and seamless scalability for both the application and database tiers?

  1. Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin.
  2. Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
  3. Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
  4. Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
A
  1. Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to round_robin.
  2. Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
  3. Add Aurora Replicas and define a scaling policy. Use a Network Load Balancer and set the load balancing algorithm type to least_outstanding_requests.
  4. Add Aurora Replicas and define a scaling policy. Use an Application Load Balancer and set the load balancing algorithm type to round_robin.
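For reference, the load balancing algorithm mentioned in the options is an Application Load Balancer target group attribute; the boto3 sketch below switches a hypothetical target group from round robin to least outstanding requests.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_target_group_attributes(
        # Hypothetical target group ARN.
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-tg/0123456789abcdef",
        Attributes=[
            {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}
        ],
    )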
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

A company uses multiple AWS accounts. There are separate accounts for development, staging, and production environments. Some new requirements have been issued to control costs and improve the overall governance of the AWS accounts. The company must be able to calculate costs associated with each project and each environment. Commonly deployed IT services must be centrally managed and business units should be restricted to deploying pre-approved IT services only.

Which combination of actions should be taken to meet these requirements? (Select TWO.)

  1. Use AWS Savings Plans to configure budget thresholds and send alerts to management.
  2. Apply environment, cost center, and application name tags to all resources that accept tags.
  3. Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.
  4. Configure custom budgets and define thresholds using AWS Cost Explorer.
  5. Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
A
  1. Use AWS Savings Plans to configure budget thresholds and send alerts to management.
  2. Apply environment, cost center, and application name tags to all resources that accept tags.
  3. Use Amazon CloudWatch to create a billing alarm that notifies managers when a billing threshold is reached or exceeded.
  4. Configure custom budgets and define thresholds using AWS Cost Explorer.
  5. Create an AWS Service Catalog portfolio for each business unit and add products to the portfolios using AWS CloudFormation templates.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

A Solutions Architect has been asked to implement a disaster recovery (DR) site for an eCommerce platform that is growing at an increasing rate. The platform runs on Amazon EC2 web servers behind Elastic Load Balancers, images stored in Amazon S3 and Amazon DynamoDB tables that store product and customer data. The DR site should be located in a separate AWS Region.

Which combinations of actions should the Solutions Architect take to implement the DR site? (Select THREE.)

  1. Enable versioning on the Amazon S3 buckets and enable cross-Region snapshots.
  2. Enable DynamoDB global tables to achieve multi-Region table replication.
  3. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
  4. Enable Amazon S3 cross-Region replication on the buckets that contain images.
  5. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
  6. Enable DynamoDB Streams and use an event-source mapping to a Lambda function which populates a table in the second Region.
A
  1. Enable versioning on the Amazon S3 buckets and enable cross-Region snapshots.
  2. Enable DynamoDB global tables to achieve multi-Region table replication.
  3. Enable Amazon Route 53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue.
  4. Enable Amazon S3 cross-Region replication on the buckets that contain images.
  5. Enable multi-Region targets on the Elastic Load Balancer and target Amazon EC2 instances in both Regions.
  6. Enable DynamoDB Streams and use an event-source mapping to a Lambda function which populates a table in the second Region.
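A simplified boto3 sketch of S3 cross-Region replication as referenced in option 4, assuming hypothetical bucket names and an existing replication IAM role; versioning must be enabled on both the source and destination buckets.

    import boto3

    s3 = boto3.client("s3")

    SOURCE = "images-primary"            # hypothetical source bucket
    DEST_ARN = "arn:aws:s3:::images-dr"  # hypothetical destination bucket in the DR Region

    # Replication requires versioning (the destination bucket needs it enabled as well).
    s3.put_bucket_versioning(Bucket=SOURCE, VersioningConfiguration={"Status": "Enabled"})

    s3.put_bucket_replication(
        Bucket=SOURCE,
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-images",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},  # replicate every object
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": DEST_ARN},
                }
            ],
        },
    )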
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

A financial company processes transactions using on-premises application servers which save output to an Amazon DynamoDB table. The company’s data center is connected to AWS using an AWS Direct Connect (DX) connection. Company management has mandated that the solution should be available across multiple Regions. Consistent network performance must be maintained at all times.

What changes should the company make to meet these requirements?

  1. Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
  2. Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
  3. Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table.
  4. Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table.
A
  1. Create a DX connection to a second AWS Region. Use DynamoDB global tables to replicate data to the second Region. Modify the application to fail over to the second Region.
  2. Create a DX connection to a second AWS Region. Create an identical DynamoDB table in the second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
  3. Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS DMS to synchronize data to the copied table.
  4. Use an AWS managed VPN to connect to a second AWS Region. Create a copy of the DynamoDB table in the second Region. Enable DynamoDB streams in the primary Region and use AWS Lambda to synchronize data to the copied table.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

A Solutions Architect is helping to standardize a company’s method of deploying applications to AWS using AWS CodePipeline and AWS CloudFormation. A group of developers create applications using JavaScript and TypeScript and they are concerned about needing to learn new domain-specific languages. They are also reluctant to lose access to features of the existing languages such as looping.

How can the Solutions Architect address the developers concerns and quickly bring the applications up to deployment standards?

  1. Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developer’s code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
  2. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes. Orchestrate the CodeBuild job using CodePipeline and use CloudFormation for deployment.
  3. Use AWS SAM and specify a serverless transform. Add the JavaScript and TypeScript code as metadata to the template file. Use AWS CodeBuild to build the code and output a CloudFormation template.
  4. Create CloudFormation templates and re-use parts of the JavaScript and TypeScript code as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
A
  1. Define the AWS resources using JavaScript or TypeScript. Use the AWS Cloud Development Kit (AWS CDK) to create CloudFormation templates from the developer’s code and use the AWS CDK to create CloudFormation stacks. Incorporate the AWS CDK as a CodeBuild job in CodePipeline.
  2. Use a third-party resource provisioning engine inside AWS CodeBuild to standardize the deployment processes. Orchestrate the CodeBuild job using CodePipeline and use CloudFormation for deployment.
  3. Use AWS SAM and specify a serverless transform. Add the JavaScript and TypeScript code as metadata to the template file. Use AWS CodeBuild to build the code and output a CloudFormation template.
  4. Create CloudFormation templates and re-use parts of the JavaScript and TypeScript code as instance user data. Use the AWS Cloud Development Kit (AWS CDK) to deploy the application using these templates. Incorporate the AWS CDK into CodePipeline and deploy the application to AWS using these templates.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

A company runs its IT services from an on-premises data center and is moving to AWS. The company wants to move their development and deployment processes to use managed services where possible. They would like to leverage their existing Chef tools and experience. The application must be deployed to a staging environment and then to production. The ability to roll back quickly must be available in case issues occur following a production deployment.

Which AWS service and deployment strategy should a Solutions Architect use to meet the company’s requirements?

  1. Use AWS OpsWorks and deploy the application using a canary deployment strategy.
  2. Use AWS CodeDeploy and deploy the application using an in-place update deployment strategy.
  3. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
  4. Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.
A
  1. Use AWS OpsWorks and deploy the application using a canary deployment strategy.
  2. Use AWS CodeDeploy and deploy the application using an in-place update deployment strategy.
  3. Use AWS OpsWorks and deploy the application using a blue/green deployment strategy.
  4. Use AWS Elastic Beanstalk and deploy the application using a rolling update deployment strategy.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

A company has experienced issues updating an AWS Lambda function that is deployed using an AWS CloudFormation stack. The issues have resulted in outages that affected large numbers of customers. A Solutions Architect must adjust the deployment process to support a canary release strategy. Invocation traffic should be routed based on specified weights.

Which solution will meet these requirements?

  1. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
  2. Use AWS CodeDeploy to deploy using the CodeDeployDefault.HalfAtATime deployment configuration to distribute the load.
  3. Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
  4. Create a version for every new update to the Lambda function code. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
A
  1. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
  2. Use AWS CodeDeploy to deploy using the CodeDeployDefault.HalfAtATime deployment configuration to distribute the load.
  3. Create an alias for new versions of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
  4. Create a version for every new update to the Lambda function code. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
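To illustrate the weighted alias routing described in the question, here is a minimal boto3 sketch: it publishes the new code as a version, shifts a small share of traffic on a hypothetical existing "live" alias to that version, and then promotes it once the canary looks healthy. The function and alias names are placeholders.

    import boto3

    lam = boto3.client("lambda")

    FUNCTION = "report-generator"  # hypothetical function name

    # Publish the updated code as an immutable version.
    new_version = lam.publish_version(FunctionName=FUNCTION)["Version"]

    # Canary: route 10% of invocations on the existing "live" alias to the new version.
    lam.update_alias(
        FunctionName=FUNCTION,
        Name="live",
        RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
    )

    # Promote (or roll back by simply removing the weights): point the alias at the new version.
    lam.update_alias(
        FunctionName=FUNCTION,
        Name="live",
        FunctionVersion=new_version,
        RoutingConfig={"AdditionalVersionWeights": {}},
    )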
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

A fintech company runs an on-premises environment that ingests data feeds from financial services companies, transforms the data, and then sends it to an on-premises Apache Kafka cluster. The company plans to use AWS services to build a scalable, near real-time solution that offers consistent network performance to provide the data feeds to a web application. Which steps should a Solutions Architect take to build the solution? (Select THREE.)

  1. Establish a Site-to-Site VPN from the on-premises data center to AWS.
  2. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
  3. Establish an AWS Direct Connect connection from the on premises data center to AWS.
  4. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
  5. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream.
  6. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
A
  1. Establish a Site-to-Site VPN from the on-premises data center to AWS.
  2. Create a GraphQL API in AWS AppSync, create an AWS Lambda function to process the Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
  3. Establish an AWS Direct Connect connection from the on premises data center to AWS.
  4. Create a WebSocket API in Amazon API Gateway, create an AWS Lambda function to process an Amazon Kinesis data stream, and use the @connections command to send callback messages to connected clients.
  5. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Consumer Library to put the data into an Amazon Kinesis data stream.
  6. Create an Amazon EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library to put the data into a Kinesis data stream.
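As a rough sketch of the WebSocket callback pattern mentioned in the options, the Lambda handler below consumes records from a Kinesis data stream and pushes them to a connected client through the API Gateway @connections management endpoint; the endpoint URL and connection ID are hypothetical, and a real handler would look up connection IDs from a store such as DynamoDB.

    import base64
    import boto3

    # Hypothetical WebSocket API management (@connections) endpoint.
    apigw = boto3.client(
        "apigatewaymanagementapi",
        endpoint_url="https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod",
    )

    def handler(event, context):
        """Triggered by the Kinesis data stream; pushes each record to a connected client."""
        for record in event.get("Records", []):
            payload = base64.b64decode(record["kinesis"]["data"])
            apigw.post_to_connection(
                ConnectionId="AbCdEfGhIjKlMnOp=",  # hypothetical connection ID
                Data=payload,
            )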
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

A new application will ingest millions of records per minute from user devices all over the world. Each record is less than 4 KB in size and must be stored durably and accessed with low latency. The data must be stored for 90 days after which it can be deleted. It has been estimated that storage requirements for a year will be 15-20TB.

Which storage strategy is the MOST cost-effective and meets the design requirements?

  1. Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days.
  2. Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days.
  3. Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
  4. Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days.
A
  1. Store each incoming record as a single .csv file in an Amazon S3 bucket. Configure a lifecycle policy to delete data older than 90 days.
  2. Store each incoming record in a single table in an Amazon RDS MySQL database. Run a nightly cron job that executes a query to delete any records older than 90 days.
  3. Store each incoming record in an Amazon DynamoDB table. Configure the DynamoDB Time to Live (TTL) feature to delete records older than 90 days.
  4. Store the records in an Amazon Kinesis Data Stream. Configure the Time to Live (TTL) feature to delete records older than 90 days.
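For context, a minimal boto3 sketch of the DynamoDB Time to Live mechanism referenced in option 3: TTL is enabled on a hypothetical table and each item carries an expiry timestamp 90 days in the future, after which DynamoDB deletes it automatically at no extra cost.

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    TABLE = "device-records"  # hypothetical table name

    # Tell DynamoDB which attribute holds the expiry time (epoch seconds).
    dynamodb.update_time_to_live(
        TableName=TABLE,
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # Each record expires 90 days after it is written.
    now = int(time.time())
    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "device_id": {"S": "device-12345"},
            "recorded_at": {"N": str(now)},
            "payload": {"S": "sensor reading under 4 KB"},
            "expires_at": {"N": str(now + 90 * 24 * 3600)},
        },
    )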
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

A company has deployed a high performance computing (HPC) cluster in an Amazon VPC. The cluster runs a tightly coupled workload that generates a large number of shared files that are stored in an Amazon EFS file system. The cluster has grown to over 800 instances and the performance has degraded to a problematic level.

A Solutions Architect needs to make some changes to the design to improve the overall performance. Which of the following changes should the Solutions Architect make? (Select THREE.)

  1. Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type.
  2. Attach multiple elastic network interfaces (ENI) to reduce latency.
  3. Ensure the cluster is launched across multiple Availability Zones.
  4. Replace Amazon EFS with Amazon FSx for Lustre.
  5. Ensure the HPC cluster is launched within a single Availability Zone.
  6. Replace Amazon EFS with multiple Amazon FSx for Windows File Server file systems.
A
  1. Enable an Elastic Fabric Adapter (EFA) on a supported EC2 instance type.
  2. Attach multiple elastic network interfaces (ENI) to reduce latency.
  3. Ensure the cluster is launched across multiple Availability Zones.
  4. Replace Amazon EFS with Amazon FSx for Lustre.
  5. Ensure the HPC cluster is launched within a single Availability Zone.
  6. Replace Amazon EFS with multiple Amazon FSx for Windows File Server file systems.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

A company offers a photo sharing application to its users through a social networking app. To ensure images can be displayed with consistency, a single Amazon EC2 instance running JavaScript code processes the photos and stores the processed images in an Amazon S3 bucket. A front-end application runs from a static website in another S3 bucket and loads the processed images for display in the app.

The company has asked a Solutions Architect to make some recommendations for a cost-effective solution that offers massive scalability for a global user base.

Which combination of changes should the Solutions Architect recommend? (Select TWO.)

  1. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
  2. Place the image processing EC2 instance into an Auto Scaling group.
  3. Create an Amazon CloudFront distribution in front of the processed images bucket.
  4. Replace the EC2 instance with AWS Lambda to run the image processing tasks.
  5. Replace the EC2 instance with Amazon Rekognition for image processing.
A
  1. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
  2. Place the image processing EC2 instance into an Auto Scaling group.
  3. Create an Amazon CloudFront distribution in front of the processed images bucket.
  4. Replace the EC2 instance with AWS Lambda to run the image processing tasks.
  5. Replace the EC2 instance with Amazon Rekognition for image processing.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

A company requires federated access to AWS for users of a mobile application. The security team has mandated that the application must use a custom-built solution for authenticating users and use IAM roles for authorization.

Which of the following actions would enable authentication and authorization and satisfy the requirements? (Select TWO.)

  1. Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
  2. Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization.
  3. Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
  4. Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT.
  5. Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
A
  1. Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
  2. Use a custom-built SAML-compatible solution for authentication and use AWS SSO for authorization.
  3. Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
  4. Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Use a token-based Lambda authorizer that uses JWT.
  5. Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

An eCommerce company runs a successful website with a growing base of customers. The website is becoming popular internationally and demand is increasing quickly. The website is currently hosted in an on-premises data center with web servers and a MySQL database. The company plans to migrate the workloads to AWS. A Solutions Architect has been asked to create a solution that:

- Improves security

- Improves reliability

- Improves availability

- Reduces latency

- Reduces maintenance

Which combination of steps should the Solutions Architect take to meet these requirements? (Select THREE.)

  1. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving web pages. Use AWS WAF to improve website security.
  2. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving web pages. Use AWS WAF to improve website security.
  3. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
  4. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.
  5. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
  6. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
A
  1. Host static website content in Amazon S3. Use S3 Transfer Acceleration to reduce latency while serving web pages. Use AWS WAF to improve website security.
  2. Host static website content in Amazon S3. Use Amazon CloudFront to reduce latency while serving web pages. Use AWS WAF to improve website security.
  3. Launch Amazon EC2 instances in two Availability Zones to host a highly available MySQL database cluster.
  4. Migrate the database to a single-AZ Amazon RDS for MySQL DB instance.
  5. Create an Auto Scaling group of Amazon EC2 instances in two Availability Zones and attach an Application Load Balancer.
  6. Migrate the database to an Amazon Aurora MySQL DB cluster configured for Multi-AZ.
62
Q

An advertising company hosts static content in an Amazon S3 bucket that is served by Amazon CloudFront. The static content is generated programmatically from a Development account, and the S3 bucket and CloudFront are in a Production account. The build pipeline uploads the files to Amazon S3 using an IAM role in the Development account. The S3 bucket has a bucket policy that only allows CloudFront to read objects using an origin access identity (OAI). During testing, all attempts to upload objects to the S3 bucket using this role are denied.

How can a Solutions Architect resolve this issue and allow the objects to be uploaded to Amazon S3?

  1. Modify the S3 upload process in the Development account to add the bucket-owner-full-control ACL to the objects at upload.
  2. Create a new IAM role in the Development account with read access to the S3 bucket. Configure S3 to use this new role as its OAI. Modify the build pipeline to assume this role when uploading files from the Development account.
  3. Create a new cross-account IAM role in the Production account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the Production account.
  4. Modify the S3 upload process in the Development account to set the object owner to the Production account.
A
  1. Modify the S3 upload process in the Development account to add the bucket-owner-full-control ACL to the objects at upload.
  2. Create a new IAM role in the Development account with read access to the S3 bucket. Configure S3 to use this new role as its OAI. Modify the build pipeline to assume this role when uploading files from the Development account.
  3. Create a new cross-account IAM role in the Production account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the Production account.
  4. Modify the S3 upload process in the Development account to set the object owner to the Production account.
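Note: a minimal boto3 sketch of the canned-ACL upload described in option 1, run with the Development account role; the bucket name, key, and body are hypothetical:

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the Development account IAM role

# The canned ACL grants the bucket owner (the Production account) full control
# of the uploaded object, which cross-account upload policies commonly require
# and which lets the CloudFront OAI read the file afterwards.
s3.put_object(
    Bucket="prod-static-content",     # hypothetical Production bucket
    Key="site/index.html",            # hypothetical object key
    Body=b"<html>placeholder build artifact</html>",
    ACL="bucket-owner-full-control",
)
```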
63
Q

A company runs an application in an on-premises data center that uses an IBM DB2 database. The web application calls an API that runs stored procedures on the database to retrieve read-only data. The dataset is constantly updated. Users have reported significant latency when attempting to retrieve data. The company is concerned about DB2 CPU licensing costs and the performance of the database.

Which approach should a Solutions Architect take to migrate to AWS and resolve these concerns?

  1. Use local storage to cache query output. Use S3 copy commands to sync the dataset to Amazon S3. Refactor the API to use Amazon EFS. Implement Amazon API Gateway and enable API caching.
  2. Rehost the DB2 database to an Amazon EC2 instance. Migrate all the data. Enable caching using an instance store. Refactor the API to use the Amazon EC2 DB2 database. Implement Amazon API Gateway and enable API caching.
  3. Export data on a daily basis and upload to Amazon S3. Refactor the API to use the S3 data. Implement Amazon API Gateway and enable API caching.
  4. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
A
  1. Use local storage to cache query output. Use S3 copy commands to sync the dataset to Amazon S3. Refactor the API to use Amazon EFS. Implement Amazon API Gateway and enable API caching.
  2. Rehost the DB2 database to an Amazon EC2 instance. Migrate all the data. Enable caching using an instance store. Refactor the API to use the Amazon EC2 DB2 database. Implement Amazon API Gateway and enable API caching.
  3. Export data on a daily basis and upload to Amazon S3. Refactor the API to use the S3 data. Implement Amazon API Gateway and enable API caching.
  4. Use AWS DMS to migrate data to Amazon DynamoDB using a continuous replication task. Refactor the API to use the DynamoDB data. Implement the refactored API in Amazon API Gateway and enable API caching.
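Note: a minimal boto3 sketch of the DMS piece of option 4 (full load plus ongoing change data capture into DynamoDB); it assumes the source and target endpoints and the replication instance already exist, and the ARNs are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus CDC keeps the DynamoDB copy in sync with the constantly
# updated source database.
dms.create_replication_task(
    ReplicationTaskIdentifier="db2-to-dynamodb",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",  # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```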
64
Q

A Solutions Architect is working on refactoring a monolithic application into a modern application design that will be deployed in the AWS Cloud. A CI/CD pipeline should be used that supports the modern design and allows for multiple releases every hour. The pipeline should also ensure that changes can be quickly rolled back if required.

Which design will meet these requirements?

  1. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and its configuration. Deploy the application by replacing Amazon EC2 instances.
  2. Use AWS Elastic Beanstalk and create a secondary environment configured as a deployment target for the CI/CD pipeline. To deploy, swap the staging and production environment URLs.
  3. Use AWS CloudFormation StackSets to create production and staging stacks. Update the staging stack and use Amazon Route 53 weighted routing to point to the StackSet endpoint address.
  4. Package updates into an Amazon EC2 AMI and update the Auto Scaling group to use the new AMI. Terminate existing instances in a staged approach to cause launches using the new AMI.
A
  1. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and its configuration. Deploy the application by replacing Amazon EC2 instances.
  2. Use AWS Elastic Beanstalk and create a secondary environment configured as a deployment target for the CI/CD pipeline. To deploy, swap the staging and production environment URLs.
  3. Use AWS CloudFormation StackSets to create production and staging stacks. Update the staging stack and use Amazon Route 53 weighted routing to point to the StackSet endpoint address.
  4. Package updates into an Amazon EC2 AMI and update the Auto Scaling group to use the new AMI. Terminate existing instances in a staged approach to cause launches using the new AMI.
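Note: a minimal boto3 sketch of the environment URL swap in option 2; the environment names are hypothetical:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Swapping CNAMEs points the production URL at the freshly deployed (staging)
# environment; swapping again is an immediate rollback.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-staging",        # hypothetical environment names
    DestinationEnvironmentName="myapp-production",
)
```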
65
Q

A company is planning to migrate on-premises resources to AWS. The resources include over 150 virtual machines (VMs) that use around 50 TB of storage. Most VMs can be taken offline outside of business hours, however, a few are mission critical and downtime must be minimized. The company’s internet bandwidth is fully utilized and cannot currently be increased. A Solutions Architect must design a migration strategy that can be completed within the next 3 months.

Which method would fulfill these requirements?

  1. Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to migrate the VMs into Amazon EC2.
  2. Use an AWS Storage Gateway file gateway. Mount the file gateway and synchronize the VM file systems to cloud storage. Use the VM Import/Export to import from cloud storage to Amazon EC2.
  3. Export the VMs locally, beginning with the most mission-critical servers first Use Amazon S3 Transfer Acceleration to quickly upload each VM to Amazon 53 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2.
  4. Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM Import/Export to import the VMs into Amazon EC2.
A
  1. Set up a 1 Gbps AWS Direct Connect connection. Then, provision a private virtual interface, and use AWS Server Migration Service (SMS) to migrate the VMs into Amazon EC2.
  2. Use an AWS Storage Gateway file gateway. Mount the file gateway and synchronize the VM file systems to cloud storage. Use the VM Import/Export to import from cloud storage to Amazon EC2.
  3. Export the VMs locally, beginning with the most mission-critical servers first Use Amazon S3 Transfer Acceleration to quickly upload each VM to Amazon 53 after they are exported. Use VM Import/Export to import the VMs into Amazon EC2.
  4. Migrate mission-critical VMs with AWS SMS. Export the other VMs locally and transfer them to Amazon S3 using AWS Snowball. Use VM Import/Export to import the VMs into Amazon EC2.
66
Q

A security team uses a ticketing system to capture suspicious events that require investigation. The security team has created a system where events are captured using CloudTrail Logs and saved to Amazon S3. A scheduled AWS Lambda function then uses Amazon Athena to query the logs for any API actions performed by the root user. The results are then submitted to the ticketing system by the Lambda function.

The ticketing system has a monthly 4-hour maintenance window when the system is offline and cannot log new tickets. An audit revealed that several tickets were not created because the ticketing system was unavailable.

Which combination of steps should a solutions architect take to ensure that the incidents are reported to the ticketing system even during planned maintenance? (Select TWO.)

  1. Create an Amazon SNS topic to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to invoke the Lambda function.
  2. Update the Lambda function to be triggered by messages published to an Amazon SNS topic. Update the existing application code to retry every 5 minutes if the ticketing system's API endpoint is unavailable.
  3. Update the Lambda function to poll the Amazon SQS queue for messages and to return successfully when the ticketing system API has processed the request.
  4. Create an Amazon SQS queue to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to publish to the SQS queue.
  5. Create an Amazon EventBridge rule with a pattern that looks for AWS CloudTrail events where the API calls involve the root user account. Configure an Amazon SQS queue as a target for the rule.
A
  1. Create an Amazon SNS topic to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to invoke the Lambda function.
  2. Update the Lambda function to be triggered by messages published to an Amazon SNS topic. Update the existing application code to retry every 5 minutes if the ticketing system's API endpoint is unavailable.
  3. Update the Lambda function to poll the Amazon SQS queue for messages and to return successfully when the ticketing system API has processed the request.
  4. Create an Amazon SQS queue to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to publish to the SQS queue.
  5. Create an Amazon EventBridge rule with a pattern that looks for AWS CloudTrail events where the API calls involve the root user account. Configure an Amazon SQS queue as a target for the rule.
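Note: a minimal boto3 sketch of the EventBridge-rule-to-SQS pattern in option 5; the rule name and queue ARN are placeholders:

```python
import json
import boto3

events = boto3.client("events")

# Match management API calls recorded by CloudTrail where the caller is root.
root_api_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

events.put_rule(
    Name="root-user-api-calls",
    EventPattern=json.dumps(root_api_pattern),
    State="ENABLED",
)

# Buffer matched events in SQS so they survive the ticketing system's
# maintenance window and can be replayed by the Lambda consumer later.
events.put_targets(
    Rule="root-user-api-calls",
    Targets=[{
        "Id": "ticket-queue",
        "Arn": "arn:aws:sqs:us-east-1:111122223333:security-tickets",  # placeholder
    }],
)
```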
67
Q

A university is running computational algorithms that require large amounts of compute power. The algorithms are being run using a high-performance compute cluster on Amazon EC2 Spot instances. Each time an instance launches a DNS record must be created in an Amazon Route 53 private hosted zone. When the instance is terminated the DNS record must be deleted.

The current configuration uses an Amazon CloudWatch Events rule that triggers an AWS Lambda function to create the DNS record. When scaling the solution to thousands of instances the university has experienced “HTTP 400 error (Bad request)” errors in the Lambda logs. The response header also includes a status code element with a value of “Throttling” and a status message element with a value of “Rate exceeded”.

Which combination of steps should the Solutions Architect take to resolve these issues? (Select THREE.)

  1. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
  2. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  3. Update the CloudWatch Events rule to trigger on Amazon EC2 “Instance Launch Successful” and “Instance Terminate Successful” events for the Auto Scaling group used by the cluster.
  4. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  5. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this stream as a target. Remove the Lambda target from the CloudWatch Events rule.
  6. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.
A
  1. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
  2. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  3. Update the CloudWatch Events rule to trigger on Amazon EC2 “Instance Launch Successful” and “Instance Terminate Successful” events for the Auto Scaling group used by the cluster.
  4. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  5. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this stream as a target. Remove the Lambda target from the CloudWatch Events rule.
  6. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.
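Note: a minimal sketch of the batching Lambda described in option 1, assuming each SQS message body carries an action, hostname, and IP (that record schema, the queue URL, and the hosted zone ID are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")
route53 = boto3.client("route53")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/dns-events"  # placeholder
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"                                      # placeholder

def handler(event, context):
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
    messages = resp.get("Messages", [])
    if not messages:
        return

    # One ChangeBatch per invocation instead of one Route 53 call per instance
    # keeps the request rate under the Route 53 API throttling limits.
    changes = []
    for msg in messages:
        body = json.loads(msg["Body"])  # assumed fields: action, hostname, ip
        changes.append({
            "Action": body["action"],   # e.g. "CREATE" or "DELETE"
            "ResourceRecordSet": {
                "Name": body["hostname"],
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": body["ip"]}],
            },
        })

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": changes},
    )

    # Only delete messages after the Route 53 call succeeds.
    sqs.delete_message_batch(
        QueueUrl=QUEUE_URL,
        Entries=[{"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]} for m in messages],
    )
```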
68
Q

A company runs Docker containers on Amazon ECS. A containerized application uses a custom tool that must be manually updated each time the container code is updated. The updated container image can then be used for new tasks. A Solutions Architect has been tasked with automating this process to eliminate the manual work and ensure a new container image is generated each time the tool code is updated.

Which combination of actions should the Solutions Architect take to meet these requirements? (Select THREE.)

  1. Create an AWS CodeDeploy application that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
  2. Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy application update.
  3. Create an AWS CodeBuild project that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
  4. Create an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the image. Configure the event to trigger an update to the image in Amazon ECR. Push the updated container image to Amazon ECR.
  5. Create an Amazon ECR repository for the image. Create an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR.
  6. Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build.
A
  1. Create an AWS CodeDeploy application that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
  2. Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeDeploy application update.
  3. Create an AWS CodeBuild project that pulls the latest container image from Amazon ECR, updates the container with code from the source AWS CodeCommit repository, and pushes the updated container image to Amazon ECR.
  4. Create an Amazon EventBridge rule that triggers on commits to the AWS CodeCommit repository for the image. Configure the event to trigger an update to the image in Amazon ECR. Push the updated container image to Amazon ECR.
  5. Create an Amazon ECR repository for the image. Create an AWS CodeCommit repository containing code for the tool being deployed to the container image in Amazon ECR.
  6. Create an AWS CodePipeline pipeline that sources the tool code from the AWS CodeCommit repository and initiates an AWS CodeBuild build.
69
Q

A company is in the process of migrating applications to AWS using multiple accounts in AWS Organizations. The management account is at the root of the Organization's hierarchy. Business units each have different accounts and requirements for the services they need to use. The security team needs to implement controls across all accounts to prohibit many AWS services. In some cases a business unit may have a valid exception to these controls and this must be achievable.

Which solution will meet these requirements with minimal operational overhead?

  1. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level and each OU. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions, modify the SCP attached to that OU, and add the required AWS services to the allow list.
  2. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level. For any specific exceptions for an OU, create a new SCP for that OU and add the required AWS services to the allow list.
  3. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP attached to the root level and all OUs. For accounts that require specific exceptions, create an OU under root and attach an SCP that denies fewer services.
  4. Use an SCP in Organizations to implement an allow list of AWS services. Apply this SCP at the root level. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions for an OU, modify the SCP attached to that OU and add the required AWS services to the allow list.
A
  1. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level and each OU. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions, modify the SCP attached to that OU, and add the required AWS services to the allow list.
  2. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at the root level. For any specific exceptions for an OU, create a new SCP for that OU and add the required AWS services to the allow list.
  3. Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP attached to the root level and all OUs. For accounts that require specific exceptions, create an OU under root and attach an SCP that denies fewer services.
  4. Use an SCP in Organizations to implement an allow list of AWS services. Apply this SCP at the root level. Remove the default AWS managed SCP from the root level and all OU levels. For any specific exceptions for an OU, modify the SCP attached to that OU and add the required AWS services to the allow list.
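Note: a minimal boto3 sketch of creating and attaching a deny-list SCP with Organizations; the prohibited service list and OU ID are placeholders, and an OU with an approved exception would simply receive a variant policy with fewer services denied:

```python
import json
import boto3

org = boto3.client("organizations")

# A deny-list SCP: everything stays allowed except the listed services.
deny_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyProhibitedServices",
        "Effect": "Deny",
        "Action": ["sagemaker:*", "redshift:*"],   # placeholder service list
        "Resource": "*",
    }]
}

policy = org.create_policy(
    Name="deny-prohibited-services",
    Description="Block services the security team has prohibited",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_scp),
)

# Attach the SCP to an OU.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",   # placeholder OU ID
)
```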
70
Q

A web application allows users to upload video clips of celebrities. The website consists of Amazon EC2 instances and static content. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for facial analysis. The image processing jobs are picked up from an Amazon SQS queue by an Auto Scaling layer of EC2 instances.

A Solutions Architect has been asked to re-architect the application to reduce operational overhead using AWS managed services where possible. Which of the following recommendations should the Solutions Architect make?

  1. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the queue with an AWS Lambda function that calls the Amazon Rekognition API to perform facial analysis.
  2. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with Amazon ECS tasks using the Fargate launch type for running the custom recognition software.
  3. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with Amazon ECS tasks using the EC2 launch type for running the custom recognition software.
  4. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with an AWS Lambda function that calls the Amazon Rekognition API to perform facial analysis.
A
  1. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the queue with an AWS Lambda function that calls the Amazon Rekognition API to perform facial analysis.
  2. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with Amazon ECS tasks using the Fargate launch type for running the custom recognition software.
  3. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with Amazon ECS tasks using the EC2 launch type for running the custom recognition software.
  4. Use an Amazon S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notification to publish events to the SQS queue. Process the queue with an AWS Lambda function that calls the Amazon Rekognition API to perform facial analysis.
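Note: a minimal sketch of the Lambda consumer in option 4; it assumes the SQS message body carries an S3 event notification and uses Rekognition's asynchronous video face-detection API (the SNS topic and role ARNs are placeholders):

```python
import json
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    # Lambda is triggered by SQS; each record body carries an S3 event notification.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            # Start asynchronous facial analysis of the uploaded video clip.
            rekognition.start_face_detection(
                Video={"S3Object": {"Bucket": bucket, "Name": key}},
                NotificationChannel={
                    "SNSTopicArn": "arn:aws:sns:us-east-1:111122223333:rekognition-results",  # placeholder
                    "RoleArn": "arn:aws:iam::111122223333:role/rekognition-publish",          # placeholder
                },
            )
```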
71
Q

An application uses Amazon EC2 instances in an Auto Scaling group and an Amazon RDS MySQL database. The web application has occasional spikes of traffic during the day. The operations team have determined the most appropriate instance sizes for both the EC2 instances and the DB instance. All instances use On-Demand pricing.

Which of the following steps can be taken to gain the most cost savings without impacting the reliability of the application?

  1. Use Spot instance pricing for the RDS database and the EC2 instances in the Auto Scaling group.
  2. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
  3. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
  4. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.
A
  1. Use Spot instance pricing for the RDS database and the EC2 instances in the Auto Scaling group.
  2. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
  3. Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running.
  4. Reserve capacity for all EC2 instances and leverage Spot Instance pricing for the RDS database.
72
Q

A company plans to migrate a content management system (CMS) to AWS. The CMS will use Amazon CloudFront to ensure optimum performance for users from around the world. The CMS includes both static and dynamic content and has been placed behind an Application Load Balancer (ALB) which is the default origin for the CloudFront distribution. The static assets are served from an Amazon S3 bucket.

When users attempt to access the static assets, HTTP status code 404 errors are generated. Which actions should a Solutions Architect take to resolve the issue? (Select TWO.)

  1. Add a CachePolicyConfig to allow HTTP headers to be included in requests to the origin.
  2. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets.
  3. Add another origin to the CloudFront distribution for the static assets.
  4. Add a rule to the distribution to forward GET method requests to Amazon S3.
  5. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list.
A
  1. Add a CachePolicyConfig to allow HTTP headers to be included in requests to the origin.
  2. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets.
  3. Add another origin to the CloudFront distribution for the static assets.
  4. Add a rule to the distribution to forward GET method requests to Amazon S3.
  5. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list.
73
Q

A financial services company runs an application that allows traders to perform online simulations of market conditions. The backend runs on a fleet of virtual machines in an on-premises data center and the business logic is exposed using a REST API with multiple functions. The trader’s session data is stored in a NAS file system in the on-premises data center. During busy periods of the day the server capacity is insufficient and latency issues have occurred when fetching the session data for a simulation.

A Solutions Architect must create a design for moving the application to AWS. The design must use the same API model but should be capable of scaling for the variable load and ensure access to session data is provided with low-latency.

Which solution meets these requirements?

  1. Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store trader session data in Amazon Aurora Serverless.
  2. Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
  3. Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store trader session data in Amazon Aurora Serverless.
  4. Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
A
  1. Implement the REST API using a Network Load Balancer (NLB). Run the business logic on an Amazon EC2 instance behind the NLB. Store trader session data in Amazon Aurora Serverless.
  2. Implement the REST API using an Application Load Balancer (ALB). Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
  3. Implement the REST API using AWS AppSync. Run the business logic in AWS Lambda. Store trader session data in Amazon Aurora Serverless.
  4. Implement the REST API using Amazon API Gateway. Run the business logic in AWS Lambda. Store trader session data in Amazon DynamoDB with on-demand capacity.
74
Q

An agricultural company is rolling out thousands of devices that will send environmental data to a data platform. The platform will process and analyze the data and provide information back to researchers. The devices will send 8 KB of data every second and the solution must support near real-time analytics, provide durability for the data, and deliver results to a data warehouse.

Which strategy should a solutions architect use to meet these requirements?

  1. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.
  2. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.
  3. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.
  4. Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
A
  1. Use Amazon Kinesis Data Firehose to collect the inbound sensor data, analyze the data with Kinesis clients, and save the results to an Amazon RDS instance.
  2. Use Amazon S3 to collect the inbound device data, analyze the data from Amazon SQS with Kinesis, and save the results to an Amazon Redshift cluster.
  3. Use an Amazon API Gateway to put requests into an Amazon SQS queue, analyze the data with an AWS Lambda function, and save the results to an Amazon Redshift cluster using Amazon EMR.
  4. Use Amazon Kinesis Data Streams to collect the inbound data, analyze the data with Kinesis clients, and save the results to an Amazon Redshift cluster using Amazon EMR.
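Note: a minimal boto3 sketch of a device-side producer for the Kinesis Data Streams approach in option 4; the stream name and payload fields are hypothetical:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

reading = {"device_id": "sensor-042", "soil_moisture": 31.7, "temp_c": 22.4}  # placeholder payload

# The 8 KB payloads are well under the 1 MB per-record limit; the partition key
# spreads devices across shards for parallel, near real-time consumption.
kinesis.put_record(
    StreamName="environment-data",            # placeholder stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["device_id"],
)
```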
75
Q

A company has deployed a web application in an Amazon VPC. A CloudFront distribution is used for both scalability and performance. The operations team has noticed that the cache hit ratio has been dropping over time leading to a gradual degradation of the performance for the web application.

The cache metrics report indicates that query strings on some URLs are inconsistently ordered and use a mixture of uppercase and lowercase letters.

Which actions can a Solutions Architect take to increase the cache hit ratio and resolve the performance issues on the web application?

  1. Use AWS WAF to create a WebACL and filter based on the case of the query strings in the URL. Configure WAF to trigger an AWS Lambda function that rewrites the URIs to lowercase.
  2. Create a path pattern in the CloudFront distribution that forwards all requests to the origin with case-sensitivity turned off.
  3. Create a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
  4. Update the CloudFront distribution to disable caching based on query string parameters.
A
  1. Use AWS WAF to create a WebACL and filter based on the case of the query strings in the URL. Configure WAF to trigger an AWS Lambda function that rewrites the URIs to lowercase.
  2. Create a path pattern in the CloudFront distribution that forwards all requests to the origin with case-sensitivity turned off.
  3. Create a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
  4. Update the CloudFront distribution to disable caching based on query string parameters.
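Note: a minimal sketch of the Lambda@Edge viewer-request function in option 3, which sorts and lowercases query string parameters so equivalent URLs map to one cache entry:

```python
from urllib.parse import parse_qsl, urlencode

def handler(event, context):
    # CloudFront viewer-request event: normalize the query string before
    # CloudFront computes the cache key.
    request = event["Records"][0]["cf"]["request"]
    params = parse_qsl(request["querystring"], keep_blank_values=True)

    # Lowercase names and values, then sort by parameter name.
    normalized = sorted((k.lower(), v.lower()) for k, v in params)
    request["querystring"] = urlencode(normalized)
    return request
```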
76
Q

A company plans to migrate physical servers and VMs from an on-premises data center to the AWS Cloud using AWS Migration Hub. The VMs run on a combination of VMware and Hyper-V hypervisors. A Solutions Architect must determine the best services for data collection and discovery. The company has also requested the ability to generate reports from the collected data.

Which solution meets these requirements?

  1. Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by using Amazon QuickSight.
  2. Use the AWS Application Discovery Service agent for data collection on physical servers and all VMs. Store the collected data in Amazon Elastic File System (Amazon EFS). Query the data and generate reports with Amazon Athena.
  3. Use the AWS Systems Manager agent for data collection on physical servers. Use the AWS Agentless Discovery Connector for data collection on all VMs. Store, query, and generate reports from the collected data by using Amazon Redshift.
  4. Use the AWS Agentless Discovery Connector for data collection on physical servers and all VMs. Store the collected data in Amazon S3. Query the data with S3 Select. Generate reports by using Kibana hosted on Amazon EC2.
A
  1. Use the AWS Application Discovery Service agent for data collection on physical servers and Hyper-V. Use the AWS Agentless Discovery Connector for data collection on VMware. Store the collected data in Amazon S3. Query the data with Amazon Athena. Generate reports by using Amazon QuickSight.
  2. Use the AWS Application Discovery Service agent for data collection on physical servers and all VMs. Store the collected data in Amazon Elastic File System (Amazon EFS). Query the data and generate reports with Amazon Athena.
  3. Use the AWS Systems Manager agent for data collection on physical servers. Use the AWS Agentless Discovery Connector for data collection on all VMs. Store, query, and generate reports from the collected data by using Amazon Redshift.
  4. Use the AWS Agentless Discovery Connector for data collection on physical servers and all VMs. Store the collected data in Amazon S3. Query the data with S3 Select. Generate reports by using Kibana hosted on Amazon EC2.
77
Q

A company uses AWS CodePipeline to manage an application that runs on Amazon EC2 instances in an Auto Scaling group. All AWS resources are defined in CloudFormation templates. Application code is stored in an Amazon S3 bucket and installed at launch time using lifecycle hooks with EventBridge and AWS Lambda. Recent changes in the CloudFormation templates have resulted in issues that have caused outages and management requires a solution to ensure this situation is not repeated.

What should a Solutions Architect do to reduce the likelihood that future changes in the templates will cause downtime?

  1. Use AWS CodeBuild for automated testing. Use CloudFormation change sets to evaluate changes ahead of deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns.
  2. Use AWS CodeBuild to detect and report CloudFormation error conditions when performing deployments. Deploy updates to a separate stack in a test account and use manual test plans to validate the changes.
  3. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the lifecycle hooks. Gather feedback from users to identify issues that may require a rollback.
  4. Move the application code to AWS CodeCommit. Use CodeBuild to validate the application code and automate testing. Use CloudFormation StackSets to deploy updates to different environments to leverage a blue/green deployment pattern.
A
  1. Use AWS CodeBuild for automated testing. Use CloudFormation change sets to evaluate changes ahead of deployment. Use AWS CodeDeploy to leverage blue/green deployment patterns.
  2. Use AWS CodeBuild to detect and report CloudFormation error conditions when performing deployments. Deploy updates to a separate stack in a test account and use manual test plans to validate the changes.
  3. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the lifecycle hooks. Gather feedback from users to identify issues that may require a rollback.
  4. Move the application code to AWS CodeCommit. Use CodeBuild to validate the application code and automate testing. Use CloudFormation StackSets to deploy updates to different environments to leverage a blue/green deployment pattern.
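Note: a minimal boto3 sketch of the change-set review step mentioned in option 1; the stack name, change set name, and template file are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation")

# A change set previews exactly which resources an update would add, modify,
# or replace before anything is executed.
cfn.create_change_set(
    StackName="web-app-stack",                  # placeholder stack name
    ChangeSetName="release-2024-06-01",         # placeholder change set name
    TemplateBody=open("template.yaml").read(),  # updated template under review
    ChangeSetType="UPDATE",
    Capabilities=["CAPABILITY_IAM"],
)

waiter = cfn.get_waiter("change_set_create_complete")
waiter.wait(StackName="web-app-stack", ChangeSetName="release-2024-06-01")

changes = cfn.describe_change_set(
    StackName="web-app-stack", ChangeSetName="release-2024-06-01"
)
for change in changes["Changes"]:
    detail = change["ResourceChange"]
    print(detail["Action"], detail["LogicalResourceId"], detail.get("Replacement"))

# Only after review:
# cfn.execute_change_set(StackName="web-app-stack", ChangeSetName="release-2024-06-01")
```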
78
Q

A company runs a single application in an AWS account. The application uses an Auto Scaling Group of Amazon EC2 instances with a combination of Reserved Instances (RIs) and On-Demand instances. To maintain cost-effectiveness the RIs should cover 70% of the workload. The solution should include the ability to alert the DevOps team if coverage drops below the 70% threshold.

Which set of steps should a Solutions Architect take to create the report and alert the DevOps team?

  1. Use the AWS Billing and Cost Management console to create a reservation budget for RI utilization and set the utilization to 70%. Configure an alert that notifies the DevOps team.
  2. Use AWS Cost Explorer to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.
  3. Use AWS Budgets to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.
  4. Use AWS Cost Explorer to configure a report for RI utilization and set the utilization target to 70%. Configure an alert that notifies the DevOps team.
A
  1. Use the AWS Billing and Cost Management console to create a reservation budget for RI utilization and set the utilization to 70%. Configure an alert that notifies the DevOps team.
  2. Use AWS Cost Explorer to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.
  3. Use AWS Budgets to create a budget for RI coverage and set the threshold to 70%. Configure an alert that notifies the DevOps team.
  4. Use AWS Cost Explorer to configure a report for RI utilization and set the utilization target to 70%. Configure an alert that notifies the DevOps team.
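Note: a rough boto3 sketch of an RI coverage budget with a below-70% alert, as described in option 3. The account ID and SNS topic are placeholders, and the exact value constraints for coverage budgets (the unit string and any required cost filters) are assumptions to verify:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",   # placeholder payer/management account ID
    Budget={
        "BudgetName": "ri-coverage-70",
        "BudgetType": "RI_COVERAGE",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "70", "Unit": "PERCENTAGE"},  # assumed unit string
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "LESS_THAN",
            "Threshold": 70,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:devops-alerts",  # placeholder
        }],
    }],
)
```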
79
Q

A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateless application. The application connects to a PostgreSQL database running on a separate server. A Solutions Architect is planning a migration to AWS. The company requires that the application and database layer must be highly available across three availability zones.

Which solution will meet the company’s requirements?

  1. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind an Application Load Balancer. Create an Amazon Aurora PostgreSQL database in one AZ and add Aurora Replicas in two more AZs.
  2. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind an Application Load Balancer. Create an Amazon Aurora Global database.
  3. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind a Network Load Balancer. Create an Amazon RDS Multi-AZ PostgreSQL database in one AZ and standby instances in two more AZs.
  4. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind a Network Load Balancer. Create an Amazon Aurora PostgreSQL database in one AZ with storage auto scaling enabled.
A
  1. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind an Application Load Balancer. Create an Amazon Aurora PostgreSQL database in one AZ and add Aurora Replicas in two more AZs.
  2. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind an Application Load Balancer. Create an Amazon Aurora Global database.
  3. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind a Network Load Balancer. Create an Amazon RDS Multi-AZ PostgreSQL database in one AZ and standby instances in two more AZs.
  4. Create an Auto Scaling group of Amazon EC2 instances across three availability zones behind a Network Load Balancer. Create an Amazon Aurora PostgreSQL database in one AZ with storage auto scaling enabled.
80
Q

An application currently runs on Amazon EC2 instances in a single Availability Zone. A Solutions Architect has been asked to re-architect the solution to make it highly available and secure. The security team has requested that all inbound requests are filtered for common vulnerability attacks and all rejected requests must be sent to a third-party auditing application.

Which solution meets the high availability and security requirements?

  1. Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.
  2. Configure a Multi-AZ Auto Scaling group using the application’s AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the WebACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
  3. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application.
  4. Configure an Application Load Balancer (ALB) along with a target group adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
A
  1. Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.
  2. Configure a Multi-AZ Auto Scaling group using the application’s AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the WebACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
  3. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application.
  4. Configure an Application Load Balancer (ALB) along with a target group adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
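Note: a minimal boto3 sketch of wiring WAF logging to a Kinesis Data Firehose delivery stream, the logging piece described in options 2 and 4; both ARNs are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")

# The Firehose delivery stream name must begin with "aws-waf-logs-" and its
# destination would be the third-party auditing application.
firehose_arn = "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-audit"  # placeholder
web_acl_arn = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/EXAMPLE-ID"     # placeholder

wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": web_acl_arn,
        "LogDestinationConfigs": [firehose_arn],
    }
)
```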
81
Q

A company is running several development projects. Developers are assigned to a single project but move between projects frequently. Each project team requires access to different AWS resources.

Currently, there are projects for serverless, analytics, and database development. The resources used within each project can change over time. Developers require full control over the project they are assigned to and no access to the other projects.

When developers are assigned to a different project or new AWS resources are added, the company wants to minimize policy maintenance.

What type of control policy should a Solutions Architect recommend?

  1. Create an IAM role for each project that requires access to AWS resources. Attach an inline policy document to the role that specifies the IAM users that are allowed to assume the role, with full control of the resources that belong to the project. Update the policy document when the set of resources changes, or developers change projects.
  2. Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when developers change projects. Update the policy document when the set of resources changes.
  3. Create a policy document for each project with specific project tags and allow full control of the resources with a matching tag. Attach the project-specific policy document to the IAM role for that project. Change the role assigned to the developer’s IAM user when they change projects. Assign a specific project tag to new resources when they are created.
  4. Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to the developer’s IAM user when they change projects. Update the policy document when the set of resources changes.
A
  1. Create an IAM role for each project that requires access to AWS resources. Attach an inline policy document to the role that specifies the IAM users that are allowed to assume the role, with full control of the resources that belong to the project. Update the policy document when the set of resources changes, or developers change projects.
  2. Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when developers change projects. Update the policy document when the set of resources changes.
  3. Create a policy document for each project with specific project tags and allow full control of the resources with a matching tag. Attach the project-specific policy document to the IAM role for that project. Change the role assigned to the developer’s IAM user when they change projects. Assign a specific project tag to new resources when they are created.
  4. Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to the developer’s IAM user when they change projects. Update the policy document when the set of resources changes.
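Note: a minimal boto3 sketch of the tag-based (attribute-based access control) policy described in option 3; the tag key/value and policy name are hypothetical, and not every AWS action supports the aws:ResourceTag condition key:

```python
import json
import boto3

iam = boto3.client("iam")

# Grant full control only over resources carrying the project tag; the policy
# never needs editing when resources change, only the resource tags do.
analytics_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "FullControlOfTaggedResources",
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/Project": "analytics"}  # hypothetical tag key/value
        },
    }]
}

iam.create_policy(
    PolicyName="analytics-project-access",
    PolicyDocument=json.dumps(analytics_policy),
)
# The policy would then be attached to the analytics project role; a developer
# changing projects simply gets a different role assigned.
```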
82
Q

A Solutions Architect has deployed a REST API using an Amazon API Gateway Regional endpoint. The API will be consumed by a growing number of US-based companies. Each company will use the API twice each day to get the latest data.

Following the deployment of the API the operations team noticed thousands of requests coming from hundreds of IP addresses around the world. The traffic is believed to be originating from a botnet. The Solutions Architect must secure the API while minimizing cost.

Which approach should the company take to secure its API?

  1. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
  2. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
  3. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method.
  4. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method.
A
  1. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a resource policy with a request limit and associate it with the API. Configure the API to require an API key on the POST method.
  2. Create an AWS WAF web ACL with a rule to allow access to the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
  3. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Add a custom header to the CloudFront distribution populated with an API key. Configure the API to require an API key on the GET method.
  4. Create an Amazon CloudFront distribution with the API as the origin. Create an AWS WAF web ACL with a rule to block clients that submit more than ten requests per day. Associate the web ACL with the CloudFront distribution. Configure CloudFront with an origin access identity (OAI) and associate it with the distribution. Configure API Gateway to ensure only the OAI can execute the GET method.
83
Q

An S3 endpoint has been created in an Amazon VPC. A staff member assumed an IAM role and attempted to download an object from a bucket using the endpoint. The staff member received the error message “403: Access Denied”. The bucket is encrypted using an AWS KMS key. A Solutions Architect has verified that the staff member assumed the correct IAM role and the role does allow the object to be downloaded. The bucket policy and NACL are also valid.

Which additional step should the Solutions Architect take to troubleshoot this issue?

  1. Check that local firewall rules are not preventing access to the S3 endpoint.
  2. Ensure that blocking all public access has not been enabled in the S3 bucket.
  3. Verify that the IAM role has permission to decrypt the referenced KMS key.
  4. Verify that the IAM role has the correct trust relationship configured.
A
  1. Check that local firewall rules are not preventing access to the S3 endpoint.
  2. Ensure that blocking all public access has not been enabled in the S3 bucket.
  3. Verify that the IAM role has permission to decrypt the referenced KMS key.
  4. Verify that the IAM role has the correct trust relationship configured.
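Note: a minimal boto3 sketch of granting the decrypt permission checked in option 3; the role name, policy name, and CMK ARN are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

kms_access = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowDecryptOfBucketKey",
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:DescribeKey"],
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder CMK ARN
    }]
}

# Without kms:Decrypt on the bucket's CMK, GetObject on an SSE-KMS object
# returns 403 Access Denied even when the S3 permissions themselves are correct.
iam.put_role_policy(
    RoleName="staff-download-role",        # hypothetical role name
    PolicyName="kms-decrypt-s3-objects",
    PolicyDocument=json.dumps(kms_access),
)
```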
84
Q

An Amazon RDS database was created with encryption enabled using an AWS managed CMK. The database has been reclassified and no longer requires encryption. How can a Solutions Architect remove encryption from the database with the LEAST operational overhead?

  1. Disable encryption by running the CreateDBInstance API operation and setting the StorageEncrypted parameter to false.
  2. Create an unencrypted read replica of the encrypted DB instance and then promote the read replica to primary.
  3. Export the data from the DB instance and import the data into an unencrypted DB instance.
  4. Create an unencrypted snapshot of the DB instance and create a new unencrypted DB instance from the snapshot.
A
  1. Disable encryption by running the CreateDBInstance API operation and setting the StorageEncrypted parameter to false.
  2. Create an unencrypted read replica of the encrypted DB instance and then promote the read replica to primary.
  3. Export the data from the DB instance and import the data into an unencrypted DB instance.
  4. Create an unencrypted snapshot of the DB instance and create a new unencrypted DB instance from the snapshot.
85
Q

An online retailer is updating its catalogue of products. The retailer has a dynamic website which uses EC2 instances for web and application servers. The web tier is behind an Application Load Balancer and the application tier stores data in an Amazon Aurora MySQL database. There is additionally a lot of static content and most website traffic is read-only.

The company is expecting a large spike in traffic to the website when the new catalogue is launched and optimal performance is a high priority.

Which combination of steps should a Solutions Architect take to reduce system response times for a global audience? (Select TWO.)

  1. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets.
  2. Use Amazon Route 53 with a latency-based routing policy. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions.
  3. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Set up an AWS Direct Connect connection.
  4. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions, and create an Amazon SQS queue.
  5. Configure an Aurora global database for storage-based cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources and create Amazon CloudFront distributions.
A
  1. Use logical cross-Region replication to replicate the Aurora MySQL database to a secondary Region. Replace the web servers with Amazon S3. Configure cross-Region replication for the S3 buckets.
  2. Use Amazon Route 53 with a latency-based routing policy. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions.
  3. Create Auto Scaling groups for the web and application tiers and deploy them in multiple global Regions. Set up an AWS Direct Connect connection.
  4. Migrate the database from Amazon Aurora to Amazon RDS for MySQL. Replace the web and application tiers with AWS Lambda functions and create an Amazon SQS queue.
  5. Configure an Aurora global database for storage-based cross-Region replication. Use Amazon S3 with cross-Region replication for static content and resources and create Amazon CloudFront distributions.
86
Q

A company runs several IT services in an on-premises data center that is connected to AWS using an AWS Direct Connect (DX) connection. The service data is sensitive and the company uses an IPsec VPN over the DX connection to encrypt data. Security requirements mandate that the data cannot traverse the internet. The company wants to offer the IT services to other companies who use AWS.

Which solution will meet these requirements?

  1. Create a VPC Endpoint Service that accepts TCP traffic and hosts it behind a Network Load Balancer. Enable access to the IT services over the DX connection.
  2. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic and hosts it behind an Application Load Balancer. Enable access to the IT services over the DX connection.
  3. Configure a mesh of AWS VPN CloudHub IPsec VPN connections between the customer AWS accounts and the service provider AWS account.
  4. Attach an internet gateway to the VPC and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
A
  1. Create a VPC Endpoint Service that accepts TCP traffic and hosts it behind a Network Load Balancer. Enable access to the IT services over the DX connection.
  2. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic and hosts it behind an Application Load Balancer. Enable access to the IT services over the DX connection.
  3. Configure a mesh of AWS VPN CloudHub IPsec VPN connections between the customer AWS accounts and the service provider AWS account.
  4. Attach an internet gateway to the VPC and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
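As an illustration of the VPC Endpoint Service approach mentioned in options 1 and 2, a hedged boto3 sketch that exposes a Network Load Balancer as an AWS PrivateLink endpoint service and allows a consumer account to connect; the ARNs and account IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical ARN of the Network Load Balancer fronting the IT services.
NLB_ARN = ("arn:aws:elasticloadbalancing:eu-west-1:111122223333:"
           "loadbalancer/net/it-services-nlb/50dc6c495c0c9188")

# Expose the services as a VPC endpoint service (AWS PrivateLink).
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,  # the provider approves each consumer connection request
    NetworkLoadBalancerArns=[NLB_ARN],
)["ServiceConfiguration"]
print("Service name:", service["ServiceName"])

# Allow a specific consumer account to create interface endpoints to the service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],
)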
87
Q

A company leases data center space in a colocation facility and needs to move out before the end of the financial year in 90 days. The company currently runs 150 virtual machines and a NAS device that holds over 50 TB of data. Access patterns for the data are infrequent but when access is required it must be immediate. The VM configurations are highly customized. The company has a 1 Gbps internet connection which is mostly idle and almost completely unused outside of business hours.

Which combination of steps should a Solutions Architect take to migrate the VMs to AWS with minimal downtime and operational impact? (Select TWO.)

  1. Migrate the NAS data to AWS using AWS Storage Gateway.
  2. Launch new Amazon EC2 instances and reinstall all applications.
  3. Copy infrequently accessed data from the NAS using AWS SMS.
  4. Migrate the virtual machines with AWS SMS.
  5. Migrate the NAS data to AWS using AWS Snowball.
A
  1. Migrate the NAS data to AWS using AWS Storage Gateway.
  2. Launch new Amazon EC2 instances and reinstall all applications.
  3. Copy infrequently accessed data from the NAS using AWS SMS.
  4. Migrate the virtual machines with AWS SMS.
  5. Migrate the NAS data to AWS using AWS Snowball.
88
Q

A company is closing an on-premises data center and needs to move some business applications to AWS. There are over 100 applications that run on virtual machines in the data center. The applications are simple PHP, Java, Ruby, and Node.js web applications that are no longer actively developed and are not heavily utilized.

A Solutions Architect must determine the best approach to migrate these applications to AWS with the LOWEST operational overhead.

Which method best fits these requirements?

  1. Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer.
  2. Use Amazon EBS cross-Region replication to create an AMI for each application, run the AMI on Amazon EC2.
  3. Deploy each application to a single-instance AWS Elastic Beanstalk environment without a load balancer.
  4. Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2.
A
  1. Refactor the applications to Docker containers and deploy them to an Amazon ECS cluster behind an Application Load Balancer.
  2. Use Amazon EBS cross-Region replication to create an AMI for each application, run the AMI on Amazon EC2.
  3. Deploy each application to a single-instance AWS Elastic Beanstalk environment without a load balancer.
  4. Use AWS SMS to create an AMI for each virtual machine, run the AMI on Amazon EC2.
89
Q

A Solutions Architect has been tasked with migrating an application to AWS. The application includes a desktop client application and web application. The web application has an uptime SLA of 99.95%. The Solutions Architect must re-architect the application to meet or exceed this SLA.

The application contains a MySQL database running on a single virtual machine. The web application uses multiple virtual machines with a load balancer. Remote users complain about slow load times while using this latency-sensitive application.

The Solutions Architect must minimize changes to the application whilst improving the user experience, minimizing costs, and ensuring the availability requirements are met. Which solution best meets these requirements?

  1. Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
  2. Migrate the database to an Amazon RDS MySQL Multi-AZ configuration. Host the web application on automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
  3. Migrate the database to a MySQL database in Amazon EC2. Host the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
  4. Migrate the database to an Amazon EMR cluster with at least two nodes. Deploy the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
A
  1. Migrate the database to an Amazon RDS Aurora MySQL configuration. Host the web application on an Auto Scaling configuration of Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
  2. Migrate the database to an Amazon RDS MySQL Multi-AZ configuration. Host the web application on automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
  3. Migrate the database to a MySQL database in Amazon EC2. Host the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
  4. Migrate the database to an Amazon EMR cluster with at least two nodes. Deploy the web application on automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
90
Q

A company has created a fitness tracking mobile app that uses a serverless REST API. The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda functions and an Amazon Aurora MySQL database cluster. The company recently secured a deal with a sports company to promote the new app which resulted in a significant increase in the number of requests received.

Unfortunately, the increase in traffic resulted in sporadic database memory errors and performance degradation. The traffic included significant numbers of HTTP requests querying the same data in short bursts of traffic during weekends and holidays.

The company needs to improve its ability to support the additional usage while minimizing the increase in costs associated with the solution.

Which strategy meets these requirements?

  1. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
  2. Create usage plans in API Gateway and distribute API keys to clients. Configure metered access to the production stage.
  3. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
  4. Modify the instance type of the Aurora database cluster to use an instance with more memory.
A
  1. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
  2. Create usage plans in API Gateway and distribute API keys to clients. Configure metered access to the production stage.
  3. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
  4. Modify the instance type of the Aurora database cluster to use an instance with more memory.
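To illustrate the caching strategy described in option 3, a minimal read-through cache sketch for an AWS Lambda function, assuming a Redis endpoint supplied in an environment variable and a hypothetical activity table; it is not the app's actual code.

import json
import os

import redis  # assumes the redis-py client is packaged with the function

# Placeholder ElastiCache endpoint supplied via an environment variable.
cache = redis.Redis(host=os.environ["REDIS_HOST"], port=6379, decode_responses=True)


def get_activity_summary(user_id, db_conn):
    """Read-through cache: return the cached result if present, otherwise query Aurora."""
    key = f"activity:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # Hypothetical table/column names; db_conn is an existing Aurora MySQL connection.
    with db_conn.cursor() as cur:
        cur.execute("SELECT SUM(steps) AS steps FROM activity WHERE user_id = %s", (user_id,))
        row = cur.fetchone()

    # Cache briefly so the repeated weekend/holiday bursts hit Redis, not the database.
    cache.set(key, json.dumps(row), ex=300)
    return row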
91
Q

A company recently migrated a high-traffic eCommerce website to the AWS Cloud. The website is experiencing strong growth. Developers use a private GitHub repository to manage code and the DevOps team use Jenkins for builds and unit testing.

The Developers need to receive notifications when a build does not work and ensure there is no downtime during deployments. It is also required that any changes to production are seamless for users and can be easily rolled back if a significant issue occurs.

A Solutions Architect is finalizing the design for the environment and will use AWS CodePipeline to manage the build and deployment process. What other steps should be taken to meet the requirements?

  1. Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
  2. Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
  3. Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
  4. Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
A
  1. Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
  2. Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
  3. Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
  4. Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
92
Q

A Solutions Architect must enable AWS CloudHSM M of N access control, also known as quorum authentication, to allow security officers to make administrative changes to a hardware security module (HSM). The new security policy states that at least two of the four security officers must authorize any administrative changes to CloudHSM. This is the first time this configuration has been set up. Which steps must be taken to enable quorum authentication? (Select TWO.)

  1. Edit the cloudhsm_clientcfg document to import a key and register the key for signing.
  2. Use AWS IAM to create a policy that requires a minimum of three crypto officers (COs) to configure the minimum number of approvals required to perform HSM user management operations.
  3. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and set the Quorum minimum value to two using the setMValue command.
  4. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and register a key for signing with the registerMofnPubKey command.
  5. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and get a Quorum token with the getToken command.
A
  1. Edit the cloudhsm_clientcfg document to import a key and register the key for signing.
  2. Use AWS IAM to create a policy that requires a minimum of three crypto officers (COs) to configure the minimum number of approvals required to perform HSM user management operations.
  3. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and set the Quorum minimum value to two using the setMValue command.
  4. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and register a key for signing with the registerMofnPubKey command.
  5. Using the cloudhsm_mgmt_util command line tool, enable encrypted communication, login as a CO, and get a Quorum token with the getToken command.
93
Q

A mobile app has become extremely popular with global usage increasing to millions of users. The app allows users to capture and upload funny images of animals and add captions. The current application runs on Amazon EC2 instances with Amazon EFS storage behind an Application Load Balancer. The data access patterns are unpredictable and during peak periods the application has experienced performance issues.

Which changes should a Solutions Architect make to the application architecture to control costs and improve performance?

  1. Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class.
  2. Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images.
  3. Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon FSx for Windows File Server. Use an AWS Lambda function to reduce image size during the migration process.
  4. Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB.
A
  1. Create an Amazon CloudFront distribution and place the ALB behind the distribution. Store static content in Amazon S3 in an Infrequent Access storage class.
  2. Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and AWS Lambda for processing the images.
  3. Place AWS Global Accelerator in front of the ALB. Migrate the static content to Amazon FSx for Windows File Server. Use an AWS Lambda function to reduce image size during the migration process.
  4. Use an Amazon S3 bucket for static images and use the Intelligent Tiering storage class. Use an Amazon CloudFront distribution in front of the S3 bucket and the ALB.
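As a small illustration of the Intelligent-Tiering idea referenced in options 2 and 4, a boto3 sketch that uploads a static image directly into the S3 Intelligent-Tiering storage class; the bucket and key names are hypothetical.

import boto3

s3 = boto3.client("s3")

BUCKET = "example-funny-animal-images"  # hypothetical bucket name

# Upload a static image directly into the Intelligent-Tiering storage class so that
# objects with unpredictable access patterns move between access tiers automatically.
s3.upload_file(
    Filename="cat-with-caption.jpg",
    Bucket=BUCKET,
    Key="images/cat-with-caption.jpg",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING", "ContentType": "image/jpeg"},
)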
94
Q

A company requires an application in which employees can log expense claims for processing. The expense claims are typically submitted each week on a Friday. The application must store data in a format that will allow the finance team to be able to run end of month reports. The solution should be highly available and must scale seamlessly based on demand.

Which combination of solution options meets these requirements with the LEAST operational overhead? (Select TWO.)

  1. Store the expense claim data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the reports using Amazon S3 as the data source.
  2. Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application backend using Amazon API Gateway with an AWS Lambda proxy integration.
  3. Deploy the application to Amazon EC2 On-Demand Instances behind an Application Load Balancer. Use Amazon EC2 Auto Scaling and schedule additional capacity ahead of peak usage periods.
  4. Deploy the application to Amazon EC2 On-Demand Instances behind an Application Load Balancer. Use Amazon EC2 Auto Scaling and schedule additional capacity ahead of peak usage periods.
  5. Store the expense claim data in Amazon EMR. Use Amazon QuickSight to generate the reports using Amazon EMR as the data source.
A
  1. Store the expense claim data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the reports using Amazon S3 as the data source.
  2. Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application backend using Amazon API Gateway with an AWS Lambda proxy integration.
  3. Deploy the application to Amazon EC2 On-Demand Instances behind an Application Load Balancer. Use Amazon EC2 Auto Scaling and schedule additional capacity ahead of peak usage periods.
  4. Deploy the application to Amazon EC2 On-Demand Instances behind an Application Load Balancer. Use Amazon EC2 Auto Scaling and schedule additional capacity ahead of peak usage periods.
  5. Store the expense claim data in Amazon EMR. Use Amazon QuickSight to generate the reports using Amazon EMR as the data source.
95
Q

A company wants to host a web application on AWS. The application will be used by users around the world. A Solutions Architect has been given the following design requirements:

· Allow the retrieval of data from multiple data sources.

· Minimize the cost of API calls.

· Reduce latency for user access.

· Provide user authentication and authorization and implement role-based access control.

· Implement a fully serverless solution.

How can the Solutions Architect meet these requirements?

  1. Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles.
  2. Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.
  3. Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD.
  4. Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers.
A
  1. Use Amazon CloudFront with Amazon EC2 to host the web application. Use Amazon API Gateway to build the application APIs. Use AWS Lambda for custom authentication and authorization. Authorize data access by leveraging IAM roles.
  2. Use Amazon CloudFront with Amazon S3 to host the web application. Use AWS AppSync to build the application APIs. Use Amazon Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AWS AppSync resolvers.
  3. Use Amazon CloudFront with Amazon S3 to host the web application. Use Amazon API Gateway to build the application APIs with AWS Lambda for the custom authorizer. Authorize data access by performing user lookup in AWS Managed Microsoft AD.
  4. Use Amazon CloudFront with Amazon FSx to host the web application. Use AWS AppSync to build the application APIs. Use IAM groups for RBAC. Authorize data access by leveraging IAM groups in AWS AppSync resolvers.
96
Q

A company is creating a multi-account structure using AWS Organizations. The accounts will include the Management account, Production account, and Development account. The company requires auditing for all API actions across accounts. A Solutions Architect is advising the company on how to configure the accounts. Which of the following recommendations should the Solutions Architect make? (Select TWO.)

  1. Create user accounts in the Production and Development accounts.
  2. Create user accounts in the Management account and use cross-account access to access resources in the Production and Development accounts.
  3. Enable AWS CloudTrail and keep all CloudTrail trails and logs within each account.
  4. Enable AWS CloudTrail and keep all CloudTrail trails and logs in the Management account.
  5. Create all resources in the Management account and grant access to the Production and Development accounts.
A
  1. Create user accounts in the Production and Development accounts.
  2. Create user accounts in the Management account and use cross-account access to access resources in the Production and Development accounts.
  3. Enable AWS CloudTrail and keep all CloudTrail trails and logs within each account.
  4. Enable AWS CloudTrail and keep all CloudTrail trails and logs in the Management account.
  5. Create all resources in the Management account and grant access to the Production and Development accounts.
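For reference, a hedged boto3 sketch of an organization-wide CloudTrail trail as described in option 4; the trail and bucket names are placeholders and the bucket must already grant CloudTrail write access.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Hypothetical names; the S3 bucket must already exist with a policy allowing CloudTrail writes.
trail = cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-org-cloudtrail-logs",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,  # capture API activity from every member account
)
cloudtrail.start_logging(Name=trail["TrailARN"])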
97
Q

A company runs a two-tier application that uses EBS-backed Amazon EC2 instances in an Auto Scaling group and an Amazon Aurora PostgreSQL database. The company intends to use a pilot light approach for disaster recovery in a different AWS Region. The company has an RTO of 6 hours and an RPO of 24 hours.

Which solution would achieve the requirements with MINIMAL cost?

  1. Use AWS Lambda to create daily EBS and RDS snapshots and copy them to the disaster recovery Region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group configured the same as the primary Region.
  2. Use EBS cross-region snapshot copy capability to create snapshots in the disaster recovery (DR) Region. Implement an Aurora Replica in the DR Region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group configured the same as the primary Region.
  3. Use AWS Lambda to create daily EBS snapshots and copy them to the disaster recovery Region. Implement an Aurora Replica in the DR Region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery Region.
  4. Use EBS and RDS cross-Region snapshot copy capability to create snapshots in the disaster recovery (DR) Region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery Region.
A
  1. Use AWS Lambda to create daily EBS and RDS snapshots and copy them to the disaster recovery Region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group configured the same as the primary Region.
  2. Use EBS cross-region snapshot copy capability to create snapshots in the disaster recovery (DR) Region. Implement an Aurora Replica in the DR Region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group configured the same as the primary Region.
  3. Use AWS Lambda to create daily EBS snapshots and copy them to the disaster recovery Region. Implement an Aurora Replica in the DR Region. Use Amazon Route 53 with an active-passive failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery Region.
  4. Use EBS and RDS cross-Region snapshot copy capability to create snapshots in the disaster recovery (DR) Region. Use Amazon Route 53 with an active-active failover configuration. Use Amazon EC2 in an Auto Scaling group with the capacity set to 0 in the disaster recovery Region.
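To illustrate the cross-Region snapshot copy mechanism referenced in several of these options, a short boto3 sketch that copies an EBS snapshot into a disaster recovery Region; the Regions and snapshot ID are assumptions.

import boto3

# Client in the disaster recovery Region; the copied snapshot lands in this Region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")

response = ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",                   # assumed primary Region
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    Description="Daily DR copy of the web tier EBS snapshot",
)
print("DR snapshot:", response["SnapshotId"])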
98
Q

A company captures financial transactions in Amazon DynamoDB tables. The security team is concerned about identifying fraudulent behavior and has requested that all changes to items stored in DynamoDB tables must be logged within 30 minutes.

How can a Solutions Architect meet this requirement?

  1. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.
  2. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
  3. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.
  4. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.
A
  1. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.
  2. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
  3. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.
  4. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.
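As an illustration of the DynamoDB Streams pipeline described in option 2, a minimal AWS Lambda handler sketch that forwards stream change records to a Kinesis data stream; the stream name is hypothetical.

import json

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "transaction-changes"  # hypothetical Kinesis data stream


def handler(event, context):
    """Triggered by a DynamoDB stream; forwards each change record to Kinesis."""
    for record in event["Records"]:
        change = {
            "eventName": record["eventName"],  # INSERT / MODIFY / REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "newImage": record["dynamodb"].get("NewImage"),
            "oldImage": record["dynamodb"].get("OldImage"),
        }
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(change),
            PartitionKey=json.dumps(change["keys"]),
        )
    return {"processed": len(event["Records"])}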
99
Q

A company runs a web application in an on-premises data center in Paris. The application includes stateless web servers behind a load balancer, shared files in a NAS device, and a MySQL database server. The company plans to migrate the solution to AWS and has the following requirements:

· Provide optimum performance for customers.

· Implement elastic scalability for the web tier.

· Optimize the database server performance for read-heavy workloads.

· Reduce latency for users across Europe and the US.

· Design the new architecture with a 99.9% availability SLA.

Which solution should a Solutions Architect propose to meet these requirements while optimizing operational efficiency?

  1. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe. Configure EFS cross-Region replication.
  2. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe.
  3. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a price class that includes the US and Europe.
  4. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes all global locations.
A
  1. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and two Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe. Configure EFS cross-Region replication.
  2. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon ElastiCache cluster in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes the US and Europe.
  3. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in two AWS Regions and three Availability Zones in each Region. Configure an Amazon ElastiCache cluster in front of a global Amazon Aurora MySQL database. Move the shared files to Amazon FSx with cross-Region synchronization. Configure Amazon CloudFront with the ALB as the origin and a price class that includes the US and Europe.
  4. Use an Application Load Balancer (ALB) in front of an Auto Scaling group of Amazon EC2 instances in one AWS Region and three Availability Zones. Configure an Amazon DocumentDB table in front of a Multi-AZ Amazon Aurora MySQL DB cluster. Move the shared files to Amazon EFS. Configure Amazon CloudFront with the ALB as the origin and select a price class that includes all global locations.
100
Q

A company runs an eCommerce web application on a pair of Amazon EC2 instances behind an Application Load Balancer. The application stores data in an Amazon DynamoDB table. Traffic has been increasing due to major sales events, and read and write performance has slowed considerably during the busiest periods.

Which option provides a scalable application architecture to handle peak traffic loads with the LEAST development effort?

  1. Use AWS Lambda for the web application. Increase the read and write capacity of DynamoDB.
  2. Use Auto Scaling groups for the web application and use DynamoDB auto scaling.
  3. Use Auto Scaling groups for the web application and use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.
  4. Use AWS Lambda for the web application. Configure DynamoDB to use global tables.
A
  1. Use AWS Lambda for the web application. Increase the read and write capacity of DynamoDB.
  2. Use Auto Scaling groups for the web application and use DynamoDB auto scaling.
  3. Use Auto Scaling groups for the web application and use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.
  4. Use AWS Lambda for the web application. Configure DynamoDB to use global tables.
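For reference, a hedged boto3 sketch of enabling DynamoDB auto scaling (option 2) with Application Auto Scaling target tracking on write capacity; the table name and capacity limits are placeholders.

import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE = "table/Orders"  # hypothetical table name

# Register write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target tracking keeps consumed write capacity near 70% of what is provisioned.
autoscaling.put_scaling_policy(
    PolicyName="orders-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)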
101
Q

A Solutions Architect is designing a publicly accessible web application that runs from an Amazon S3 website endpoint. The S3 website is the origin for an Amazon CloudFront distribution. After deploying the solution the operations team ran some tests and received an “Error 403: Access Denied” message when attempting to connect.

What should the Solutions Architect check to determine the root cause of the issue? (Select TWO.)

  1. Check if object lock is enabled for the objects in the S3 bucket.
  2. Check if the storage class for the objects is S3 Standard.
  3. Check if the S3 bucket is encrypted using AWS KMS.
  4. Check the object versioning status of the objects in the S3 bucket
  5. Check if the S3 block public access option is enabled on the S3 bucket
A
  1. Check if object lock is enabled for the objects in the S3 bucket.
  2. Check if the storage class for the objects is S3 Standard.
  3. Check if the S3 bucket is encrypted using AWS KMS.
  4. Check the object versioning status of the objects in the S3 bucket
  5. Check if the S3 block public access option is enabled on the S3 bucket
102
Q

A secure web application runs in an Amazon VPC that has a public subnet and a private subnet. An Application Load Balancer is deployed into the public subnet. Each subnet has a separate Network ACL. The public subnet CIDR range is 10.1.0.0/24 and the private subnet CIDR range is 10.1.1.0/24. The web application is deployed on Amazon EC2 instances in the private subnet. Which combination of rules should be defined on the private subnet’s Network ACL to allow access from internet-based clients?

(Select TWO.)

  1. An outbound rule for ports 1024 through 65535 to destination 10.1.0.0/24.
  2. An inbound rule for port 443 from source 0.0.0.0/0.
  3. An inbound rule for port 443 from source 10.1.0.0/24.
  4. An outbound rule for port 443 to destination 10.1.0.0/24.
  5. An outbound rule for port 443 to destination 0.0.0.0/0.
A
  1. An outbound rule for ports 1024 through 65535 to destination 10.1.0.0/24.
  2. An inbound rule for port 443 from source 0.0.0.0/0.
  3. An inbound rule for port 443 from source 10.1.0.0/24.
  4. An outbound rule for port 443 to destination 10.1.0.0/24.
  5. An outbound rule for port 443 to destination 0.0.0.0/0.
103
Q

A web application is being deployed on Amazon EC2 instances and requires that users authenticate before they can access content. The solution needs to be configured so that it is highly available. Once authenticated, users should remain connected even if an underlying instance fails.

Which solution will meet these requirements?

  1. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Save the authenticated connection details on Amazon EBS volumes.
  2. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Use AWS Secrets Manager to save the authenticated connection details.
  3. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Use Amazon DynamoDB to save the authenticated connection details.
  4. Use AWS Global Accelerator to forward requests to the EC2 instances. Use Amazon DynamoDB to save the authenticated connection details.
A
  1. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Save the authenticated connection details on Amazon EBS volumes.
  2. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Use AWS Secrets Manager to save the authenticated connection details.
  3. Create an Auto Scaling group for the EC2 instances and use an Application Load Balancer to direct incoming requests. Use Amazon DynamoDB to save the authenticated connection details.
  4. Use AWS Global Accelerator to forward requests to the EC2 instances. Use Amazon DynamoDB to save the authenticated connection details.
104
Q

A company runs a Java application on Amazon EC2 instances. The DevOps team uses a combination of Amazon CloudFormation and AWS OpsWorks to update the infrastructure and application stacks respectively. During recent updates the DevOps team reported service disruption issues that affected the Java application running on the Amazon EC2 instances.

Which solution will increase the reliability of application updates?

  1. Implement a blue/green deployment strategy.
  2. Implement CloudFormation StackSets.
  3. Implement CloudFormation change sets.
  4. Implement the canary release strategy.
A
  1. Implement a blue/green deployment strategy.
  2. Implement CloudFormation StackSets.
  3. Implement CloudFormation change sets.
  4. Implement the canary release strategy.
105
Q

An application runs on a fleet of Amazon ECS container instances and stores data in an Amazon S3 bucket. Until recently the application had been working well, but it has started to fail to upload objects to the S3 bucket. Server access logging has been enabled and 403 errors have been identified since the time of the fault. The ECS cluster has been set up according to best practices and no changes have been made to the S3 bucket policy or IAM roles used to access the bucket.

What is the most LIKELY cause of the failure?

  1. The ECS task execution role was modified.
  2. The ECS container instance IAM role was modified.
  3. The ECS tasks have insufficient memory assigned.
  4. The ECS service is inaccessible.
A
  1. The ECS task execution role was modified.
  2. The ECS container instance IAM role was modified.
  3. The ECS tasks have insufficient memory assigned.
  4. The ECS service is inaccessible.
106
Q

A company has an application that generates data exports which are saved as CSV files in an Amazon S3 bucket. The data is generally confidential and only accessed by IAM users. An individual CSV file must be shared with an external organization. A Solutions Architect used an IAM user account to attempt to perform a PUT Object call to enable a public ACL on the object and it failed with “insufficient permissions”.

What is the most likely cause of this issue?

  1. The object has a policy assigned that blocks all public access.
  2. The bucket has the BlockPublicAcls setting set to TRUE.
  3. The object ACL does not allow write permissions for the IAM user account.
  4. The bucket has the BlockPublicPolicy setting set to TRUE.
A
  1. The object has a policy assigned that blocks all public access.
  2. The bucket has the BlockPublicAcls setting set to TRUE.
  3. The object ACL does not allow write permissions for the IAM user account.
  4. The bucket has the BlockPublicPolicy setting set to TRUE.
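As a quick way to test the hypotheses in options 2 and 4, a boto3 sketch that reads the bucket's Block Public Access configuration; the bucket name is hypothetical.

import boto3

s3 = boto3.client("s3")

BUCKET = "example-csv-exports"  # hypothetical bucket name

# BlockPublicAcls=True causes S3 to reject any PUT request that includes a public ACL.
config = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
print("BlockPublicAcls:", config["BlockPublicAcls"])
print("BlockPublicPolicy:", config["BlockPublicPolicy"])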
107
Q

An application publishes data continuously to Amazon DynamoDB using an AWS Lambda function. The DynamoDB table has an auto scaling policy enabled with the target utilization set to 70%. There are short predictable periods in which a large volume of data is received and this can exceed the typical load by up to 300%. The AWS Lambda function writes ProvisionedThroughputExceededException messages to Amazon CloudWatch Logs during these times, and some records are redirected to the dead letter queue.

What change should the company make to resolve this issue?

  1. Use Application Auto Scaling to scale out write capacity on the DynamoDB table based on a schedule.
  2. Use Application Auto Scaling to set a step scaling policy to scale out write capacity on the DynamoDB table when load spikes reach a defined threshold.
  3. Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.
  4. Reduce the DynamoDB table auto scaling policy's target utilization to 50% to provide more resources for peak traffic periods.
A
  1. Use Application Auto Scaling to scale out write capacity on the DynamoDB table based on a schedule.
  2. Use Application Auto Scaling to set a step scaling policy to scale out write capacity on the DynamoDB table when load spikes reach a defined threshold.
  3. Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.
  4. Reduce the DynamoDB table auto scaling policy's target utilization to 50% to provide more resources for peak traffic periods.
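To illustrate the scheduled scaling approach in option 1, a boto3 sketch that registers a scheduled Application Auto Scaling action ahead of the predictable spike; the table name, schedule, and capacity values are assumptions, and the table must already be registered as a scalable target.

import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE = "table/SensorData"  # hypothetical table name (must already be a registered scalable target)

# Raise the minimum write capacity shortly before the predictable spike so the table
# is already scaled out when the burst of writes arrives.
autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="pre-spike-write-scale-out",
    ResourceId=TABLE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    Schedule="cron(45 7 * * ? *)",  # 07:45 UTC daily; the real spike window is an assumption
    ScalableTargetAction={"MinCapacity": 400, "MaxCapacity": 1000},
)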
108
Q

A company has a security policy that requires that all internal application connectivity must use private IP addresses. A Solutions Architect has created interface endpoints in private subnets to connect to AWS public services. The Solutions Architect tested the configuration and it failed due to the AWS service names being resolved to public IP addresses.

Which configuration change should the Solutions Architect make to resolve the issue?

  1. Enable the private DNS option on the VPC attributes.
  2. Update the route table for the subnets with a route to the interface endpoint.
  3. Configure the security group on the interface endpoint to allow connectivity to the AWS services.
  4. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application.
A
  1. Enable the private DNS option on the VPC attributes.
  2. Update the route table for the subnets with a route to the interface endpoint.
  3. Configure the security group on the interface endpoint to allow connectivity to the AWS services.
  4. Configure an Amazon Route 53 private hosted zone with a conditional forwarder for the internal application.
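As an illustration of the private DNS configuration discussed in option 1, a hedged boto3 sketch that creates an interface endpoint with private DNS enabled and turns on the VPC DNS attributes it depends on; all resource IDs and the example service name are placeholders.

import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0abc1234def567890"  # hypothetical VPC ID

# Private DNS relies on the VPC's DNS attributes being enabled (one attribute per call).
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})

# Create an interface endpoint whose private DNS name overrides the public service name,
# so calls to the AWS service resolve to private IP addresses inside the VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.eu-west-1.sqs",   # example AWS service endpoint
    SubnetIds=["subnet-0aaa1111bbb22222c"],      # hypothetical private subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow HTTPS from the application
    PrivateDnsEnabled=True,
)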
109
Q

An eCommerce website consists of a two-tier architecture. Amazon EC2 instances in an Auto Scaling group are used for the web server layer behind an Application Load Balancer (ALB). The web servers run a PHP application on Apache Tomcat. The database layer runs on an Aurora MySQL database instance.

Recently, a large sales event caused some errors to occur for customers when placing orders on the website. The operations team collected logs from the web servers and reviewed Aurora DB cluster performance metrics. Several web servers were terminated by the ASG before the logs could be collected and the Aurora metrics were not sufficient for query performance analysis.

Which combination of steps should a Solutions Architect take to improve application performance visibility during peak traffic events? (Select THREE.)

  1. Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena.
  2. Configure the Aurora MySQL DB cluster to generate error logs by setting parameters in the parameter group.
  3. Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the parameter group.
  4. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for PHP.
  5. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
  6. Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena.
A
  1. Configure AWS CloudTrail to collect API activity from Amazon EC2 and Aurora and analyze with Amazon Athena.
  2. Configure the Aurora MySQL DB cluster to generate error logs by setting parameters in the parameter group.
  3. Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the parameter group.
  4. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for PHP.
  5. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
  6. Configure an Amazon EventBridge rule that triggers Lambda upon Aurora error events and saves logs to Amazon S3 for analysis with Amazon Athena.
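For reference, a hedged boto3 sketch of enabling Aurora MySQL slow query and error logs (options 2 and 3) and exporting them to CloudWatch Logs; the parameter group and cluster names are placeholders.

import boto3

rds = boto3.client("rds")

PARAM_GROUP = "aurora-mysql-custom"      # hypothetical custom cluster parameter group
CLUSTER_ID = "ecommerce-aurora-cluster"  # hypothetical DB cluster identifier

# Turn on error and slow query logging via the cluster parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName=PARAM_GROUP,
    Parameters=[
        {"ParameterName": "slow_query_log", "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "long_query_time", "ParameterValue": "2", "ApplyMethod": "immediate"},
    ],
)

# Export the logs to CloudWatch Logs so they survive instance replacement.
rds.modify_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["error", "slowquery"]},
)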
110
Q

A security team has discovered that developers have been storing IAM secret access keys in AWS CodeCommit repositories. The security team requires that measures are put in place to automatically find and remediate all instances of this vulnerability on an ongoing basis.

Which solution meets these requirements?

  1. Create an Amazon Macie job that scans AWS CodeCommit repositories for credentials. If any credentials are found an AWS Lambda function should be triggered that disables the credentials.
  2. Run a cron job on an Amazon EC2 instance to check the CodeCommit repositories for unsecured credentials. If any unsecured credentials are found, generate new credentials and store them in AWS KMS.
  3. Use AWS Trusted Advisor to check for unsecured AWS credentials. If any unsecured credentials are found, use AWS Secrets Manager to rotate the credentials.
  4. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.
A
  1. Create an Amazon Macie job that scans AWS CodeCommit repositories for credentials. If any credentials are found an AWS Lambda function should be triggered that disables the credentials.
  2. Run a cron job on an Amazon EC2 instance to check the CodeCommit repositories for unsecured credentials. If any unsecured credentials are found, generate new credentials and store them in AWS KMS.
  3. Use AWS Trusted Advisor to check for unsecured AWS credentials. If any unsecured credentials are found, use AWS Secrets Manager to rotate the credentials.
  4. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.
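As a rough illustration of the CodeCommit trigger approach in option 4, a simplified AWS Lambda sketch that scans pushed blobs for IAM access key IDs; the event handling is abbreviated and the remediation step is only noted in a comment.

import re

import boto3

codecommit = boto3.client("codecommit")

# Pattern for IAM access key IDs (AKIA followed by 16 uppercase alphanumerics).
ACCESS_KEY_RE = re.compile(r"(?<![A-Z0-9])AKIA[0-9A-Z]{16}(?![A-Z0-9])")


def handler(event, context):
    """Scan the files referenced by a push event for exposed IAM access key IDs."""
    for record in event["Records"]:
        repo = record["eventSourceARN"].split(":")[5]
        for reference in record["codecommit"]["references"]:
            commit_id = reference["commit"]
            diffs = codecommit.get_differences(
                repositoryName=repo, afterCommitSpecifier=commit_id
            )["differences"]
            for diff in diffs:
                blob = diff.get("afterBlob")
                if not blob:
                    continue  # file was deleted in this commit
                content = codecommit.get_blob(
                    repositoryName=repo, blobId=blob["blobId"]
                )["content"].decode("utf-8", errors="ignore")
                for key_id in ACCESS_KEY_RE.findall(content):
                    print(f"Exposed access key {key_id} found in {blob['path']}")
                    # Remediation (deactivating the key with iam.update_access_key and
                    # notifying the owner) would follow here.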
111
Q

A company has several Amazon RDS databases, each with over 50 TB of data. Management has requested the ability to generate a weekly business report from the databases. The system should support ad-hoc SQL queries.

What is the MOST cost-effective solution for the Business Intelligence platform?

  1. Configure an AWS Glue crawler to crawl the databases and create tables in the AWS Glue Data Catalog. Create an AWS Glue ETL job that loads data from the RDS databases to Amazon S3. Use Amazon Athena to run the queries.
  2. Create an AWS Glue ETL job that copies data from the RDS databases to a single Amazon Aurora MySQL database. Run SQL queries on the Aurora MySQL database.
  3. Create an Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the query.
  4. Create an Amazon EMR cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon QuickSight to run the query.
A
  1. Configure an AWS Glue crawler to crawl the databases and create tables in the AWS Glue Data Catalog. Create an AWS Glue ETL job that loads data from the RDS databases to Amazon S3. Use Amazon Athena to run the queries.
  2. Create an AWS Glue ETL job that copies data from the RDS databases to a single Amazon Aurora MySQL database. Run SQL queries on the Aurora MySQL database.
  3. Create an Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the query.
  4. Create an Amazon EMR cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon QuickSight to run the query.
112
Q

An application runs across a fleet of Amazon EC2 instances in an Auto Scaling group. Application logs are collected from the EC2 instances using a cron job that is scheduled to run every 30 minutes. The cron job saves the log files to an Amazon S3 bucket. Failures and scaling events have caused some logs to be lost because instances have been terminated before the cron job could collect the log files.

Which of the following options is the MOST reliable way of collecting and preserving the log files?

  1. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the batch_count parameter to 1.
  2. Update the cron job to run every 5 minutes instead of every 30 minutes to reduce the possibility of log files being lost.
  3. Use Amazon CloudWatch Events to trigger Amazon Systems Manager Session Manager to run a batch script that collects the log files.
  4. Use Amazon CloudWatch Events to trigger an AWS Lambda function that collects the log files using an SSH connection.
A
  1. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the batch_count parameter to 1.
  2. Update the cron job to run every 5 minutes instead of every 30 minutes to reduce the possibility of log files being lost.
  3. Use Amazon CloudWatch Events to trigger Amazon Systems Manager Session Manager to run a batch script that collects the log files.
  4. Use Amazon CloudWatch Events to trigger an AWS Lambda function that collects the log files using an SSH connection.
113
Q

An on-premises analytics database running on Oracle will be migrated to the cloud. The database runs on a single virtual machine (VM) and multiple client VMs running a Java-based web application that is used to perform SQL queries on the database. All virtual machines will be migrated to the cloud. The database uses 2 TB of storage and each client VM has a different configuration and saves stored procedures and query results in the local file system. There is a 10 Gbit AWS Direct Connect (DX) connection established and the application can be migrated over a scheduled 48-hour change window.

Which strategy will reduce the operational overhead on the database and have the LEAST impact on the operations staff after the migration?

  1. Use AWS SMS to replicate the database to AWS and create an Amazon EC2 instance. Migrate the Java-based web application to an AWS Elastic Beanstalk environment behind an Application Load Balancer (ALB).
  2. Use AWS DMS to migrate the database to Amazon Redshift. Migrate the Java-based web application to an AWS Elastic Beanstalk environment behind an Application Load Balancer (ALB).
  3. Use AWS SMS to replicate the database to AWS and create an Amazon EC2 instance. Replicate the client VMs into AWS using AWS SMS. Place the EC2 instances behind an Application Load Balancer (ALB).
  4. Use AWS DMS to migrate the database to Amazon RDS. Replicate the client VMs into AWS using AWS SMS. Create Route 53 Alias records for each client VM.
A
  1. Use AWS SMS to replicate the database to AWS and create an Amazon EC2 instance. Migrate the Java-based web application to an AWS Elastic Beanstalk environment behind an Application Load Balancer (ALB).
  2. Use AWS DMS to migrate the database to Amazon Redshift. Migrate the Java-based web application to an AWS Elastic Beanstalk environment behind an Application Load Balancer (ALB).
  3. Use AWS SMS to replicate the database to AWS and create an Amazon EC2 instance. Replicate the client VMs into AWS using AWS SMS. Place the EC2 instances behind an Application Load Balancer (ALB).
  4. Use AWS DMS to migrate the database to Amazon RDS. Replicate the client VMs into AWS using AWS SMS. Create Route 53 Alias records for each client VM.
114
Q

A manufacturing company collects data from IoT devices in JSON format. The data is collected, transformed, and stored in a data warehouse for analysis using an analytics tool that uses ODBC. The performance of the current solution suffers under high loads due to insufficient compute capacity and incoming data is often lost.

The application will be migrated to AWS. The solution must support the current analytics tool, resolve the compute constraints, and be cost-effective.

Which solution meets these requirements?

  1. Re-architect the application. Load the data into Amazon S3. Use AWS Lambda to transform the data. Create an Amazon DynamoDB global table across two Regions to store the data and use Amazon Elasticsearch to query the data.
  2. Re-architect the application. Load the data into Amazon S3. Use AWS Glue to transform the data. Store the table schema in an AWS Glue Data Catalog. Use Amazon Athena to query the data.
  3. Replatform the application. Use Amazon API Gateway for data ingestion. Use AWS Lambda to transform the JSON data. Create an Amazon Aurora PostgreSQL DB cluster with an Aurora Replica in another Availability Zone. Use Amazon QuickSight to generate reports and visualize data.
  4. Re-architect the application. Load the data into Amazon S3. Use Amazon Kinesis Data Analytics to transform the data. Create an external schema in an AWS Glue Data Catalog. Use Amazon Redshift Spectrum to query the data.
A
  1. Re-architect the application. Load the data into Amazon S3. Use AWS Lambda to transform the data. Create an Amazon DynamoDB global table across two Regions to store the data and use Amazon Elasticsearch to query the data.
  2. Re-architect the application. Load the data into Amazon S3. Use AWS Glue to transform the data. Store the table schema in an AWS Glue Data Catalog. Use Amazon Athena to query the data.
  3. Replatform the application. Use Amazon API Gateway for data ingestion. Use AWS Lambda to transform the JSON data. Create an Amazon Aurora PostgreSQL DB cluster with an Aurora Replica in another Availability Zone. Use Amazon QuickSight to generate reports and visualize data.
  4. Re-architect the application. Load the data into Amazon S3. Use Amazon Kinesis Data Analytics to transform the data. Create an external schema in an AWS Glue Data Catalog. Use Amazon Redshift Spectrum to query the data.
115
Q

A serverless application uses an AWS Lambda function behind an Amazon API Gateway REST API. During busy periods thousands of simultaneous invocations are required and requests fail multiple times before succeeding. The operations team has checked for AWS Lambda errors and did not find any. A Solutions Architect must investigate the root cause of the issue. What is the most likely cause of this problem?

  1. The throttle limit on the REST API is configured too low. During busy periods some requests are being throttled and are not reaching the Lambda function.
  2. The Lambda is configured with too little memory causing the function to fail at peak load.
  3. The Lambda function is set to use synchronous invocation and the REST API is calling the function using asynchronous invocation.
  4. The API is using the non-proxy integration with Lambda when it should be using proxy integration.
A
  1. The throttle limit on the REST API is configured too low. During busy periods some requests are being throttled and are not reaching the Lambda function.
  2. The Lambda is configured with too little memory causing the function to fail at peak load.
  3. The Lambda function is set to use synchronous invocation and the REST API is calling the function using asynchronous invocation.
  4. The API is using the non-proxy integration with Lambda when it should be using proxy integration.
116
Q

A company is planning to migrate a containerized application to Amazon ECS. The company wishes to reduce instance costs as much as possible whilst reducing the probability of service interruptions. How should a Solutions Architect configure the solution?

  1. Use Amazon ECS Reserved instances and configure termination protection.
  2. Use Amazon ECS with Application Auto Scaling and suspend dynamic scaling.
  3. Use Amazon ECS Spot instances and configure Spot Instance Draining.
  4. Use Amazon ECS Cluster Auto Scaling (CAS) and configure a warm-up time.
A
  1. Use Amazon ECS Reserved instances and configure termination protection.
  2. Use Amazon ECS with Application Auto Scaling and suspend dynamic scaling.
  3. Use Amazon ECS Spot instances and configure Spot Instance Draining.
  4. Use Amazon ECS Cluster Auto Scaling (CAS) and configure a warm-up time.
117
Q

A Solutions Architect wants to make sure that only IAM users with appropriate permissions can access a new Amazon API Gateway endpoint. How can the Solutions Architect design the API Gateway access control to meet this requirement?

  1. Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants lambda:InvokeFunction permission on the REST API resource and attach it to a group containing the IAM user accounts.
  2. Create a client certificate for API Gateway. Distribute the certificate to the AWS users that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint.
  3. Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants execute-api:Invoke permission on the REST API resource and attach it to a group containing the IAM user accounts.
  4. Create an AWS Lambda function as a custom authorizer and ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system.
A
  1. Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants lambda:InvokeFunction permission on the REST API resource and attach it to a group containing the IAM user accounts.
  2. Create a client certificate for API Gateway. Distribute the certificate to the AWS users that need to access the endpoint. Enable the API caller to pass the client certificate when accessing the endpoint.
  3. Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants execute-api:Invoke permission on the REST API resource and attach it to a group containing the IAM user accounts.
  4. Create an AWS Lambda function as a custom authorizer and ask the API client to pass the key and secret when making the call, and then use Lambda to validate the key/secret pair against the IAM system.
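For reference, a permissions policy granting execute-api:Invoke on a REST API resource could be created and attached to a group roughly as follows (the account ID, API ID, resource path, policy name, and group name are placeholders):

  import json
  import boto3

  iam = boto3.client("iam")

  # Placeholder ARN: account, API ID, stage, HTTP method, and resource path.
  api_arn = "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/GET/orders"

  policy_document = {
      "Version": "2012-10-17",
      "Statement": [
          {"Effect": "Allow", "Action": "execute-api:Invoke", "Resource": api_arn}
      ],
  }

  # Create the managed policy and attach it to the group containing the IAM users.
  policy = iam.create_policy(
      PolicyName="InvokeOrdersApi",
      PolicyDocument=json.dumps(policy_document),
  )
  iam.attach_group_policy(GroupName="api-consumers", PolicyArn=policy["Policy"]["Arn"])

The API Gateway method itself must also have its authorization type set to AWS_IAM so that callers sign requests with their IAM credentials.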
118
Q

A company needs to deploy an application into an AWS Region across multiple Availability Zones and has several requirements for the deployment. The application requires access to 100 GB of static data before the application starts and must be able to scale up and down quickly. Startup time must be minimized as much as possible. The Operations team must be able to install critical OS patches within 48 hours of release. The solution should also be cost-effective.

Which deployment strategy meets these requirements?

  1. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount a shared Amazon EBS volume with the static data to the EC2 instances at launch time.
  2. Use Amazon EC2 Auto Scaling with a standard AMI. Use a user data script to download the static data from an Amazon S3 bucket. Update the OS patches with AWS Systems Manager.
  3. Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager.
  4. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount an Amazon EFS file system with the static data to the EC2 instances at launch time.
A
  1. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount a shared Amazon EBS volume with the static data to the EC2 instances at launch time.
  2. Use Amazon EC2 Auto Scaling with a standard AMI. Use a user data script to download the static data from an Amazon S3 bucket. Update the OS patches with AWS Systems Manager.
  3. Use Amazon EC2 Auto Scaling with an AMI that includes the static data. Update the OS patches with AWS Systems Manager.
  4. Use Amazon EC2 Auto Scaling with an AMI that includes the latest OS patches. Mount an Amazon EFS file system with the static data to the EC2 instances at launch time.
119
Q

A company stores highly confidential information in an Amazon S3 bucket. The security team have evaluated the security of the configuration and have come up with some new requirements that must be met. The security team now requires the ability to identify the IP addresses that make requests to the bucket to be able to identify malicious actors. They additionally require that any changes to the bucket policy are automatically remediated and alerts of these changes are sent to their team members.

Which strategies should a Solutions Architect use to meet these requirements?

  1. Use Amazon Macie to identify the IP addresses in Amazon S3 requests. Use AWS Lambda with Macie to automatically remediate S3 bucket policy changes. Use Macie's automatic alerting capabilities for alerts.
  2. Use Amazon CloudWatch Logs with the Amazon Athena connector to identify the IP addresses in Amazon S3 requests. Use CloudWatch Events rules with AWS Lambda to automatically remediate S3 bucket policy changes. Configure alerting with Amazon SNS.
  3. Identify the IP addresses in Amazon S3 requests with Amazon S3 access logs and Amazon Athena. Use AWS Config with Auto Remediation to remediate any changes to S3 bucket policies. Configure alerting with AWS Config and Amazon SNS.
  4. Create an AWS CloudTrail trail and log management events. Use CloudWatch Events rules with AWS Lambda to automatically remediate S3 bucket policy changes. Configure alerting with Amazon SNS.
A
  1. Use Amazon Macie to identify the IP addresses in Amazon S3 requests. Use AWS Lambda with Macie to automatically remediate S3 bucket policy changes. Use Macie's automatic alerting capabilities for alerts.
  2. Use Amazon CloudWatch Logs with the Amazon Athena connector to identify the IP addresses in Amazon S3 requests. Use CloudWatch Events rules with AWS Lambda to automatically remediate S3 bucket policy changes. Configure alerting with Amazon SNS.
  3. Identify the IP addresses in Amazon S3 requests with Amazon S3 access logs and Amazon Athena. Use AWS Config with Auto Remediation to remediate any changes to S3 bucket policies. Configure alerting with AWS Config and Amazon SNS.
  4. Create an AWS CloudTrail trail and log management events. Use CloudWatch Events rules with AWS Lambda to automatically remediate S3 bucket policy changes. Configure alerting with Amazon SNS.
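For reference, once S3 server access logging is enabled and an Athena table has been defined over the log bucket, requester IP addresses can be summarized with a query along these lines (the database, table, and results location are placeholder names):

  import boto3

  athena = boto3.client("athena")

  # Hypothetical Athena database/table defined over the S3 server access logs.
  QUERY = """
  SELECT remoteip, COUNT(*) AS request_count
  FROM s3_access_logs_db.confidential_bucket_logs
  GROUP BY remoteip
  ORDER BY request_count DESC
  """

  athena.start_query_execution(
      QueryString=QUERY,
      QueryExecutionContext={"Database": "s3_access_logs_db"},
      ResultConfiguration={"OutputLocation": "s3://athena-query-results-example/"},
  )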
120
Q

A company uses an AWS account with resources deployed in multiple Regions globally. Operations teams deploy and manage resources within each Region. Some Region-specific service quotas have been reached causing an inability for the local operations teams to deploy resources. A centralized cloud team is responsible for monitoring and updating service quotas. The cloud team needs to create an automated and operationally efficient solution to proactively monitor service quotas. Monitoring should occur every 15 minutes and send alerts when a team exceeds 80% utilization.

Which solution will meet these requirements?

  1. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80% use AWS Budgets to send an alert to the cloud team.
  2. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
  3. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
  4. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the ListServiceQuotas API. If any service utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
A
  1. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80% use AWS Budgets to send an alert to the cloud team.
  2. Create an Amazon EventBridge rule that triggers an AWS Lambda function to use AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
  3. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
  4. Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the ListServiceQuotas API. If any service utilization is above 80% publish a message to an Amazon SNS topic to alert the cloud team.
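For reference, a scheduled function could read the Trusted Advisor "Service Limits" check through the AWS Support API and publish an alert above 80% utilization; a minimal sketch follows (the Support API requires a Business or Enterprise support plan, the SNS topic ARN is a placeholder, and the metadata layout of region, service, limit name, limit value, current usage is an assumption to verify):

  import boto3

  support = boto3.client("support", region_name="us-east-1")
  sns = boto3.client("sns")

  TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quota-alerts"  # placeholder

  def lambda_handler(event, context):
      # Locate the Trusted Advisor "Service Limits" check.
      checks = support.describe_trusted_advisor_checks(language="en")["checks"]
      check_id = next(c["id"] for c in checks if c["name"] == "Service Limits")

      result = support.describe_trusted_advisor_check_result(checkId=check_id)
      for resource in result["result"].get("flaggedResources", []):
          region, service, limit_name, limit, usage = resource["metadata"][:5]
          if limit and usage and float(usage) / float(limit) > 0.8:
              pct = float(usage) / float(limit)
              sns.publish(
                  TopicArn=TOPIC_ARN,
                  Message=f"{service} '{limit_name}' in {region} is at {pct:.0%} of its limit.",
              )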
121
Q

A company has a line of business (LOB) application that is used for storing sales data for an eCommerce platform. The data is unstructured and stored in an Oracle database running on a single Amazon EC2 instance. The application front end consists of six EC2 instances in three Availability Zones (AZs). Each week the application receives bursts of traffic and application performance suffers. A Solutions Architect must design a solution to address scalability and reliability. The solution should also eliminate licensing costs.

Which set of steps should the Solutions Architect take?

  1. Create an Auto Scaling group for the front end with a combination of Reserved instances and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
  2. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Migrate the Oracle database into a single Amazon RDS reserved DB instance.
  3. Use Spot Instances for the front end to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
  4. Create an Auto Scaling group for the front end with a combination of Reserved instances and Spot Instances to reduce costs. Migrate the Oracle database into an Amazon RDS multi-AZ deployment
A
  1. Create an Auto Scaling group for the front end with a combination of Reserved instances and Spot Instances to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
  2. Create an Auto Scaling group for the front end with a combination of On-Demand and Spot Instances to reduce costs. Migrate the Oracle database into a single Amazon RDS reserved DB instance.
  3. Use Spot Instances for the front end to reduce costs. Convert the tables in the Oracle database into Amazon DynamoDB tables.
  4. Create an Auto Scaling group for the front end with a combination of Reserved instances and Spot Instances to reduce costs. Migrate the Oracle database into an Amazon RDS multi-AZ deployment
122
Q

A fleet of EC2 instances generates a large quantity of data and stores the data on an Amazon EFS file system. The EC2 instances also back up the data by uploading it to an Amazon S3 bucket in another Region on a daily basis. Some S3 uploads have been failing and the storage costs have significantly increased.

The operations team has removed the failed uploads. How can a Solutions Architect configure the backup jobs to efficiently back up data to S3 while reducing storage costs?

  1. Use S3 Transfer Acceleration for the backup jobs. Create a lifecycle policy for the incomplete multipart uploads on the S3 bucket to prevent new failed uploads from accumulating.
  2. Use S3 Transfer Acceleration for the backup jobs. Use the Multi-Object Delete operation to remove the old uploads on a daily basis.
  3. Use multipart upload for the backup jobs. Create a lifecycle policy for the incomplete multipart uploads on the S3 bucket to prevent new failed uploads from accumulating.
  4. Use multipart upload for the backup jobs. Create a lifecycle policy that archives data to Amazon S3 Glacier on a daily basis.
A
  1. Use S3 Transfer Acceleration for the backup jobs. Create a lifecycle policy for the incomplete multipart uploads on the S3 bucket to prevent new failed uploads from accumulating.
  2. Use S3 Transfer Acceleration for the backup jobs. Use the Multi-Object Delete operation to remove the old uploads on a daily basis.
  3. Use multipart upload for the backup jobs. Create a lifecycle policy for the incomplete multipart uploads on the S3 bucket to prevent new failed uploads from accumulating.
  4. Use multipart upload for the backup jobs. Create a lifecycle policy that archives data to Amazon S3 Glacier on a daily basis.
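For reference, a lifecycle rule that cleans up incomplete multipart uploads can be applied roughly as follows (the bucket name and retention period are placeholders):

  import boto3

  s3 = boto3.client("s3")

  # Abort any multipart upload that has not completed within 7 days of starting.
  s3.put_bucket_lifecycle_configuration(
      Bucket="example-backup-bucket",
      LifecycleConfiguration={
          "Rules": [
              {
                  "ID": "abort-incomplete-multipart-uploads",
                  "Status": "Enabled",
                  "Filter": {"Prefix": ""},  # apply to the whole bucket
                  "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
              }
          ]
      },
  )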
123
Q

An application runs on Amazon EC2 instances in a private subnet within an Amazon VPC. The application stores files in a specific Amazon S3 bucket. The files should not traverse the internet and only the application instances should be granted access to save files to the S3 bucket. A gateway endpoint has been created for Amazon S3 and connected to the Amazon VPC.

What additional steps should a Solutions Architect take to meet the stated requirements?

  1. Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Assign an IAM role to the EC2 instances and attach a policy to the S3 bucket that grants access only to this role.
  2. Attach a bucket policy to the S3 bucket that grants access to the EC2 instances only using the aws:SourceIp condition. Update the VPC route table so only the application EC2 instances can access the VPC endpoint.
  3. Attach an endpoint policy to the gateway endpoint that restricts access to S3 in the current Region. Attach a bucket policy to the S3 bucket that grants access to the ec2.amazonaws.com service only.
  4. Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Attach a bucket policy to the S3 bucket that grants access only to the VPC endpoint.
A
  1. Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Assign an IAM role to the EC2 instances and attach a policy to the S3 bucket that grants access only to this role.
  2. Attach a bucket policy to the S3 bucket that grants access to the EC2 instances only using the aws:SourceIp condition. Update the VPC route table so only the application EC2 instances can access the VPC endpoint.
  3. Attach an endpoint policy to the gateway endpoint that restricts access to S3 in the current Region. Attach a bucket policy to the S3 bucket that grants access to the ec2.amazonaws.com service only.
  4. Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Attach a bucket policy to the S3 bucket that grants access only to the VPC endpoint.
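For reference, an endpoint policy scoped to a single bucket and a bucket policy scoped to the VPC endpoint might look roughly like this (the endpoint ID and bucket name are placeholders):

  import json
  import boto3

  ec2 = boto3.client("ec2")
  s3 = boto3.client("s3")

  BUCKET = "example-app-files"           # placeholder bucket name
  VPCE_ID = "vpce-0123456789abcdef0"     # placeholder gateway endpoint ID

  # Endpoint policy: only the specific bucket is reachable through the endpoint.
  endpoint_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
      }],
  }
  ec2.modify_vpc_endpoint(VpcEndpointId=VPCE_ID, PolicyDocument=json.dumps(endpoint_policy))

  # Bucket policy: deny any request that does not arrive through the endpoint.
  bucket_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
          "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
      }],
  }
  s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))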
124
Q

A development team created a service that uses an AWS Lambda function to store information in an Amazon RDS Database. The database credentials are stored in clear text in the Lambda function code. A Solutions Architect is advising the development team on how to better secure the service. Which of the following should the Solutions Architect recommend? (Select TWO.)

  1. Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation.
  2. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
  3. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.
  4. Store the Amazon RDS database credentials in AWS KMS using imported key material.
  5. Create encrypted database credentials in AWS Secrets Manager for the Amazon RDS database.
A
  1. Configure Lambda to use the stored database credentials in AWS KMS and enable automatic key rotation.
  2. Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
  3. Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.
  4. Store the Amazon RDS database credentials in AWS KMS using imported key material.
  5. Create encrypted database credentials in AWS Secrets Manager for the Amazon RDS database.
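For reference, a Lambda function can fetch the credentials from AWS Secrets Manager at runtime, and automatic rotation can be enabled on the secret; a minimal sketch, with a placeholder secret name and rotation function ARN:

  import json
  import boto3

  secrets = boto3.client("secretsmanager")
  SECRET_ID = "prod/app/rds-credentials"  # placeholder secret name

  def lambda_handler(event, context):
      # Retrieve the database credentials at runtime instead of hard-coding them.
      secret = json.loads(secrets.get_secret_value(SecretId=SECRET_ID)["SecretString"])
      username, password = secret["username"], secret["password"]
      # ... connect to the RDS database with these credentials ...
      return {"statusCode": 200}

  def enable_rotation():
      # Run once from an admin context to turn on automatic rotation.
      secrets.rotate_secret(
          SecretId=SECRET_ID,
          RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-rotation",
          RotationRules={"AutomaticallyAfterDays": 30},
      )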
125
Q

A company runs a high performance computing (HPC) application in an on-premises data center. The solution consists of a 10-node cluster running Linux with high-speed inter-node connectivity. The company is planning to migrate the application to the AWS Cloud. A Solutions Architect needs to design the solution architecture on AWS to ensure optimum performance for the HPC cluster.

Which combination of steps will meet these requirements? (Select TWO.)

  1. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
  2. Deploy Amazon EC2 instances in a placement group.
  3. Use Amazon EC2 instances that support burstable performance.
  4. Deploy Amazon EC2 instances in an Auto Scaling group.
  5. Deploy instances across multiple Availability Zones.
A
  1. Use Amazon EC2 instances that support Elastic Fabric Adapter (EFA).
  2. Deploy Amazon EC2 instances in a placement group.
  3. Use Amazon EC2 instances that support burstable performance.
  4. Deploy Amazon EC2 instances in an Auto Scaling group.
  5. Deploy instances across multiple Availability Zones.
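For reference, a cluster placement group with EFA-enabled instances could be provisioned roughly as follows (the AMI, subnet, and security group IDs are placeholders, and the instance type must be one that supports EFA):

  import boto3

  ec2 = boto3.client("ec2")

  # A cluster placement group keeps the nodes physically close for low latency.
  ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

  # Launch the HPC nodes with an Elastic Fabric Adapter network interface.
  ec2.run_instances(
      ImageId="ami-0123456789abcdef0",        # placeholder AMI
      InstanceType="c5n.18xlarge",            # an EFA-capable instance type
      MinCount=10,
      MaxCount=10,
      Placement={"GroupName": "hpc-cluster"},
      NetworkInterfaces=[{
          "DeviceIndex": 0,
          "InterfaceType": "efa",
          "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
          "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
      }],
  )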
126
Q

A company is planning to launch a new web application on AWS using a fully serverless design. The website will be used by global customers and should be highly responsive and offer minimal latency. The design should be highly available and include baseline DDoS protections against spikes in traffic. The users will log in to the web application using social IdPs such as Google and Amazon.

How can the design requirements be met?

  1. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an Amazon CloudFront distribution with the S3 bucket as the origin. Use Amazon Cognito to provide user management authentication functions.
  2. Build an API using Docker containers running on AWS Fargate in multiple Regions behind Application Load Balancers. Use an Amazon Route 53 latency-based routing policy. Use Amazon Cognito to provide user management authentication functions.
  3. Build an API using Docker containers running on Amazon ECS behind an Amazon CloudFront distribution. Use AWS Secrets Manager to provide user management authentication functions.
  4. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions.
A
  1. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an Amazon CloudFront distribution with the S3 bucket as the origin. Use Amazon Cognito to provide user management authentication functions.
  2. Build an API using Docker containers running on AWS Fargate in multiple Regions behind Application Load Balancers. Use an Amazon Route 53 latency-based routing policy. Use Amazon Cognito to provide user management authentication functions.
  3. Build an API using Docker containers running on Amazon ECS behind an Amazon CloudFront distribution. Use AWS Secrets Manager to provide user management authentication functions.
  4. Build an API with API Gateway and AWS Lambda, use Amazon S3 for hosting static web resources and create an AWS WAF Web ACL and attach it for DDoS attack mitigation. Use Amazon Cognito to provide user management authentication functions.
127
Q

A Solutions Architect is designing a highly available infrastructure for a popular mobile application that offers games and videos for mobile phone users. The application runs on Amazon EC2 instances behind an Application Load Balancer. The database layer consists of an Amazon RDS MySQL Multi-AZ instance. The entire application stack is deployed across us-east-2 and us-west-1. Amazon Route 53 is configured to route traffic to the two deployments using a latency-based routing policy.

A testing team blocked access to the Amazon RDS DB instance in us-east-2 to verify that users who are typically directed to that deployment would be directed to us-west-1. This did not occur and users close to us-east-2 were directed there and the application failed.

Which changes to the infrastructure should a Solutions Architect make to resolve this issue? (Select TWO.)

  1. Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region.
  2. Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Amazon Route 53.
  3. Set the value of Evaluate Target Health to Yes on the latency alias resources for both us-east-2 and us-west-1.
  4. Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region.
  5. Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1.
A
  1. Change to a failover routing policy in Amazon Route 53 and configure active-active failover. Write a custom health check that verifies successful access to the Application Load Balancers in each Region.
  2. Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Amazon Route 53.
  3. Set the value of Evaluate Target Health to Yes on the latency alias resources for both us-east-2 and us-west-1.
  4. Write a custom health check that queries the AWS Service Dashboard API to verify the Amazon RDS service is healthy in each Region.
  5. Set the value of Evaluate Target Health to Yes on the failover alias resources for both us-east-2 and us-west-1.
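For reference, setting Evaluate Target Health on a latency-based alias record can be expressed roughly as follows (the hosted zone IDs, record name, and ALB DNS name are placeholders); a matching record with SetIdentifier us-west-1 would be created for the other Region:

  import boto3

  route53 = boto3.client("route53")

  route53.change_resource_record_sets(
      HostedZoneId="Z0123456789EXAMPLE",      # placeholder hosted zone
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "app.example.com",
                  "Type": "A",
                  "SetIdentifier": "us-east-2",
                  "Region": "us-east-2",
                  "AliasTarget": {
                      "HostedZoneId": "Z0987654321EXAMPLE",  # placeholder ALB zone ID
                      "DNSName": "app-alb-123456.us-east-2.elb.amazonaws.com",
                      "EvaluateTargetHealth": True,
                  },
              },
          }]
      },
  )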
128
Q

A company is creating a secure data analytics solution. Data will be uploaded into an Amazon S3 bucket. The data will then be analyzed by applications running on an Amazon EMR cluster that is launched into a VPC in a private subnet. The environment must be fully isolated from the internet at all times. Data must be encrypted at rest using keys that are controlled and provided by the company.

Which combination of actions should a Solutions Architect take to meet these requirements? (Select TWO.)

  1. Configure the S3 bucket policy to permit access to the Amazon EMR cluster only.
  2. Configure the S3 bucket policy to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.
  3. Configure the EMR cluster to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3.
  4. Configure the EMR cluster to use an AWS KMS managed CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
  5. Configure the EMR cluster to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS.
A
  1. Configure the S3 bucket policy to permit access to the Amazon EMR cluster only.
  2. Configure the S3 bucket policy to permit access using an aws:sourceVpce condition to match the S3 endpoint ID.
  3. Configure the EMR cluster to use an AWS CloudHSM appliance for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3.
  4. Configure the EMR cluster to use an AWS KMS managed CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and an interface VPC endpoint for AWS KMS.
  5. Configure the EMR cluster to use an AWS KMS CMK for at-rest encryption. Configure a gateway VPC endpoint for Amazon S3 and a NAT gateway to access AWS KMS.
129
Q

A company has recently established 15 Amazon VPCs within the us-east-1 AWS Region. The company has also established an AWS Direct Connect to the Region from their on-premises data center. The company requires full transitive peering between the VPCs and the on-premises data center.

Which combination of actions is required to implement these requirements with the LEAST complexity? (Select TWO.)

  1. Create an AWS Direct Connect (DX) gateway and attach the DX gateway to a transit gateway. Enable route propagation with BGP.
  2. Create IPSec VPN connections between the VPCs in a fully meshed topology. Configure the route tables in the VPCs to route traffic across the IPSec VPN connections.
  3. Create VPC peering connections between the VPCs in a fully meshed topology. Configure the route tables in the VPCs to route traffic across the peering connections.
  4. Create an AWS transit gateway and add attachments for all of the VPCs. Configure the route tables in the VPCs to send traffic to the transit gateway.
  5. Create an AWS Direct Connect (DX) gateway and associate the DX gateway with a VGW in each VPC. Enable route propagation with BGP.
A
  1. Create an AWS Direct Connect (DX) gateway and attach the DX gateway to a transit gateway. Enable route propagation with BGP.
  2. Create IPSec VPN connections between the VPCs in a fully meshed topology. Configure the route tables in the VPCs to route traffic across the IPSec VPN connections.
  3. Create VPC peering connections between the VPCs in a fully meshed topology. Configure the route tables in the VPCs to route traffic across the peering connections.
  4. Create an AWS transit gateway and add attachments for all of the VPCs. Configure the route tables in the VPCs to send traffic to the transit gateway.
  5. Create an AWS Direct Connect (DX) gateway and associate the DX gateway with a VGW in each VPC. Enable route propagation with BGP.
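For reference, a transit gateway with VPC attachments and routes pointing at it can be set up roughly as follows (the VPC, subnet, and route table IDs and the CIDR range are placeholders):

  import boto3

  ec2 = boto3.client("ec2")

  # Create the transit gateway that will interconnect the VPCs and the DX gateway.
  tgw = ec2.create_transit_gateway(Description="hub for us-east-1 VPCs")
  tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

  # Attach each VPC (placeholder IDs) to the transit gateway.
  for vpc_id, subnet_id in [("vpc-0aaa0000000000000", "subnet-0aaa0000000000000"),
                            ("vpc-0bbb0000000000000", "subnet-0bbb0000000000000")]:
      ec2.create_transit_gateway_vpc_attachment(
          TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=[subnet_id]
      )

  # In each VPC route table, send on-premises and inter-VPC traffic to the transit gateway.
  ec2.create_route(
      RouteTableId="rtb-0123456789abcdef0",   # placeholder route table
      DestinationCidrBlock="10.0.0.0/8",      # placeholder corporate range
      TransitGatewayId=tgw_id,
  )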
130
Q

A company is planning a move to the AWS Cloud and is creating an account strategy. There are various teams in the company and each team prefers to keep their resources isolated from other teams. The Finance team would like each team’s resource usage separated for billing purposes. The Security team will provide permissions to each team using the principle of least privilege.

Which account strategy will meet all of these requirements?

  1. Use AWS Organizations to create a management account and create each team’s account from the management account. Create a security account for cross-account access. Apply service control policies on each account and grant the security team cross-account access to all accounts. The Security team will create IAM policies to provide least privilege access.
  2. Use AWS Organizations to create a management account. Create groups in Active Directory and assign them to roles in AWS to grant federated access. Apply tags to the resources for each team and separate bills based on tags. Control access to resources through IAM granting the minimum required privileges.
  3. Create a separate AWS account for each team. Assign the security account as the management account and enable consolidated billing for all other accounts. Create a cross-account role for security to manage accounts.
  4. Create a new AWS account and use AWS CloudFormation to provide teams with the resources they require. Use cost allocation tags and a third-party billing solution to provide the Finance team with a breakdown of costs based on tags. Use IAM policies to control access to resources and grant the Security team full access.
A
  1. Use AWS Organizations to create a management account and create each team’s account from the management account. Create a security account for cross-account access. Apply service control policies on each account and grant the security team cross-account access to all accounts. The Security team will create IAM policies to provide least privilege access.
  2. Use AWS Organizations to create a management account. Create groups in Active Directory and assign them to roles in AWS to grant federated access. Apply tags to the resources for each team and separate bills based on tags. Control access to resources through IAM granting the minimum required privileges.
  3. Create a separate AWS account for each team. Assign the security account as the management account and enable consolidated billing for all other accounts. Create a cross-account role for security to manage accounts.
  4. Create a new AWS account and use AWS CloudFormation to provide teams with the resources they require. Use cost allocation tags and a third-party billing solution to provide the Finance team with a breakdown of costs based on tags. Use IAM policies to control access to resources and grant the Security team full access.
131
Q

A legacy application consists of a series of batch scripts that coordinate multiple application components. Each application component processes data within a few seconds before passing it on to the next component. The application has become complex and difficult to update. A Solutions Architect plans to migrate the application to the AWS Cloud. The application should be refactored into serverless microservices and be fully coordinated using cloud-native services.

Which approach meets these requirements most cost-effectively?

  1. Refactor the application onto AWS Lambda functions. Use AWS Step Functions to orchestrate the application.
  2. Refactor the application onto Docker containers running on Amazon ECS. Use Amazon SQS to decouple the application components.
  3. Refactor the application onto AWS Lambda functions. Use Amazon EventBridge to automate the application.
  4. Refactor the application onto Docker containers running on AWS Fargate. Use AWS Step Functions to orchestrate the application.
A
  1. Refactor the application onto AWS Lambda functions. Use AWS Step Functions to orchestrate the application.
  2. Refactor the application onto Docker containers running on Amazon ECS. Use Amazon SQS to decouple the application components.
  3. Refactor the application onto AWS Lambda functions. Use Amazon EventBridge to automate the application.
  4. Refactor the application onto Docker containers running on AWS Fargate. Use AWS Step Functions to orchestrate the application.
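For reference, a Step Functions state machine that chains Lambda functions in sequence can be defined in Amazon States Language roughly as follows (the function ARNs, role ARN, and state names are placeholders):

  import json
  import boto3

  sfn = boto3.client("stepfunctions")

  # Each state invokes one of the refactored microservice Lambda functions in turn.
  definition = {
      "StartAt": "ExtractData",
      "States": {
          "ExtractData": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:111122223333:function:extract",
              "Next": "TransformData",
          },
          "TransformData": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:111122223333:function:transform",
              "Next": "LoadData",
          },
          "LoadData": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:111122223333:function:load",
              "End": True,
          },
      },
  }

  sfn.create_state_machine(
      name="batch-workflow",
      definition=json.dumps(definition),
      roleArn="arn:aws:iam::111122223333:role/stepfunctions-execution-role",
  )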
132
Q

A company needs to close a data center and must migrate data to AWS urgently. The data center has a 1 Gbps internet connection and a 500 Mbps AWS Direct Connect link. The company must transfer 25 TB of data from the data center to an Amazon S3 bucket.

What is the FASTEST method of transferring the data?

  1. Copy the data to an 80 TB AWS Snowball device.
  2. Upload the data to the S3 bucket using S3 Transfer Acceleration.
  3. Use AWS DataSync to migrate the data to S3.
  4. Use the AWS Direct Connect link to upload the data to S3.
A
  1. Copy the data to an 80 TB AWS Snowball device.
  2. Upload the data to the S3 bucket using S3 Transfer Acceleration.
  3. Use AWS DataSync to migrate the data to S3.
  4. Use the AWS Direct Connect link to upload the data to S3.
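For reference, a rough back-of-the-envelope comparison of the network options can be computed as follows (assuming the links could be fully saturated, which is optimistic in practice):

  # Rough transfer-time estimates for moving 25 TB over each network link.
  DATA_TB = 25
  DATA_BITS = DATA_TB * 8 * 10**12   # 25 TB expressed in bits (decimal units)

  for name, bits_per_second in [("1 Gbps internet", 10**9),
                                ("500 Mbps Direct Connect", 500 * 10**6)]:
      days = DATA_BITS / bits_per_second / 86400
      print(f"{name}: ~{days:.1f} days at full line rate")

  # Prints roughly 2.3 days for the 1 Gbps link and 4.6 days for the 500 Mbps link,
  # before any protocol overhead or contention, which is the figure to weigh against
  # a Snowball device's shipping turnaround time.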
133
Q

An application is being tested for deployment in a Development account. The application consists of an Amazon API Gateway, Amazon EC2 instances behind an Elastic Load Balancer and an Amazon DynamoDB table. The Developers wish to grant a testing team access to deploy the application several times for performing a variety of acceptance tests but don’t want to grant broad permissions to each user. The Developers currently deploy the application using an AWS CloudFormation template and a role that has permission to the APIs for the included services.

How can a Solutions Architect meet the requirements for granting restricted access to the testing team so they can run their tests?

  1. Upload the AWS CloudFormation template to Amazon S3. Give users in the testing team permission to use CloudFormation and S3 APIs with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
  2. Create an AWS Service Catalog product from the environment template and add a launch constraint to the product with the existing role. Give users in the testing team permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
  3. Upload the AWS CloudFormation template to Amazon S3. Give users in the testing team permission to assume the Developers role and add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
  4. Create an AWS Service Catalog product from the environment template and add a stack set constraint to the product with the existing role. Give users in the testing team permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
A
  1. Upload the AWS CloudFormation template to Amazon S3. Give users in the testing team permission to use CloudFormation and S3 APIs with conditions that restrict the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
  2. Create an AWS Service Catalog product from the environment template and add a launch constraint to the product with the existing role. Give users in the testing team permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
  3. Upload the AWS CloudFormation template to Amazon S3. Give users in the testing team permission to assume the Developers role and add a policy that restricts the permissions to the template and the resources it creates. Train users to launch the template from the CloudFormation console.
  4. Create an AWS Service Catalog product from the environment template and add a stack set constraint to the product with the existing role. Give users in the testing team permission to use AWS Service Catalog APIs only. Train users to launch the template from the AWS Service Catalog console.
134
Q

A Solutions Architect must design a solution for providing private connectivity from a company’s WAN network to multiple AWS Regions. The company has offices around the world and has its main data center in New York. The company has mandated that traffic must not traverse the public internet at any time. The solution must also be highly available.

How can the Solutions Architect meet these requirements?

  1. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use a Direct Connect gateway to access data in other AWS Regions.
  2. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
  3. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
  4. Create an AWS Direct Connect connection from the New York data center to all AWS Regions the company uses. Configure the company WAN to send traffic via the New York data center and on to the respective DX connection to access AWS.
A
  1. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use a Direct Connect gateway to access data in other AWS Regions.
  2. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
  3. Create two AWS Direct Connect connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
  4. Create an AWS Direct Connect connection from the New York data center to all AWS Regions the company uses. Configure the company WAN to send traffic via the New York data center and on to the respective DX connection to access AWS.
135
Q

An eCommerce company runs an application that records product registration information. The application uses an Amazon S3 bucket for storing files and an Amazon DynamoDB table to store customer record data. The application software runs in us-west-1 and eu-central-1. The S3 bucket and DynamoDB table are in us-west-1. A Solutions Architect has been asked to implement protection from data corruption and the loss of connectivity to either Region.

Which solution meets these requirements?

  1. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
  2. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-region replication from us-west-1 to eu-central-1.
  3. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Enable versioning on the S3 bucket.
  4. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-west-1 to eu-central-1. Set up MFA delete on the S3 bucket in us-west-1.
A
  1. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable versioning on the S3 bucket. Implement strict ACLs on the S3 bucket.
  2. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Set up S3 cross-region replication from us-west-1 to eu-central-1.
  3. Create a DynamoDB global table to replicate data between us-west-1 and eu-central-1. Enable continuous backup on the DynamoDB table in us-west-1. Enable versioning on the S3 bucket.
  4. Create an AWS Lambda function triggered by Amazon CloudWatch Events to make regular backups of the DynamoDB table. Set up S3 cross-region replication from us-west-1 to eu-central-1. Set up MFA delete on the S3 bucket in us-west-1.
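For reference, continuous backups on the DynamoDB table and versioning on the S3 bucket can be enabled roughly as follows (the table and bucket names are placeholders; cross-Region replication additionally requires a replication role and a destination bucket):

  import boto3

  dynamodb = boto3.client("dynamodb", region_name="us-west-1")
  s3 = boto3.client("s3")

  # Enable point-in-time recovery (continuous backups) on the customer table.
  dynamodb.update_continuous_backups(
      TableName="product-registrations",    # placeholder table name
      PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
  )

  # Versioning protects against corruption and is a prerequisite for replication.
  s3.put_bucket_versioning(
      Bucket="product-registration-files",  # placeholder bucket name
      VersioningConfiguration={"Status": "Enabled"},
  )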
136
Q

The security department of a large company with several AWS accounts wishes to centralize the management of identities and AWS permissions. The design should also synchronize authentication credentials with the company’s existing on-premises identity management provider (IdP).

Which solution will meet the security department’s requirements?

  1. Deploy the required IAM users, groups, roles, and policies in every AWS account. Create an AWS Organization and federate the on-premises identity management provider and the AWS accounts.
  2. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Create a centralized AWS Lambda function that replicates the identities in the on-premises IdP groups to the AWS accounts.
  3. Create an AWS Organization with a management account that defines the SCPs for member accounts. Create a SAML-based identity management provider in each account and map users in the on-premises IdP groups to IAM roles.
  4. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Map users in the on-premises IdP groups to IAM roles. Use cross-account access to the other AWS accounts.
A
  1. Deploy the required IAM users, groups, roles, and policies in every AWS account. Create an AWS Organization and federate the on-premises identity management provider and the AWS accounts.
  2. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Create a centralized AWS Lambda function that replicates the identities in the on-premises IdP groups to the AWS accounts.
  3. Create an AWS Organization with a management account that defines the SCPs for member accounts. Create a SAML-based identity management provider in each account and map users in the on-premises IdP groups to IAM roles.
  4. Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Map users in the on-premises IdP groups to IAM roles. Use cross-account access to the other AWS accounts.
137
Q

A company is planning to migrate 30 small applications to AWS. The applications run on a mixture of Node.js and Python across a cluster of virtual servers on-premises. The company must minimize costs and standardize on a single deployment methodology for all applications. The applications have various usage patterns but generally have a low number of concurrent users. The applications use an average usage of 1 GB of memory with up to 3 GB during peak processing periods which can last several hours.

What is the MOST cost-effective solution for these requirements?

  1. Migrate the applications to separate AWS Elastic Beanstalk environments. Enable Auto Scaling to ensure there are sufficient resources during peak processing periods. Monitor each AWS Elastic Beanstalk deployment using CloudWatch alarms.
  2. Migrate the applications to Amazon EC2 instances in Auto Scaling groups. Create separate target groups for each application behind an Application Load Balancer and use host-based routing. Configure Auto Scaling to scale based on memory utilization and set the threshold to 75%.
  3. Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch.
  4. Migrate the applications to run on AWS Lambda with a separate function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of important processes.
A
  1. Migrate the applications to separate AWS Elastic Beanstalk environments. Enable Auto Scaling to ensure there are sufficient resources during peak processing periods. Monitor each AWS Elastic Beanstalk deployment using CloudWatch alarms.
  2. Migrate the applications to Amazon EC2 instances in Auto Scaling groups. Create separate target groups for each application behind an Application Load Balancer and use host-based routing. Configure Auto Scaling to scale based on memory utilization and set the threshold to 75%.
  3. Migrate the applications to Docker containers on Amazon ECS. Create a separate ECS task and service for each application. Enable service Auto Scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using Amazon CloudWatch.
  4. Migrate the applications to run on AWS Lambda with a separate function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of important processes.
138
Q

A company uses AWS CodeCommit for source control and AWS CodePipeline for continuous integration. The pipeline has a build stage which uses an Amazon S3 bucket for artifacts. The company requires a new development pipeline for testing new features. The new pipeline should be isolated from the production pipeline and incorporate continuous testing for unit tests.

How can a Solutions Architect meet these requirements?

  1. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts with an S3 bucket in a separate testing account.
  2. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS CodeBuild for running unit tests and staging the artifacts in an S3 bucket in a separate testing account.
  3. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.
  4. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.
A
  1. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts with an S3 bucket in a separate testing account.
  2. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS CodeBuild for running unit tests and staging the artifacts in an S3 bucket in a separate testing account.
  3. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.
  4. Create a separate pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.
139
Q

A company has connected their on-premises data center to AWS using a single AWS Direct Connect (DX) connection using a private virtual interface. The company is hosting the front end for a business-critical application in an Amazon VPC. The back end is hosted on-premises and the company requires consistent, reliable, and redundant connectivity between the front end and back end of the application.

Which design would provide the MOST resilient connectivity between AWS and the on-premises data center?

  1. Create an AWS Managed VPN connection that uses the public internet and attach it to the same virtual private gateway as the DX connection.
  2. Use multiple IPSec VPN connections to separate virtual private gateways and configure BGP to prioritize the DX connection.
  3. Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
  4. Add an additional physical connection for the existing DX connection using the same network carrier and join the connections to a link aggregation group (LAG) on the same private virtual interface.
A
  1. Create an AWS Managed VPN connection that uses the public internet and attach it to the same virtual private gateway as the DX connection.
  2. Use multiple IPSec VPN connections to separate virtual private gateways and configure BGP to prioritize the DX connection.
  3. Install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
  4. Add an additional physical connection for the existing DX connection using the same network carrier and join the connections to a link aggregation group (LAG) on the same private virtual interface.
140
Q

A company is reviewing its CI/CD practices for updating a critical web application that runs on Amazon ECS. The application manager requires that deployments happen as quickly as possible with a minimum of downtime. In the case of errors there must be an ability to quickly roll back. The company currently uses AWS CodeCommit to host the application source code and has configured an AWS CodeBuild project to build the application. The company also plans to use AWS CodePipeline to trigger builds from CodeCommit commits using the existing CodeBuild project.

What changes should be made to the CI/CD configuration to meet these requirements?

  1. Create a pipeline in CodePipeline with a deploy stage that uses AWS OpsWorks and in-place deployments. Monitor the application and if there are any issues push another code update.
  2. Create a pipeline in CodePipeline with a deploy stage that uses a blue/green deployment strategy. Monitor the application and if there are any issues trigger a manual rollback using CodeDeploy.
  3. Create a pipeline in CodePipeline with a deploy stage that uses AWS CloudFormation to create test and production stacks. Monitor the application and if there are any issues push another code update.
  4. Create a pipeline in CodePipeline with a deploy stage that uses an in-place deployment strategy. Monitor the application and if there are any issues push another code update.
A
  1. Create a pipeline in CodePipeline with a deploy stage that uses AWS OpsWorks and in-place deployments. Monitor the application and if there are any issues push another code update.
  2. Create a pipeline in CodePipeline with a deploy stage that uses a blue/green deployment strategy. Monitor the application and if there are any issues trigger a manual rollback using CodeDeploy.
  3. Create a pipeline in CodePipeline with a deploy stage that uses AWS CloudFormation to create test and production stacks. Monitor the application and if there are any issues push another code update.
  4. Create a pipeline in CodePipeline with a deploy stage that uses an in-place deployment strategy. Monitor the application and if there are any issues push another code update.
141
Q

A company has launched a web application on Amazon EC2 instances. The instances have been launched in a private subnet. An Application Load Balancer (ALB) is configured in front of the instances. The instances are assigned to a security group named WebAppSG and the ALB is assigned to a security group named ALB-SG. The security team requires that the security group rules are locked down according to best practice.

What rules should be configured in the security groups? (Select TWO.)

  1. An outbound rule in WebAppSG allows ports 1024-65535 to destination ALB-SG.
  2. An inbound rule in ALB-SG allowing port 80 from WebAppSG.
  3. An outbound rule in ALB-SG allowing ports 1024-65535 to destination 0.0.0.0/0.
  4. An inbound rule in ALB-SG allowing port 80 from source 0.0.0.0/0.
  5. An inbound rule in WebAppSG allowing port 80 from source ALB-SG.
A
  1. An outbound rule in WebAppSG allows ports 1024-65535 to destination ALB-SG.
  2. An inbound rule in ALB-SG allowing port 80 from WebAppSG.
  3. An outbound rule in ALB-SG allowing ports 1024-65535 to destination 0.0.0.0/0.
  4. An inbound rule in ALB-SG allowing port 80 from source 0.0.0.0/0.
  5. An inbound rule in WebAppSG allowing port 80 from source ALB-SG.
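For reference, rules that reference security groups rather than CIDR ranges can be created roughly as follows (the security group IDs are placeholders):

  import boto3

  ec2 = boto3.client("ec2")

  ALB_SG = "sg-0123456789abcdef0"       # placeholder: ALB security group
  WEBAPP_SG = "sg-0fedcba9876543210"    # placeholder: web application security group

  # ALB-SG: accept HTTP from the internet.
  ec2.authorize_security_group_ingress(
      GroupId=ALB_SG,
      IpPermissions=[{
          "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
          "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
      }],
  )

  # WebAppSG: accept HTTP only from the load balancer's security group.
  ec2.authorize_security_group_ingress(
      GroupId=WEBAPP_SG,
      IpPermissions=[{
          "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
          "UserIdGroupPairs": [{"GroupId": ALB_SG}],
      }],
  )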
142
Q

A company runs an application on Amazon EC2 instances in an Amazon VPC and must access an external security analytics service that runs on an HTTPS REST API. The provider of the external API service can only grant access to a single source public IP address per customer.

Which configuration can be used to enable access to the API service using a single IP address without making modifications to the company’s application?

  1. Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service.
  2. Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
  3. Launch the Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the external API service.
  4. Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
A
  1. Launch the Amazon EC2 instances in a public subnet with an internet gateway. Associate an Elastic IP address to the internet gateway that can be whitelisted on the external API service.
  2. Launch the Amazon EC2 instances in a private subnet. Configure HTTP_PROXY application parameters to send outbound connections to an EC2 proxy server in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
  3. Launch the Amazon EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address to the NAT gateway that can be whitelisted on the external API service.
  4. Launch the Amazon EC2 instances in a public subnet. Set the HTTPS_PROXY and NO_PROXY application parameters to send non-VPC outbound HTTPS connections to an EC2 proxy server deployed in a public subnet. Associate an Elastic IP address to the EC2 proxy host that can be whitelisted on the external API service.
143
Q

An eCommerce company runs a workload on AWS that includes a web and application tier running on Amazon EC2 and a database tier running on Amazon RDS MySQL. The business requires a cost-efficient disaster recovery solution for the application with an RTO of 5 minutes and an RPO of 1 hour. The solution should ensure the primary and DR sites have a minimum distance of 150 miles between them.

Which of the following options could a Solutions Architect recommend to meet the company’s disaster recovery requirements?

  1. Deploy a multi-Region solution ensuring the minimum distance requirements are met. The DR environment should be configured using the pilot light strategy with database replication to a standby database with a minimum of capacity. Use AWS CloudFormation to launch web servers, application servers and load balancers in case of a disaster. Use Amazon Route 53 to switch traffic to the DR Region and vertically scale the Amazon RDS DB instance.
  2. Deploy a scaled-down version of the production environment in a separate AWS Region ensuring the minimum distance requirements are met. The DR environment should include one instance for the web tier and one instance for the application tier. Create another database instance and configure database mirroring for MySQL. Configure Auto Scaling for the web and app tiers so they can scale based on load. Use Amazon Route 53 to switch traffic to the DR Region.
  3. Deploy a multi-Region solution ensuring the minimum distance requirements are met. Take regular snapshots of the web and application EC2 instances and replicate them to the DR Region. Launch an Amazon RDS cross-Region read replica in the DR Region that can be promoted. Use AWS CloudFormation to launch the web and application tiers in the event of a disaster. Fail over the RDS DB to the standby instance and use Amazon Route 53 to switch traffic to the DR Region.
  4. Deploy a multi-Region solution ensuring the minimum distance requirements are met. The DR environment should include fully-functional web, application, and database tiers in both regions with equivalent capacity. Activate the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic from one region to another using health check routing policies.
A
  1. Deploy a multi-Region solution ensuring the minimum distance requirements are met. The DR environment should be configured using the pilot light strategy with database replication to a standby database with a minimum of capacity. Use AWS CloudFormation to launch web servers, application servers and load balancers in case of a disaster. Use Amazon Route 53 to switch traffic to the DR Region and vertically scale the Amazon RDS DB instance.
  2. Deploy a scaled-down version of the production environment in a separate AWS Region ensuring the minimum distance requirements are met. The DR environment should include one instance for the web tier and one instance for the application tier. Create another database instance and configure database mirroring for MySQL. Configure Auto Scaling for the web and app tiers so they can scale based on load. Use Amazon Route 53 to switch traffic to the DR Region.
  3. Deploy a multi-Region solution ensuring the minimum distance requirements are met. Take regular snapshots of the web and application EC2 instances and replicate them to the DR Region. Launch an Amazon RDS cross-Region read replica in the DR Region that can be promoted. Use AWS CloudFormation to launch the web and application tiers in the event of a disaster. Fail over the RDS DB to the standby instance and use Amazon Route 53 to switch traffic to the DR Region.
  4. Deploy a multi-Region solution ensuring the minimum distance requirements are met. The DR environment should include fully-functional web, application, and database tiers in both regions with equivalent capacity. Activate the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic from one region to another using health check routing policies.
144
Q

A new employee is joining a security team. The employee initially requires access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. All security team members are added to the security team IAM group that provides additional permissions to manage all other AWS services.

The team lead wants to limit the permissions the new employee has access to until the employee takes on additional responsibilities, and then be able to easily add permissions as required, eventually providing the same access as all other security team employees.

How can the team lead limit the permissions assigned to the new user account whilst minimizing complexity?

  1. Create an IAM account for the new employee in a dedicated account. Use cross-account access to manage resources. Limit the permissions on the cross-account access role to only allow management of Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add permissions to the cross-account access IAM role.
  2. Create an IAM account for the new employee and add the account to the security team IAM group. Set a permissions boundary that grants access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add the additional services to the permissions boundary IAM policy.
  3. Create an IAM account for the new employee. Create a new IAM group for the employee and add a permissions policy that grants access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add the additional services to the IAM policy.
  4. Create an IAM account for the new employee and add the account to the security team IAM group. Use a Service Control Policy (SCP) to limit the maximum available permissions to Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, remove the SCP.
A
  1. Create an IAM account for the new employee in a dedicated account. Use cross-account access to manage resources. Limit the permissions on the cross-account access role to only allow management of Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add permissions to the cross-account access IAM role.
  2. Create an IAM account for the new employee and add the account to the security team IAM group. Set a permissions boundary that grants access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add the additional services to the permissions boundary IAM policy.
  3. Create an IAM account for the new employee. Create a new IAM group for the employee and add a permissions policy that grants access to manage Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, add the additional services to the IAM policy.
  4. Create an IAM account for the new employee and add the account to the security team IAM group. Use a Service Control Policy (SCP) to limit the maximum available permissions to Amazon DynamoDB, Amazon RDS, and Amazon CloudWatch. When the employee takes on new management responsibilities, remove the SCP.
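Option 2 above relies on an IAM permissions boundary: the user inherits the group's policies, but the boundary caps the permissions that are actually effective. A minimal sketch of that approach with boto3 follows; the user, group, and policy names are hypothetical.

```python
# Sketch: add the new hire to the existing group, capped by a permissions boundary.
import json
import boto3

iam = boto3.client("iam")

boundary_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:*", "rds:*", "cloudwatch:*"],
            "Resource": "*",
        }
    ],
}

boundary = iam.create_policy(
    PolicyName="new-hire-boundary",          # hypothetical
    PolicyDocument=json.dumps(boundary_document),
)

iam.create_user(
    UserName="new-security-hire",            # hypothetical
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
iam.add_user_to_group(GroupName="security-team", UserName="new-security-hire")

# Adding services later means publishing a new version of the boundary policy;
# group membership and the group's policies never need to change.
```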
145
Q

A company is creating an account structure on AWS. There will be separate accounts for the production and testing environments. The Solutions Architect wishes to implement centralized control of security identities and permissions to access the environments.

Which solution is most appropriate for these requirements?

  1. Create an AWS Organization that includes the production and testing accounts. Create IAM user accounts in the production and testing accounts and implement service control policies (SCPs) to centrally control permissions.
  2. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
  3. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the identity account and delegate access to the production and testing accounts.
  4. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-account access from the production account to the testing account.
A
  1. Create an AWS Organization that includes the production and testing accounts. Create IAM user accounts in the production and testing accounts and implement service control policies (SCPs) to centrally control permissions.
  2. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
  3. Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the identity account and delegate access to the production and testing accounts.
  4. Create all user accounts in the production account. Create roles for access in the production account and testing accounts. Grant cross-account access from the production account to the testing account.
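Option 2 above keeps all identities in a dedicated account and grants access to the environment accounts through cross-account roles whose trust policies name the identity account. A minimal sketch of the role created in the production account follows; the account ID, role name, and attached policy are hypothetical.

```python
# Sketch: a role in the production account that users from the identity account can assume.
import json
import boto3

IDENTITY_ACCOUNT_ID = "111111111111"  # hypothetical identity account

iam = boto3.client("iam")  # run with credentials in the production account

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{IDENTITY_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="ProductionAdmins",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="ProductionAdmins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",  # illustrative permissions
)

# A user in the identity account then uses sts:AssumeRole (or the console's
# "Switch Role") to work in production or testing, so identities stay centralized.
```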
146
Q

A company has created a management account and added several member accounts in an AWS Organization. The security team wishes to restrict access to a specific set of AWS services in the existing member accounts.

How can this requirement be implemented MOST efficiently?

  1. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group and add all IAM users to the group.
  2. Create an IAM role in each member account and attach a policy to the role that denies access to the specific set of services. Create user accounts in the management account and instruct users to assume the IAM role in each member account to gain access to services.
  3. Create a service control policy (SCP) that denies access to the specific set of services and apply the policy to the root of the organization.
  4. Add the member accounts to a single organizational unit (OU). Create a service control policy (SCP) that denies access to the specific set of services and attach it to the OU.
A
  1. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group and add all IAM users to the group.
  2. Create an IAM role in each member account and attach a policy to the role that denies access to the specific set of services. Create user accounts in the management account and instruct users to assume the IAM role in each member account to gain access to services.
  3. Create a service control policy (SCP) that denies access to the specific set of services and apply the policy to the root of the organization.
  4. Add the member accounts to a single organizational unit (OU). Create a service control policy (SCP) that denies access to the specific set of services and attach it to the OU.
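Option 4 above needs only one policy object: a deny SCP attached to the OU that contains the member accounts. A minimal sketch with the AWS Organizations API follows; the OU ID and the particular denied services are hypothetical examples.

```python
# Sketch: create a deny SCP and attach it to an OU (run from the management account).
import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["redshift:*", "sagemaker:*"],  # hypothetical restricted services
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyRestrictedServices",
    Description="Blocks a specific set of services in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # hypothetical OU containing the member accounts
)
```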
147
Q

A company is migrating an application into AWS. The application code has already been installed and tested on Amazon EC2. The database layer consists of a 25 TB MySQL database in the on-premises data center. There is a 50 Mbps internet connection and an IPSec VPN connection to the Amazon VPC. The company plans to go live on AWS within 2 weeks.

Which combination of actions will meet the migration schedule with the LEAST downtime? (Select THREE.)

  1. Use AWS SMS to import the on-premises database into AWS and then use AWS DMS to synchronize the database with an Amazon Aurora MySQL DB instance.
  2. Launch an RDS Aurora MySQL DB instance and load the database data from the Snowball export. Configure replication from the on-premises database to the RDS Aurora instance using the VPN.
  3. Export the data from the database using database-native tools and import the data to AWS using AWS Snowball.
  4. Launch an AWS DMS instance and configure it with the on-premises and Aurora DB instance information. Replicate using AWS DMS over the VPN connection.
  5. When the RDS Aurora MySQL database is fully synchronized, change the DNS entry to point to the Aurora DB instance and stop replication.
  6. Launch an Amazon RDS Aurora MySQL DB instance and use AWS DMS to migrate the on-premises database.
A
  1. Use AWS SMS to import the on-premises database into AWS and then use AWS DMS to synchronize the database with an Amazon Aurora MySQL DB instance.
  2. Launch an RDS Aurora MySQL DB instance and load the database data from the Snowball export. Configure replication from the on-premises database to the RDS Aurora instance using the VPN.
  3. Export the data from the database using database-native tools and import the data to AWS using AWS Snowball.
  4. Launch an AWS DMS instance and configure it with the on-premises and Aurora DB instance information. Replicate using AWS DMS over the VPN connection.
  5. When the RDS Aurora MySQL database is fully synchronized, change the DNS entry to point to the Aurora DB instance and stop replication.
  6. Launch an Amazon RDS Aurora MySQL DB instance and use AWS DMS to migrate the on-premises database.
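Several options above hinge on the same idea: the 25 TB bulk load travels by Snowball because the 50 Mbps link is too slow, and only the ongoing changes are replicated over the VPN until cutover. Where AWS DMS is used for that ongoing replication (as option 4 describes), the relevant piece is a change-data-capture-only task. A minimal sketch follows; all ARNs and identifiers are hypothetical.

```python
# Sketch: a CDC-only DMS task that replicates changes over the VPN after the Snowball load.
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-tables",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-to-aurora-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",  # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",  # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:RI",   # hypothetical
    MigrationType="cdc",  # changes only; the bulk data already arrived via Snowball
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)

# Once the Aurora target is in sync, point DNS at Aurora and stop replication.
```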
148
Q

An application runs in us-east-1 and consists of Amazon EC2 instances behind an Application Load Balancer (ALB) and an Amazon RDS MySQL database. The company is creating a disaster recovery solution to a second AWS Region (us-west-1). A solution has been created for replicating AMIs across Regions and an ALB is provisioned in us-west-1. Amazon Route 53 failover routing is configured appropriately. A Solutions Architect must complete the solution by designing the disaster recovery processes for the storage layer. The RPO is 5 minutes and the RTO is 15 minutes. The solution must be fully automated.

Which set of actions would complete the disaster recovery solution?

  1. Use an AWS Lambda function to replicate Amazon RDS snapshots to us-west-1. Use Amazon EventBridge to trigger an AWS Lambda function that creates a new RDS database from the replicated snapshot.
  2. Create a cross-Region read replica in us-west-1. Use Amazon EventBridge to trigger an AWS Lambda function that promotes the read replica to primary and updates the DNS endpoint or address for the database.
  3. Create a cron job that runs the mysqldump command to export the MySQL database to a file stored on Amazon S3. Use Amazon EventBridge to trigger an AWS Lambda function that imports the database export to a standby database in us-west-1.
  4. Deploy an Amazon RDS multi-master database across both AWS Regions. Configure the EC2 instances in us-west-1 to write to the local RDS writer endpoint.
A
  1. Use an AWS Lambda function to replicate Amazon RDS snapshots to us-west-1. Use Amazon EventBridge to trigger an AWS Lambda function that creates a new RDS database from the replicated snapshot.
  2. Create a cross-Region read replica in us-west-1. Use Amazon EventBridge to trigger an AWS Lambda function that promotes the read replica to primary and updates the DNS endpoint or address for the database.
  3. Create a cron job that runs the mysqldump command to export the MySQL database to a file stored on Amazon S3. Use Amazon EventBridge to trigger an AWS Lambda function that imports the database export to a standby database in us-west-1.
  4. Deploy an Amazon RDS multi-master database across both AWS Regions. Configure the EC2 instances in us-west-1 to write to the local RDS writer endpoint.
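Option 2 above pairs a cross-Region read replica (which keeps replication lag well inside the 5-minute RPO) with an EventBridge-triggered Lambda function that promotes the replica and repoints DNS. A minimal sketch of such a function follows; the replica identifier, hosted zone ID, and record name are hypothetical.

```python
# Sketch of the failover Lambda in us-west-1: promote the replica, then repoint DNS.
import boto3

rds = boto3.client("rds", region_name="us-west-1")
route53 = boto3.client("route53")


def handler(event, context):
    # The replica keeps its endpoint address after promotion, so look it up first.
    endpoint = rds.describe_db_instances(DBInstanceIdentifier="app-db-replica")[
        "DBInstances"
    ][0]["Endpoint"]["Address"]

    # Promote the cross-Region read replica to a standalone, writable primary.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")

    # Repoint the application's database CNAME at the promoted instance.
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",  # hypothetical hosted zone
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "db.internal.example.com",  # hypothetical record
                        "Type": "CNAME",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": endpoint}],
                    },
                }
            ]
        },
    )
```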
149
Q

An application generates around 15 GB of statistical data each day, and this is expected to increase over time. A Solutions Architect plans to store the data in Amazon S3 and use Amazon Athena to analyze the data. The data will be analyzed using date ranges.

Which combination of steps will ensure optimal performance as the data grows? (Select TWO.)

  1. Store each object in Amazon S3 with a key that uses a random string.
  2. Store the data in Amazon S3 using Apache Parquet or Apache ORC formats.
  3. Store the data in Amazon S3 compressed files less than 10 MB in size.
  4. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date.
  5. Store the data in separate buckets for each date range.
A
  1. Store each object in Amazon S3 with a key that uses a random string.
  2. Store the data in Amazon S3 using Apache Parquet or Apache ORC formats.
  3. Store the data in Amazon S3 compressed files less than 10 MB in size.
  4. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date.
  5. Store the data in separate buckets for each date range.
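Options 2 and 4 above reinforce each other: columnar Parquet (or ORC) files laid out in Hive-style date partitions (for example s3://bucket/stats/dt=2024-06-01/...) let Athena read only the columns and partitions a date-range query needs. A minimal sketch of the corresponding table definition submitted through the Athena API follows; the bucket, database, table, and column names are hypothetical.

```python
# Sketch: define a Parquet table partitioned by date, then query by date range.
import boto3

athena = boto3.client("athena")

ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS stats (
    metric_name string,
    metric_value double
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://example-stats-bucket/stats/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "analytics"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# A date-range query then prunes partitions instead of scanning everything:
#   SELECT metric_name, avg(metric_value)
#   FROM stats
#   WHERE dt BETWEEN '2024-06-01' AND '2024-06-30'
#   GROUP BY metric_name;
```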
150
Q

An eCommerce application offers a membership program. Members of the program need to be able to download all files in a secured Amazon S3 bucket. The access should be restricted to members of the program and not available to anyone else. An Amazon CloudFront distribution has been created to deliver the content to users around the world.

What is the most efficient method a Solutions Architect should use to securely enable access to the files in the S3 bucket?

  1. Configure the application to generate a signed URL for authenticated users that provides time-limited access to the files.
  2. Configure the application to send Set-Cookie headers to the viewer and control access to the files using signed cookies.
  3. Use an Origin Access Identity (OAI) to control access to the S3 bucket to users of the CloudFront distribution only.
  4. Configure a behavior in CloudFront that forwards requests for the files to the S3 bucket based on a path pattern.
A
  1. Configure the application to generate a signed URL for authenticated users that provides time-limited access to the files.
  2. Configure the application to send Set-Cookie headers to the viewer and control access to the files using signed cookies.
  3. Use an Origin Access Identity (OAI) to control access to the S3 bucket to users of the CloudFront distribution only.
  4. Configure a behavior in CloudFront that forwards requests for the files to the S3 bucket based on a path pattern.
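Option 2 above uses CloudFront signed cookies, which suit a membership model because one cookie policy can cover every object under a path rather than signing each file's URL. The sketch below shows one way the application could generate the cookies with a custom policy; the distribution domain, key pair ID, and private key path are hypothetical, and the encoding details follow the documented CloudFront cookie format as understood here.

```python
# Sketch: generate CloudFront signed cookies granting time-limited access to /members/*.
import base64
import json
import time

import rsa  # third-party 'rsa' package, used here to sign the policy

DISTRIBUTION_DOMAIN = "d111111abcdef8.cloudfront.net"  # hypothetical
KEY_PAIR_ID = "K2JCJMDEHXQW5F"                          # hypothetical key pair ID
PRIVATE_KEY_PATH = "private_key.pem"                    # hypothetical PKCS#1 PEM key


def _cf_b64(data: bytes) -> str:
    # CloudFront requires '+', '=', and '/' to be replaced in cookie values.
    return (
        base64.b64encode(data)
        .decode("utf-8")
        .replace("+", "-")
        .replace("=", "_")
        .replace("/", "~")
    )


def signed_cookies(resource_path: str, ttl_seconds: int = 3600) -> dict:
    policy = json.dumps(
        {
            "Statement": [
                {
                    "Resource": f"https://{DISTRIBUTION_DOMAIN}/{resource_path}",
                    "Condition": {
                        "DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl_seconds}
                    },
                }
            ]
        },
        separators=(",", ":"),
    ).encode("utf-8")

    with open(PRIVATE_KEY_PATH, "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    signature = rsa.sign(policy, private_key, "SHA-1")

    return {
        "CloudFront-Policy": _cf_b64(policy),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }


# The application returns these values as Set-Cookie headers to authenticated members,
# who can then download any file under /members/* through the distribution.
print(signed_cookies("members/*"))
```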