Practice Exam 3 - Flashcards
1. A company wants to run an application on AWS. The company plans to provision its application in Docker containers running in an Amazon ECS cluster. The application requires a MySQL database and the company plans to use Amazon RDS. What is the MOST cost-effective solution to meet these requirements?
- Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database using Reserved Instances.
- Create an ECS cluster using On-Demand Instances. Provision the database using On-Demand Instances.
- Create an ECS cluster using On-Demand Instances. Provision the database using Spot Instances.
- Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database using On-Demand Instances.
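For context on the Spot option, Spot Instance draining is turned on through the ECS container agent configuration on each container instance. A minimal sketch, assuming boto3 and placeholder values for the cluster name, AMI, and instance type:

```python
# Hypothetical launch template for ECS Spot container instances. The user data
# joins the instance to a cluster and enables Spot Instance draining so tasks
# are rescheduled before a Spot interruption.
import base64
import boto3

user_data = """#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=example-cluster
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
EOF
"""

ec2 = boto3.client("ec2")
ec2.create_launch_template(
    LaunchTemplateName="ecs-spot-draining",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder ECS-optimized AMI
        "InstanceType": "m5.large",
        "InstanceMarketOptions": {"MarketType": "spot"},
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```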
2. A company has a requirement to store documents that will be accessed by a serverless application. The documents will be accessed frequently for the first 3 months, and rarely after that. The documents must be retained for 7 years. What is the MOST cost-effective solution to meet these requirements?
- Store the documents in Amazon EFS. Create a cron job to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
- Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier, then expire the documents from Amazon S3 Glacier that are more than 7 years old.
- Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move the documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
- Store the documents in an encrypted EBS volume and create a cron job to delete the documents after 7 years.
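A minimal sketch of the lifecycle rule behind the correct answer, assuming boto3 and a hypothetical bucket name (90 days ≈ 3 months, 2,555 days ≈ 7 years):

```python
# Transition documents to S3 Glacier after ~3 months, then expire them after ~7 years.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-documents-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```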
3. A global enterprise company is in the process of creating an infrastructure services platform for its users. The company has the following requirements:
· Centrally manage the creation of infrastructure services using a central AWS account.
· Distribute infrastructure services to multiple accounts in AWS Organizations.
· Follow the principle of least privilege to limit end users’ permissions for launching and managing applications.
Which combination of actions using AWS services will meet these requirements? (Select TWO.)
- Define the infrastructure services in AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM users that require access to the S3 bucket policy.
- Allow IAM users to have AWSServiceCatalogEndUserFullAccess permissions. Assign the policy to a group called Endusers, and add all users to the group. Apply launch constraints.
- Grant IAM users AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
- Allow IAM users to have AWSServiceCatalogEndUserReadOnlyAccess permissions only. Assign the policy to a group called Endusers, and add all users to the group. Apply launch constraints.
- Define the infrastructure services in AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the AWS Organizations structure created for the company.
4. A database for an eCommerce website was deployed on an Amazon RDS for MySQL DB instance with General Purpose SSD storage. The database performed well for several weeks until a peak shopping period when customers experienced slow performance and timeouts. Amazon CloudWatch metrics indicate that reads and writes to the DB instance were experiencing long response times. Metrics show that CPU utilization is below 50%, there is plenty of available memory, and there is sufficient free storage space. There is no evidence of database connectivity issues in the application server logs.
What could be the root cause of the database performance issues?
- The increased load resulted in the maximum number of allowed connections to the database instance.
- A large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase.
- The increased load caused the data in the tables to change frequently, requiring indexes to be rebuilt to optimize queries.
- A large number of reads and writes exhausted the network bandwidth available to the RDS for MySQL DB instance.
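Background for the I/O credit option: General Purpose SSD (gp2) volumes earn burst credits against a baseline that scales with volume size, so a database provisioned with a small volume can exhaust its credit balance under sustained reads and writes. Approximately:

```latex
\text{Baseline IOPS} = \max\bigl(100,\; 3 \times \text{volume size in GiB}\bigr),
\qquad \text{Burst IOPS} \leq 3000 \quad \text{(volumes smaller than 1 TiB)}
```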
5. A company is using multiple AWS accounts. The company’s DNS records are stored in a private Amazon Route 53 hosted zone in the management account and their applications are running in a production account.
A Solutions Architect is attempting to deploy an application into the production account. The application must resolve a CNAME record set for an Amazon RDS endpoint. The CNAME record set was created in a private hosted zone in the management account.
The deployment failed to start and the Solutions Architect has discovered that the CNAME record is not resolvable on the application EC2 instance despite being correctly created in Route 53.
Which combination of steps should the Solutions Architect take to resolve this issue? (Select TWO.)
- Create a private hosted zone for the record set in the production account. Configure Route 53 replication between AWS accounts.
- Create an authorization to associate the private hosted zone in the management account with the new VPC in the production account.
- Associate a new VPC in the production account with a hosted zone in the management account. Delete the association authorization in the management account.
- Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance’s private IP in the private hosted zone.
- Hardcode the DNS name and IP address of the RDS database instance into the /etc/resolv.conf file on the application server.
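A minimal sketch of the two correct steps, assuming boto3 and placeholder hosted zone and VPC identifiers. The authorization is created in the management account and the association is performed from the production account:

```python
import boto3

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # private hosted zone (management account), placeholder
PROD_VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234def567890"}  # production VPC, placeholder

# Step 1: run with management account credentials
mgmt_r53 = boto3.client("route53")
mgmt_r53.create_vpc_association_authorization(HostedZoneId=HOSTED_ZONE_ID, VPC=PROD_VPC)

# Step 2: run with production account credentials
prod_r53 = boto3.client("route53")
prod_r53.associate_vpc_with_hosted_zone(HostedZoneId=HOSTED_ZONE_ID, VPC=PROD_VPC)

# Optional cleanup: the authorization can be removed once the association exists
mgmt_r53.delete_vpc_association_authorization(HostedZoneId=HOSTED_ZONE_ID, VPC=PROD_VPC)
```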
6. A new AWS Lambda function has been created to replicate objects that are received in an Amazon S3 bucket to several other S3 buckets in various AWS accounts. The Lambda function is triggered when an object creation event occurs in the main S3 bucket. A Solutions Architect is concerned that the function may impact other critical functions due to Lambda’s regional concurrency limit.
How can the solutions architect ensure the new Lambda function will not impact other critical Lambda functions?
- Ensure the new Lambda function implements an exponential backoff algorithm. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
- Configure Amazon S3 event notifications to publish events to an Amazon SQS queue in a different account. Create the Lambda function in the same account as the SQS queue and trigger the function when messages are published to the queue.
- Configure the reserved concurrency limit for the new Lambda function. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric. (Correct)
- Modify the execution timeout for the Lambda function to the maximum allowable value. Monitor existing critical Lambda functions with Amazon CloudWatch alarms for the Throttles Lambda metric.
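A minimal sketch of the correct answer, assuming boto3 and hypothetical function names: cap the replication function's concurrency, and alarm on throttles for a critical function:

```python
import boto3

# Reserve (and cap) concurrency for the new replication function so it cannot
# consume the shared regional concurrency pool.
boto3.client("lambda").put_function_concurrency(
    FunctionName="s3-replication-function",   # placeholder
    ReservedConcurrentExecutions=100,
)

# Alarm when an existing critical function starts throttling.
boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="critical-function-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "critical-function"}],  # placeholder
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```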
7. A company has a mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application is write intensive and costs have recently increased significantly. The biggest increase in cost has been for the AWS Lambda functions. Application utilization is unpredictable but has been increasing steadily each month.
A Solutions Architect has noticed that the Lambda function execution time averages over 4 minutes. This is due to wait time for a high-latency network call to an on-premises MySQL database. A VPN is used to connect to the VPC.
How can the Solutions Architect reduce the cost of the current architecture?
- Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL.
- Enable API caching on API Gateway to reduce the number of Lambda function invocations.
- Enable Auto Scaling in DynamoDB.
- Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
- Enable local caching in the mobile application to reduce the Lambda function invocation calls.
- Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
- Cache the API Gateway results to Amazon CloudFront.
- Use Amazon EC2 Reserved Instances instead of Lambda.
- Enable Auto Scaling on EC2 and use Spot Instances during peak times.
- Enable DynamoDB Auto Scaling to manage target utilization.
- Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations.
- Enable DynamoDB Accelerator for frequently accessed records and enable the DynamoDB Auto Scaling feature.
8. A company has deployed an application that uses an Amazon DynamoDB table and the user base has increased significantly. Users have reported poor response times during busy periods but no error pages have been generated. The application uses Amazon DynamoDB in read-only mode. The operations team has determined that the issue relates to ProvisionedThroughputExceeded exceptions in the application logs when doing Scan and read operations.
A Solutions Architect has been tasked with improving application performance. Which solutions will meet these requirements whilst MINIMIZING changes to the application? (Select TWO.)
- Provision a DynamoDB Accelerator (DAX) cluster with the correct number and type of nodes. Tune the item and query cache configuration for an optimal user experience.
- Provision an Amazon ElastiCache for Redis cluster. The cluster should be provisioned with enough shards to handle the peak application load.
- Include error retries and exponential backoffs in the application code to handle throttling errors and reduce load during periods of high requests.
- Enable adaptive capacity for the DynamoDB table to minimize throttling due to throughput exceptions.
- Enable DynamoDB Auto Scaling to manage the throughput capacity as table traffic increases. Set the upper and lower limits to control costs and set a target utilization based on the peak usage.
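A minimal sketch of the auto scaling option, assuming boto3, a hypothetical table name, and illustrative capacity limits (read capacity shown; write capacity is configured the same way):

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with upper/lower limits.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/ExampleTable",                      # placeholder
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

# Target tracking keeps consumed capacity near the target utilization.
aas.put_scaling_policy(
    PolicyName="example-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/ExampleTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```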
9. A company requires that only the master account in AWS Organizations is able to purchase Amazon EC2 Reserved Instances. Current and future member accounts should be blocked from purchasing Reserved Instances.
Which solution will meet these requirements?
- Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the root of the organization. (Correct)
- Move all current member accounts to a new OU. Create an SCP with the Deny effect on the ec2:PurchaseReservedInstancesOffering action. Attach the SCP to the new OU.
- Create an OU for the master account and each member account. Move the accounts into their respective OUs. Apply an SCP to the master account’s OU with the Allow effect for the ec2:PurchaseReservedInstancesOffering action.
- Create an Amazon CloudWatch Events rule that triggers a Lambda function to terminate any Reserved Instances launched by member accounts.
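A minimal sketch of the correct answer, assuming boto3 and a placeholder root ID. SCPs never restrict the management (master) account, so attaching the deny at the organization root blocks only current and future member accounts:

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:PurchaseReservedInstancesOffering",
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="DenyRIPurchases",
    Description="Block Reserved Instance purchases in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Root ID is a placeholder; look it up with org.list_roots()
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examp",
)
```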
10. A company has deployed a SAML 2.0 federated identity solution with their on-premises identity provider (IdP) to authenticate users’ access to the AWS environment. A Solutions Architect ran authentication tests through the federated identity web portal and access to the AWS environment was granted. When a test user attempts to authenticate through the federated identity web portal, they are not able to access the AWS environment.
Which items should the solutions architect check to ensure identity federation is properly configured? (Select THREE.)
- The IAM users’ permissions policy allows the sts:AssumeRoleWithSAML API action.
- The AWS STS service has the on-premises IdP configured as an event source for authentication requests.
- The IAM users are providing the time-based one-time password (TOTP) codes required for authenticated access.
- The trust policy of the IAM roles created for the federated users or federated groups sets the SAML provider as the principal.
- The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
- The company’s IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
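For reference, a sketch of the role trust policy the correct answers refer to, assuming boto3 and placeholder account and provider names: the SAML provider is the principal and sts:AssumeRoleWithSAML is the permitted action:

```python
import json
import boto3

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::123456789012:saml-provider/ExampleIdP"  # placeholder
            },
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="ExampleFederatedRole",  # placeholder
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```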
11. A company is migrating its on-premises systems to AWS. The systems consist of a combination of Windows and Linux virtual machines and physical servers. The company wants to be able to identify dependencies between on-premises systems and group systems together into applications to build migration plans. The company also needs to understand the performance requirements for systems so they can be right-sized.
How can these requirements be met?
- Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Connector to collect data for one week.
- Extract system information from an on-premises configuration management database (CMDB). Import the data directly into the Application Discovery Service.
- Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
- Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the Discovery Agent to collect data for a period of time.
12. A Solutions Architect is developing a mechanism to gain security approval for Amazon EC2 images (AMIs) so that they can be used by developers. The AMIs must go through an automated assessment process (CVE assessment) and be marked as approved before developers can use them. The approved images must be scanned every 30 days to ensure compliance.
Which combination of steps should the Solutions Architect take to meet these requirements while following best practices? (Select TWO.)
- Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances and use AWS Systems Manager Automation documents for remediation.
- Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the approved AMIs.
- Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
- Use Amazon Inspector to mount the CVE assessment package on the EC2 instances launched from the approved AMIs.
- Use Amazon GuardDuty to run the CVE assessment package on the EC2 instances launched from the approved AMIs.
13. A company is designing an application that will require cross-Region disaster recovery with an RTO of less than 5 minutes and an RPO of less than 1 minute. The application tier DR solution has already been designed and a Solutions Architect must design the data recovery solution for the MySQL database tier.
How should the database tier be configured to meet the data recovery requirements?
- Use an Amazon RDS for MySQL instance with a Multi-AZ deployment.
- Create an Amazon RDS instance in the active Region and use a MySQL standby database on an Amazon EC2 instance in the failover Region.
- Use an Amazon Aurora global database with the primary in the active Region and the secondary in the failover Region.
- Use an Amazon RDS for MySQL instance with a cross-Region read replica in the failover Region.
14. A company runs hundreds of applications across several data centers and office locations. The applications include Windows and Linux operating systems, physical installations as well as virtualized servers, and MySQL and Oracle databases. There is no central configuration management database (CMDB) and existing documentation is incomplete and outdated. A Solutions Architect needs to understand the current environment and estimate the cloud resource costs after the migration.
Which tools or services should the Solutions Architect use to plan the cloud migration? (Select THREE.)
- AWS Cloud Adoption Readiness Tool (CART)
- AWS Migration Hub
- AWS Application Discovery Service
- AWS Config
- Amazon CloudWatch Logs
- AWS Server Migration Service
15. An eCommerce company is running a promotional campaign and expects a large volume of user sign-ups on a web page that collects user information and preferences. The website runs on Amazon EC2 instances and uses an Amazon RDS for PostgreSQL DB instance. The volume of traffic is expected to be high and may be unpredictable with several spikes in activity. The traffic will result in a large number of database writes.
A Solutions Architect needs to build a solution that does not change the underlying data model and ensures that submissions are not dropped before they are committed to the database.
Which solution meets these requirements?
- Create an Amazon ElastiCache for Memcached cluster in front of the existing database instance to increase write performance.
- Migrate to Amazon DynamoDB and manage throughput capacity with automatic scaling.
- Create an Amazon SQS queue and decouple the application and database layers. Configure an AWS Lambda function to write items from the queue into the database.
- Use scheduled scaling to scale up the existing DB instance immediately before the event and then automatically scale down afterwards.
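A minimal sketch of the producer side of the correct answer, assuming boto3 and a hypothetical queue name; a separate Lambda consumer (not shown) drains the queue and writes to PostgreSQL, so the data model is unchanged and submissions are not dropped:

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="signup-submissions")["QueueUrl"]  # placeholder name

def submit_signup(user: dict) -> None:
    """Called by the web tier for each form submission instead of a direct DB write."""
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(user))

submit_signup({"email": "user@example.com", "plan": "promo"})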
16. A financial services company receives a data feed from a credit card service provider. The feed consists of approximately 2,500 records that are sent every 10 minutes in plaintext and delivered over HTTPS to an encrypted S3 bucket. The data includes credit card data that must be automatically masked before sending the data to another S3 bucket for additional internal processing. There is also a requirement to remove and merge specific fields, and then transform the record into JSON format.
Which solution will meet these requirements?
- Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
- Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing. (Correct)
- Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
- Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate task.
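A minimal sketch of the trigger in the correct answer, assuming a hypothetical Glue job name; the Glue ETL job itself (masking, field merging, JSON output) is defined separately:

```python
# Lambda handler invoked by the S3 object-created event: it starts the Glue ETL
# job and passes the newly delivered object as job arguments.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]
    glue.start_job_run(
        JobName="card-feed-transform",  # placeholder job name
        Arguments={
            "--source_bucket": record["bucket"]["name"],
            "--source_key": record["object"]["key"],
        },
    )
```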
17. A solution is required for updating user metadata and will be initiated by a fleet of front-end web servers. The solution must be capable of scaling rapidly from hundreds to tens of thousands of jobs in less than a minute. The solution must be asynchronous and minimize costs.
Which solution should a Solutions Architect use to meet these requirements?
- Create an AWS CloudFormation stack that is updated by an AWS Lambda function. Configure the Lambda function to update the metadata.
- Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update the web application to initiate Step Functions for every job.
- Create an Amazon EC2 Auto Scaling group of EC2 instances that pull messages from an Amazon SQS queue and process the user metadata updates. Configure the web application to send jobs to the queue.
- Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
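A minimal sketch of the wiring in the correct answer, assuming boto3 and placeholder names: the SQS queue becomes an event source for the metadata-update function, so Lambda scales out with queue depth and there is no idle cost:

```python
import boto3

lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:user-metadata-jobs",  # placeholder ARN
    FunctionName="update-user-metadata",                                      # placeholder name
    BatchSize=10,  # messages delivered to each invocation
)
```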
18. A company uses AWS Organizations. The company recently acquired a new business unit and invited the new unit’s existing account to the company’s organization. The organization uses a deny list SCP in the root of the organization and all accounts are members of a single OU named Production.
The administrators of the new business unit discovered that they are unable to access AWS Database Migration Service (DMS) to complete an in-progress migration.
Which option will temporarily allow administrators to access AWS DMS and complete the migration project?
- Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the organization’s deny list SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS DMS are complete.
- Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow the required services only. Temporarily apply an SCP to the organization root that allows AWS DMS actions for principals only in the new account.
- Create a temporary OU named Staging for the new account. Apply an SCP to the Staging OU to allow AWS DMS actions. Move the new account to the Production OU when the migration project is complete.
- Remove the organization’s root SCPs that limit access to AWS DMS. Create an SCP that allows AWS DMS actions and apply the SCP to the Production OU.
19. A company is testing an application that collects data from sensors fitted to vehicles. The application collects usage statistics data every 4 minutes. The data is sent to Amazon API Gateway, where it is processed by an AWS Lambda function, and the results are stored in an Amazon DynamoDB table.
As the sensors have been fitted to more vehicles, and as more metrics have been configured for collection, the Lambda function execution time has increased from a few seconds to over 2 minutes. There are also many TooManyRequestsException errors being generated by Lambda.
Which combination of changes will resolve these issues? (Select TWO.)
- Collect data in an Amazon SQS FIFO queue, which triggers a Lambda function to process each message.
- Stream the data into an Amazon Kinesis data stream from API Gateway and process the data in batches.
- Increase the CPU units assigned to the Lambda functions.
- Use Amazon EC2 instead of Lambda to process the data.
- Increase the memory available to the Lambda functions.
20. A Solutions Architect is designing a web application that will serve static content in an Amazon S3 bucket and dynamic content hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The application will use Amazon CloudFront and the solution should require that the content is available through CloudFront only.
Which combination of steps should the Solutions Architect take to restrict direct content access to CloudFront? (Select THREE.)
- Create a CloudFront Origin Access Identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only. (Correct)
- Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the CloudFront distribution.
- Configure CloudFront to add a custom header to requests that it sends to the origin.
- Configure the ALB to add a custom header to HTTP requests that are sent to the EC2 instances.
- Create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB.
- Configure an S3 bucket policy to allow access from the CloudFront IP addresses only.
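A minimal sketch of the S3 portion of the solution, assuming boto3, a hypothetical bucket name, and a placeholder OAI ID:

```python
import json
import boto3

# Allow only the CloudFront origin access identity to read objects from the bucket.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-static-content/*",  # placeholder bucket
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-static-content", Policy=json.dumps(bucket_policy))
```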
21. A company runs a data processing application on-premises and plans to move it to the AWS Cloud. Files are uploaded by users to a web application which then stores the files on an NFS-based storage system and places a message on a queue. The files are then processed from the queue and the results are returned to the user (and stored in long-term storage). This process can take up to 30 minutes. The processing times vary significantly and can be much higher during business hours.
What is the MOST cost-effective migration recommendation?
- Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in an Amazon S3 bucket.
- Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use an AWS Lambda function to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS.
- Create a queue using Amazon SQS. Run the web application on Amazon EC2 and configure it to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket. (Correct)
- Create a queue using Amazon MQ. Run the web application on Amazon EC2 and configure it to publish to the new queue. Launch an Amazon EC2 instance from a preconfigured AMI to poll the queue, pull requests, and process the files. Store the processed files in Amazon EFS. Terminate the EC2 instance after the task is complete.
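A minimal sketch of the scaling piece of the correct answer, assuming boto3 and placeholder names. (AWS actually recommends a computed backlog-per-instance metric; the raw queue depth is used here for brevity.)

```python
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="file-processing-asg",   # placeholder group name
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "file-processing-queue"}],  # placeholder
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # illustrative target queue depth
    },
)
```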
22. A new application that provides fitness and training advice has become extremely popular with thousands of new users from around the world. The web application is hosted on a fleet of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The content consists of static media files and different resources must be loaded depending on the client operating system.
Users have reported increasing latency for loading web pages and Amazon CloudWatch is showing high utilization of the EC2 instances.
Which set of actions should a Solutions Architect take to improve response times?
- Create a separate ALB for each client operating system. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
- Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
- Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
- Create separate Auto Scaling groups based on client operating systems. Switch to a Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
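A minimal sketch of the Lambda@Edge handler described in the CloudFront option, with an assumed S3 path layout; the distribution must be configured to forward the User-Agent header to the origin request for this to work:

```python
# Lambda@Edge origin-request handler: rewrite the object path based on the
# viewer's User-Agent so device-specific resources are served from S3.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    user_agent = request["headers"].get("user-agent", [{"value": ""}])[0]["value"]

    if "Android" in user_agent:
        prefix = "/android"
    elif "iPhone" in user_agent or "iPad" in user_agent:
        prefix = "/ios"
    else:
        prefix = "/desktop"

    request["uri"] = prefix + request["uri"]  # e.g. /desktop/images/logo.png (assumed layout)
    return request
```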
23. A company includes several business units that each use a separate AWS account and a parent company AWS account. The company requires a single AWS bill across all AWS accounts with costs broken out for each business unit. The company also requires that services and features be restricted in the business unit accounts and this must be governed centrally.
Which combination of steps should a Solutions Architect take to meet these requirements? (Select TWO.)
- Use permissions boundaries applied to each business unit’s AWS account to define the maximum permissions available for services and features.
- Use AWS Organizations to create a single organization in the parent account with all features enabled. Then, invite each business unit’s AWS account to join the organization.
- Use AWS Organizations to create a separate organization for each AWS account with all features enabled. Then, create trust relationships between the AWS organizations.
- Enable consolidated billing in the parent account’s billing console and link the business unit AWS accounts.
- Create an SCP that allows only approved services and features, then apply the policy to the business unit AWS accounts.
24. A company is migrating an order processing application to the AWS Cloud. The usage patterns vary significantly but the application must be available at all times. Orders must be processed immediately and in the order that they are received. Which actions should a Solutions Architect take to meet these requirements?
- Use Amazon SQS with FIFO to queue messages in the correct order. Use Spot Instances in multiple Availability Zones for processing.
- Use Amazon SNS with FIFO to send orders in the correct order. Use Spot Instances in multiple Availability Zones for processing.
- Use Amazon SQS with FIFO to queue messages in the correct order. Use Reserved Instances in multiple Availability Zones for processing.
- Use Amazon SNS with FIFO to send orders in the correct order. Use a single large Reserved Instance for processing.
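A minimal sketch of the SQS FIFO piece of the correct answer, assuming boto3 and placeholder names; messages that share a message group ID are delivered in order, and content-based deduplication suppresses accidental duplicates:

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "12345", "items": ["sku-1", "sku-2"]}),  # placeholder order
    MessageGroupId="orders",  # messages within one group are processed in order
)
```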