Timed Mode Set 1 Flashcards

1
Q

An accounting firm hosts a mix of Windows and Linux Amazon EC2 instances in its AWS account. The solutions architect has been tasked to conduct a monthly performance check on all production instances. There are more than 200 On-Demand EC2 instances running in their production environment and it is required to ensure that each instance has a logging feature that collects various system details such as memory usage, disk space, and other metrics. The system logs will be analyzed using AWS Analytics tools and the results will be stored in an S3 bucket.

Which of the following is the most efficient way to collect and analyze logs from the instances with minimal effort?

A

Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.

Enable the Traffic Mirroring feature and install AWS CDK on each On-Demand EC2 instance. Create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Set up CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.

Set up and install the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.

Set up and install AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.
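
For illustration only, a minimal sketch of how the unified CloudWatch agent option above could be rolled out at scale: the agent configuration (memory and disk metrics plus one system log file) is stored in SSM Parameter Store so every instance loads the same settings. The parameter name, log group, and file path are placeholders, not part of the question.

```python
import json
import boto3

# Hypothetical unified CloudWatch agent configuration: memory and disk metrics
# plus one system log file shipped to CloudWatch Logs.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "mem": {"measurement": ["mem_used_percent"]},
            "disk": {"measurement": ["used_percent"], "resources": ["*"]},
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/messages",          # placeholder path
                        "log_group_name": "production-system-logs",
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    },
}

# Store the configuration centrally so all 200+ instances can fetch it when
# the agent starts (parameter name is a placeholder).
boto3.client("ssm").put_parameter(
    Name="AmazonCloudWatch-prod-config",
    Type="String",
    Value=json.dumps(agent_config),
    Overwrite=True,
)
```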

2
Q

A private bank is hosting a secure web application that allows its agents to view highly sensitive information about the clients. The amount of traffic that the web app will receive is known and not expected to fluctuate. An SSL certificate will be used as part of the application's data security. The chief information security officer (CISO) is concerned about the security of the SSL private key. The CISO wants to ensure that the key cannot be accidentally or intentionally moved outside the corporate environment. The solutions architect is also concerned that the application logs might contain some sensitive information. The EBS volumes used to store the data are already encrypted. In this scenario, the application logs must be stored securely and durably so that they can only be decrypted by authorized employees.

Which of the following is the most suitable and highly available architecture that can meet all of the requirements?

A

Distribute traffic to a set of web servers using an Elastic Load Balancer. Use TCP load balancing for the load balancer and configure your web servers to retrieve the SSL private key from a private Amazon S3 bucket on boot. Use another private Amazon S3 bucket to store your web server logs using Amazon S3 server-side encryption.

Distribute traffic to a set of web servers using an Elastic Load Balancer. To secure the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Lastly, write your application logs to an instance store volume that has been encrypted using a randomly generated AES key.

Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use an AWS CloudHSM to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.

Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use CloudHSM deployed to two Availability Zones to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.

3
Q

A company is hosting a multi-tier web application in AWS. It is composed of an Application Load Balancer and EC2 instances across three Availability Zones. During peak load, its stateless web servers operate at 95% utilization. The system is set up to use Reserved Instances to handle the steady-state load and On-Demand Instances to handle the peak load. Your manager instructed you to review the current architecture and make the necessary changes to improve the system.

Which of the following provides the most cost-effective architecture to allow the application to recover quickly in the event that an Availability Zone is unavailable during peak load?

A

Launch a Spot Fleet using a diversified allocation strategy, with Auto Scaling enabled on each AZ to handle the peak load instead of On-Demand instances. Retain the current setup for handling the steady state load.

Use a combination of Reserved and On-Demand instances on each AZ to handle both the steady state and peak load.

Launch an Auto Scaling group of Reserved instances on each AZ to handle the peak load. Retain the current setup for handling the steady state load.

Use a combination of Spot and On-Demand instances on each AZ to handle both the steady state and peak load.
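
As a rough sketch of the Spot Fleet option above, the request below uses the diversified allocation strategy across two instance pools; the IAM fleet role ARN, AMI ID, and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Spot Fleet spread across several instance pools (diversified allocation).
response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
        "AllocationStrategy": "diversified",
        "TargetCapacity": 10,
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.large",
             "SubnetId": "subnet-aaaa1111"},
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.large",
             "SubnetId": "subnet-bbbb2222"},
        ],
    }
)
print(response["SpotFleetRequestId"])
```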

4
Q

A company has created multiple accounts in AWS to support the rapid growth of its cloud services. The multiple accounts are used to separate their various departments such as finance, human resources, engineering, and many others. Each account is managed by a Systems Administrator who has root access to that specific account only. There is a requirement to centrally manage policies across multiple AWS accounts by allowing or denying particular AWS services for individual accounts, or for groups of accounts.

Which is the most suitable solution that you should implement with the LEAST amount of complexity?

A

Use AWS Organizations and Service Control Policies to control the list of AWS services that can be used by each member account.

Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider.

Connect all departments by setting up cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.

Set up AWS Organizations and Organizational Units (OU) to connect all AWS accounts of each department. Create a custom IAM Policy to allow or deny the use of certain AWS services for each account.
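
For reference, a minimal sketch of the Service Control Policy approach: a deny-list SCP is created in AWS Organizations and attached to a member account (or an OU). The denied services and the account ID are examples only.

```python
import json
import boto3

org = boto3.client("organizations")

# Example deny-list SCP; the services shown are purely illustrative.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": ["redshift:*", "dynamodb:*"], "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="DenySelectedServices",
    Description="Block selected AWS services for specific member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to a member account (placeholder ID); attaching it to an OU
# would apply it to a whole group of accounts instead.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```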

5
Q

A company has several IoT-enabled devices and sells them to customers around the globe. Every 5 minutes, each IoT device sends back a data file that includes the device status and other information to an Amazon S3 bucket. Every midnight, a Python cron job runs from an Amazon EC2 instance to read and process each data file on the S3 bucket and load the values into a designated Amazon RDS database. The cron job takes about 10 minutes to process a day's worth of data. After each data file is processed, it is eventually deleted from the S3 bucket. The company wants to expedite the process and access the processed data in Amazon RDS as soon as possible.

Which of the following actions would you implement to achieve this requirement with the LEAST amount of effort?

A

Increase the Amazon EC2 instance size and spawn more instances to speed up the processing of the data files. Set the Python script cron job schedule to a 1-minute interval to further improve the access time.

Convert the Python script cron job to an AWS Lambda function. Configure AWS CloudTrail to log data events of the Amazon S3 bucket. Set up an Amazon EventBridge rule to trigger the Lambda function whenever an upload event on the S3 bucket occurs.

Convert the Python script cron job to an AWS Lambda function. Configure the Amazon S3 bucket event notifications to trigger the Lambda function whenever an object is uploaded to the bucket.

Convert the Python script cron job to an AWS Lambda function. Create an Amazon EventBridge rule scheduled at 1-minute intervals and trigger the Lambda function. Create parallel CloudWatch rules that trigger the same Lambda function to further reduce the processing time.
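
A minimal sketch of the S3 event notification option: the bucket invokes the processing Lambda function for every new object. The bucket name and function ARN are placeholders, and the function must already grant s3.amazonaws.com permission to invoke it.

```python
import boto3

s3 = boto3.client("s3")

# Invoke the processing Lambda function whenever a new data file is uploaded.
s3.put_bucket_notification_configuration(
    Bucket="iot-device-data-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-device-file",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```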

6
Q

An enterprise runs its CMS application on an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer. The instances are placed on private subnets while the ALB is placed on public subnets. As part of best practices, the AWS Systems Manager Agent is installed on the instances and AWS Systems Manager Session Manager is used to log in to the instances. The EC2 instances send application logs to Amazon CloudWatch Logs.

Upon the deployment of the new application version, the new instances are being marked as unhealthy by the ALB and are being replaced by the Auto Scaling group. For troubleshooting, the solutions architect tries to log in to the unhealthy instances, but the instances are getting terminated. The collected logs on CloudWatch Logs do not show definitive errors in the application.

Which of the following options is the quickest way for the solutions architect to troubleshoot the problem?

A

Go to the Auto Scaling Groups section in the AWS console and suspend the “Terminate” process for the ASG. Log in to one of the unhealthy instances using AWS Systems Manager Session Manager.

Create a temporary Amazon EC2 instance and deploy the new application version. Log in to the EC2 instance using the SSH key. Add the instance to the Application Load Balancer to inspect application logs in real-time.

Select one of the new Amazon EC2 instances and enable EC2 instance termination protection. Gain access to the unhealthy instance using the AWS Systems Manager Session Manager.

Update the application log setting to have more verbose logging to capture more application logs. Ensure that the Amazon CloudWatch agent is installed and running on the instances.
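
For illustration, a short sketch of the first option above (suspending the Auto Scaling group's Terminate process), plus a related scale-in protection call for a single instance. The group name and instance ID are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Stop the Auto Scaling group from terminating instances while troubleshooting.
autoscaling.suspend_processes(
    AutoScalingGroupName="cms-web-asg",
    ScalingProcesses=["Terminate"],
)

# Alternatively, protect one instance from scale-in so it can still be
# inspected with Session Manager.
autoscaling.set_instance_protection(
    AutoScalingGroupName="cms-web-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,
)
```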

7
Q

A fintech startup has developed a cloud-based payment processing system that accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple, and the like. The system is deployed in AWS and uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from the users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). In a recent third-party audit, it was found that the credit card numbers are not properly encrypted and hence, their system failed the PCI DSS compliance test. You were hired by the fintech startup to solve this issue so they can release the product in the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.

In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?

A

Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.

Add a custom SSL in the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.

Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.

Create an origin access control (OAC) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.
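
As a small illustration of the Cache-Control part of these options, an origin object in S3 can be uploaded with a long max-age so CloudFront keeps serving it from edge caches. The bucket, key, and max-age value are examples.

```python
import boto3

s3 = boto3.client("s3")

# Re-upload an origin object with a long max-age so CloudFront can serve it
# from edge caches for longer (values are examples only).
s3.put_object(
    Bucket="payments-static-assets",
    Key="img/logo.png",
    Body=open("logo.png", "rb"),
    ContentType="image/png",
    CacheControl="max-age=86400",   # one day; use the longest practical value
)
```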

8
Q

An innovative Business Process Outsourcing (BPO) startup is planning to launch a scalable and cost-effective call center system using AWS. The system should be able to receive inbound calls from thousands of customers and generate user contact flows. Callers must have the capability to perform basic tasks such as changing their password or checking their balance without them having to speak to a call center agent. It should also have advanced deep learning functionalities such as automatic speech recognition (ASR) to achieve highly engaging user experiences and lifelike conversational interactions. A feature that allows the solution to query other business applications and send relevant data back to callers must also be implemented.

Which of the following is the MOST suitable solution that the Solutions Architect should implement?

A

Set up a cloud-based contact center using the Amazon Connect service. Create a conversational chatbot using Amazon Lex with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the AWS Ground Station service. Create a conversational chatbot using Amazon Alexa for Business with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Ground Station. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the AWS Elemental MediaConnect service. Create a conversational chatbot using Amazon Polly with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Elemental MediaConnect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the Amazon Connect service. Create a conversational chatbot using Amazon Comprehend with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

9
Q

A company hosts its multi-tiered web application on a fleet of Auto Scaling EC2 instances spread across two Availability Zones. The Application Load Balancer is in the public subnets and the Amazon EC2 instances are in the private subnets. After a few weeks of operations, the users are reporting that the web application is not working properly. Upon testing, the Solutions Architect found that the website is accessible and the login is successful. However, when the “find a nearby store” function is clicked on the website, the map loads only about 50% of the time when the page is refreshed. This function involves a third-party RESTful API call to a maps provider. Amazon EC2 NAT instances are used for these outbound API calls.

Which of the following options identifies the MOST likely reason for this failure and the recommended solution?

A

This error is caused by an overloaded NAT instance in one of the subnets. Scale the EC2 NAT instances to larger-sized instances to ensure that they can handle the growing traffic.

This error is caused by a failed NAT instance in one of the public subnets. Use NAT Gateways instead of EC2 NAT instances to ensure availability and scalability.

The error is caused by a failure in one of their availability zones in the VPC of the third-party provider. Contact the third-party provider support hotline and request for them to fix it.

One of the subnets in the VPC has a misconfigured Network ACL that blocks outbound traffic to the third-party provider. Update the network ACL to allow this connection and configure IAM permissions to restrict these changes in the future.

10
Q

A company runs hundreds of Windows-based Amazon EC2 instances on AWS. The Solutions Architect has been assigned to develop a workflow to ensure that the required patches of all Windows EC2 instances are properly identified and applied automatically. To maintain their system uptime requirements, it is of utmost importance to ensure that the EC2 instance reboots do not occur at the same time on all of their Windows instances. This is to avoid any loss of revenue that could be caused by any unavailability issues of their systems.

Which of the following will meet the above requirements?

A

Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create two CloudWatch Events rules which are configured to use a cron expression to automate the execution of patching for the two Patch Groups using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.

Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Set up a maintenance window and associate it with your patch group. Assign the AWS-RunPatchBaseline document as a task within your maintenance window.

Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two non-overlapping maintenance windows and associate each with a different patch group. Using Patch Group tags, register targets with specific maintenance windows and lastly, assign the AWS-RunPatchBaseline document as a task within each maintenance window which has a different processing start time.

Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Create a CloudWatch Events rule configured to use a cron expression to automate the execution of patching in a given schedule using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.
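
For reference, a rough sketch of one of the two non-overlapping maintenance windows described above: a window is created, instances are targeted by their Patch Group tag, and AWS-RunPatchBaseline is registered as a Run Command task. The names, schedule, and tag value are placeholders.

```python
import boto3

ssm = boto3.client("ssm")

# One of the two non-overlapping maintenance windows (schedule is an example).
window = ssm.create_maintenance_window(
    Name="windows-patching-group-a",
    Schedule="cron(0 2 ? * SUN *)",      # Sundays 02:00 UTC
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)

# Target the instances tagged with the first patch group.
target = ssm.register_target_with_maintenance_window(
    WindowId=window["WindowId"],
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["windows-group-a"]}],
)

# Run the predefined AWS-RunPatchBaseline document as the window task.
ssm.register_task_with_maintenance_window(
    WindowId=window["WindowId"],
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)
```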

11
Q

A company plans to decommission its legacy web application that is hosted in AWS. It is composed of an Auto Scaling group of EC2 instances and an Application Load Balancer (ALB). The new application is built on a new framework. The solutions architect has been tasked to set up a new serverless architecture that comprises AWS Lambda, API Gateway, and DynamoDB. In addition, it is required to build a CI/CD pipeline to automate the build process and to support gradual deployments.

Which is the most suitable way to build, test, and deploy the new architecture in AWS?

A

Use the AWS Serverless Application Repository to organize related components, share configuration such as memory and timeouts between resources, and deploy all related resources together as a single, versioned entity.

Use CodeCommit, CodeBuild, CodeDeploy, and CodePipeline to build the CI/CD pipeline, then use AWS Systems Manager Automation to automate the build process and support gradual deployments.

Use AWS Elastic Beanstalk to deploy the application.

Use AWS Serverless Application Model (AWS SAM) and set up AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline to build a CI/CD pipeline.
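
To illustrate the "gradual deployments" part of the AWS SAM option, the template fragment below (held as a Python string purely for illustration) uses AutoPublishAlias and a DeploymentPreference so CodeDeploy shifts traffic to each new Lambda version gradually. The function name, handler, and canary type are examples.

```python
# Sketch of the relevant part of an AWS SAM template.
SAM_TEMPLATE_SNIPPET = """
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:                         # example function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent5Minutes  # gradual deployment managed by CodeDeploy
      Events:
        Api:
          Type: Api
          Properties:
            Path: /items
            Method: get
"""
```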

12
Q

Four large banks in the country have collaborated to create a secure, simple-to-use mobile payment app that enables users to easily transfer money and pay bills without much hassle. With the new mobile payment app, anyone can easily pay another person, split the bill with their friends, or pay for their coffee in an instant with just a few taps in the app. The payment app is available on both Android and iOS devices, including a web portal that is deployed in AWS using OpsWorks Stacks and EC2 instances. It was a big success, with over 5 million users nationwide and over 1,000 transactions every hour. After one year, a new feature that will enable the users to store their credit card information in the app is ready to be added to the existing web portal. However, due to PCI-DSS compliance, the new version of the APIs and web portal cannot be deployed to the existing application stack.

How would the solutions architect deploy the new web portal for the mobile app without having any impact on 5 million users?

A

Create a new stack that contains the latest version of the web portal. Using Route 53 service, direct all the incoming traffic to the new stack at once so that all the customers get to access new features.

Deploy the new web portal using a Blue/Green deployment strategy with AWS CodeDeploy and Lambda in which the green environment represents the current web portal version serving production traffic while the blue environment is staged in running a different version of the web portal.

Deploy a new OpsWorks stack that contains a new layer with the latest web portal version. Shift traffic between the existing stack and the new stack, which run different versions of the web portal, using a Blue/Green deployment strategy with Route 53. Route only a small portion of incoming production traffic to the new application stack while maintaining the old application stack. Check the features of the new portal; once it's 100% validated, slowly increase incoming production traffic to the new stack. If there are issues on the new stack, change Route 53 to revert to the old stack.

Forcibly upgrade the existing application stack in Production to be PCI-DSS compliant. Once done, deploy the new version of the web portal on the existing application stack.
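
For illustration, one way the Route 53 traffic shift described in these options could look: two weighted records for the same name send a small share of traffic to the new stack while the rest stays on the old one. The hosted zone ID, domain, and ELB DNS names are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Send 10% of traffic to the new stack and keep 90% on the old stack.
route53.change_resource_record_sets(
    HostedZoneId="Z1D633PJN98FT9",   # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "new-stack",
                    "Weight": 10,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "new-stack-elb.us-east-1.elb.amazonaws.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "old-stack",
                    "Weight": 90,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "old-stack-elb.us-east-1.elb.amazonaws.com"}],
                },
            },
        ]
    },
)
```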

13
Q

A government agency has multiple VPCs in various AWS regions across the United States that need to be linked up to an on-premises central office network in Washington, D.C. The central office requires inter-region VPC access over a private network that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked to quickly build this network mesh and to minimize the management overhead to maintain these connections.

Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?

A

Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN.

Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.

Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.

Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.

14
Q

A company has a hybrid setup for its mobile application. The on-premises data center hosts a 3TB MySQL database server that handles the write-intensive requests from the application. The on-premises network is connected to the AWS VPC with a VPN. On AWS, the serverless application runs on AWS Lambda and API Gateway with an Amazon DynamoDB table used for saving user preferences. The application scales well as more users are using the mobile app. The user traffic is unpredictable but there is an average increase of about 20% each month. A few months into operation, the company noticed an exponential increase in costs for AWS Lambda. The Solutions Architect noticed that the Lambda execution time averages 4.5 minutes and most of that is wait time due to latency when calling the on-premises MySQL server.

Which of the following solutions should the Solutions Architect implement to reduce the overall cost?

A
  1. Migrate the on-premises MySQL database server to Amazon RDS for MySQL. Enable Multi-AZ to ensure high availability.
  2. Create a CloudFront distribution with the API Gateway as the origin to cache the API responses and reduce the Lambda invocations.
  3. Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
  4. Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity with user traffic and enable DynamoDB Accelerator to cache frequently accessed records.

  1. Provision an AWS Direct Connect connection from the on-premises data center to Amazon VPC instead of a VPN to significantly reduce the network latency to the MySQL server.
  2. Create a CloudFront distribution with the API Gateway as the origin to cache the API responses and reduce the Lambda invocations.
  3. Convert the Lambda functions to run them on Amazon EC2 Reserved Instances. Use Auto Scaling on peak time with a combination of Spot instances to further reduce costs.
  4. Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity with user traffic.

  1. Provision an AWS Direct Connect connection from the on-premises data center to Amazon VPC instead of a VPN to significantly reduce the network latency to the MySQL server.
  2. Configure caching on the mobile application to reduce the overall AWS Lambda function calls.
  3. Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
  4. Add an Amazon ElastiCache cluster in front of DynamoDB to cache the frequently accessed records.

  1. Migrate the on-premises MySQL database server to Amazon RDS for MySQL. Enable Multi-AZ to ensure high availability.
  2. Configure API caching on Amazon API Gateway to reduce the overall number of invocations to the Lambda functions.
  3. Gradually lower the timeout and memory properties of the Lambda functions without increasing the execution time.
  4. Configure Auto Scaling on Amazon DynamoDB to automatically adjust the capacity based on user traffic.
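
As a small sketch of the API caching step mentioned in the options above, stage-level caching on API Gateway can be switched on so repeated calls are served from the cache instead of invoking Lambda. The API ID, stage name, and cache size are placeholders.

```python
import boto3

apigateway = boto3.client("apigateway")

# Enable the stage cache so repeated API calls stop invoking the Lambda functions.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},   # size in GB
    ],
)
```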
15
Q

An international foreign exchange company has a serverless forex trading application that was built using AWS SAM and is hosted on AWS Serverless Application Repository. They have millions of users worldwide who use their online portal 24/7 to trade currencies. However, lately they have been receiving a lot of complaints that it takes their users a few minutes to log in to the portal, along with occasional HTTP 504 errors. As the Solutions Architect, you are tasked to optimize the system and to significantly reduce the time it takes to log in, to improve customer satisfaction.

Which of the following should you implement in order to improve the performance of the application with minimal cost? (Select TWO.)

A

Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.

Set up multiple, geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. Deploy the Lambda function in each region using AWS SAM, in order to handle the requests faster.

Use Lambda@Edge to allow your Lambda functions to customize content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users.

Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age.

Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.

16
Q

An IT consultancy company has multiple offices located in San Francisco, Frankfurt, Tokyo, and Manila. The company is using AWS Organizations to easily manage its several AWS accounts which are being used by its regional offices and subsidiaries. A new AWS account was recently added to a specific organizational unit (OU) which is responsible for the overall systems administration. The solutions architect noticed that the account is using a root-created Amazon ECS Cluster with an attached service-linked role. For regulatory purposes, the solutions architect created a custom SCP that would deny the new account from performing certain actions in relation to using ECS. However, after applying the policy, the new account could still perform the actions that it was supposed to be restricted from doing.

Which of the following is the most likely reason for this problem?

A

The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified.

There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator.

The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization.

SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.

17
Q

A multi-national tech company has multiple VPCs assigned for each of its IT departments. VPC peering has been set up whenever intercommunication is needed between the VPCs. The solutions architect has been instructed to launch a new central database server that can be accessed by the other VPCs of the company using the database.tutorialsdojo.com domain name. This server should only be resolvable and accessible within the associated VPCs since only internal applications will be using the database.

Which of the following options should the solutions architect implement to meet the above requirements?

A

Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true

Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create a CNAME record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to false and the enableDnsSupport attribute to false

Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the Elastic IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to false

Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true
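
For reference, a minimal sketch of the private hosted zone approach: the zone is created for tutorialsdojo.com, associated with a VPC, and an A record for database.tutorialsdojo.com points at the database server's private IP. The VPC ID, region, and IP address are placeholders.

```python
import time
import boto3

route53 = boto3.client("route53")

# Private hosted zone associated with one VPC (more VPCs can be associated later).
zone = route53.create_hosted_zone(
    Name="tutorialsdojo.com",
    CallerReference=str(time.time()),
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={"PrivateZone": True, "Comment": "internal-only zone"},
)

# A record pointing database.tutorialsdojo.com at the database server's private IP.
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "database.tutorialsdojo.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "10.0.1.25"}],
                },
            }
        ]
    },
)
```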

18
Q

A company is developing an online voting application for a photo competition. The infrastructure is deployed in AWS using CloudFormation. The application accepts high-quality images of each contestant and stores them in S3 then records the information about the image as well as the contestant's profile in RDS. After the competition, the CloudFormation stack is not used anymore, and to save costs, the stack can be deleted. The manager instructed the solutions architect to back up the RDS database and the S3 bucket so the data can still be used even after the CloudFormation stack is deleted.

Which of the following options is the MOST suitable solution to fulfill this requirement?

A

Set the DeletionPolicy on the S3 bucket to snapshot.

Set the DeletionPolicy on the RDS resource to snapshot and set the S3 bucket to retain.

Set the DeletionPolicy for the RDS instance to snapshot and then enable S3 bucket replication on the source bucket to a destination bucket to maintain a copy of all the S3 objects.

Set the DeletionPolicy to retain on both the RDS and S3 resource types on the CloudFormation template.
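
To illustrate the DeletionPolicy attribute these options refer to, the trimmed template fragment below (expressed as a Python dict for illustration) keeps a final snapshot of the RDS instance and retains the S3 bucket when the stack is deleted. The resource properties, stack name, and parameter reference are placeholders.

```python
import json
import boto3

# Trimmed template fragment; real resources would need complete Properties blocks.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "VotingDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",   # final DB snapshot kept on stack deletion
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "{{resolve:ssm-secure:/voting/db-password:1}}",
            },
        },
        "PhotoBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",     # bucket and objects kept on stack deletion
        },
    },
}

boto3.client("cloudformation").create_stack(
    StackName="photo-voting-app",
    TemplateBody=json.dumps(template),
)
```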

19
Q

A leading media company has a hybrid architecture where its on-premises data center is connected to AWS via a Direct Connect connection. They also have a repository of over 50 TB of digital videos and media files. These files are stored on their on-premises tape library and are used by their Media Asset Management (MAM) system. Due to the sheer size of their data, they want to implement an automated catalog system that will enable them to search their files using facial recognition. A catalog will store the faces of the people who are present in these videos including a still image of each person. Eventually, the media company would like to migrate these media files to AWS including the MAM video contents.

Which of the following options provides a solution which uses the LEAST amount of ongoing management overhead and will cause MINIMAL disruption to the existing system?

A

Request an AWS Snowball Storage Optimized device to migrate all of the media files from the on-premises library into Amazon S3. Provision a large EC2 instance and allow it to access the S3 bucket. Install an open-source facial recognition tool on the instance like OpenFace or OpenCV. Process the media files to retrieve the metadata and push this information into the MAM solution. Lastly, copy the media files to another S3 bucket.

Set up a tape gateway appliance on-premises and connect it to your AWS Storage Gateway. Configure the MAM solution to fetch the media files from the current archive and push them into the tape gateway to be stored in Amazon Glacier. Using Amazon Rekognition, build a collection from the catalog of faces. Utilize a Lambda function which invokes the Rekognition Javascript SDK to have Amazon Rekognition process the video directly from the tape gateway in real-time, retrieve the required metadata, and push the metadata into the MAM solution.

Use Amazon Kinesis Video Streams to set up a video ingestion stream and with Amazon Rekognition, build a collection of faces. Stream the media files from the MAM solution into Kinesis Video Streams and configure the Amazon Rekognition to process the streamed files. Launch a stream consumer to retrieve the required metadata, and push the metadata into the MAM solution. Finally, configure the stream to store the files in an S3 bucket.

Integrate the file system of your local data center to AWS Storage Gateway by setting up a file gateway appliance on-premises. Utilize the MAM solution to extract the media files from the current data store and send them into the file gateway. Build a collection using Amazon Rekognition by populating a catalog of faces from the processed media files. Use an AWS Lambda function to invoke Amazon Rekognition Javascript SDK to have it fetch the media file from the S3 bucket which is backing the file gateway, retrieve the needed metadata, and finally, persist the information into the MAM solution.
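
For illustration, a short sketch of the Amazon Rekognition piece common to several options above: a face collection is created and faces from a still image in S3 are indexed into it. The collection ID, bucket, and object key are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Collection that holds the face metadata for the catalog.
rekognition.create_collection(CollectionId="mam-face-catalog")

# Index the faces found in a still image stored in S3.
response = rekognition.index_faces(
    CollectionId="mam-face-catalog",
    Image={"S3Object": {"Bucket": "mam-media-bucket", "Name": "stills/person-001.jpg"}},
    ExternalImageId="person-001",
    DetectionAttributes=["DEFAULT"],
)
for record in response["FaceRecords"]:
    print(record["Face"]["FaceId"], record["Face"]["ExternalImageId"])
```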

20
Q

A multinational financial firm plans to do a multi-regional deployment of its cryptocurrency trading application that’s being heavily used in the US and in Europe. The containerized application uses Kubernetes and has Amazon DynamoDB Global Tables as a centralized database to store and sync the data from two regions.

The architecture has distributed computing resources with several public-facing Application Load Balancers (ALBs). The Network team of the firm manages the public DNS internally and wishes to make the application available through an apex domain for easier access. S3 Multi-Region Access Points are also used for object storage workloads and hosting static assets.

Which is the MOST operationally efficient solution that the Solutions Architect should implement to meet the above requirements?

A

Set up an AWS Transit Gateway with a multicast domain that targets specific ALBs on the required AWS Regions. Create a public record in Amazon Route 53 using the static IP address of the AWS Transit Gateway.

Launch an AWS Transit Gateway that targets specific ALBs on the required AWS Regions. Create a CNAME record in Amazon Route 53 that directly points your custom domain name to the DNS name assigned to the AWS Transit Gateway.

Set up an AWS Global Accelerator, which has several endpoint groups that target specific endpoints and ALBs on the required AWS Regions. Create a public alias record in Amazon Route 53 that points your custom domain name to the DNS name assigned to your accelerator.

Launch an AWS Global Accelerator with several endpoint groups that target the ALBs in all the relevant AWS Regions. Create an Amazon Route 53 Resolver Inbound Endpoint that points your custom domain name to the CNAME assigned to your accelerator.

21
Q

A company uses Lightweight Directory Access Protocol (LDAP) for its employee authentication and authorization. The company plans to release a mobile app that can be installed on employees' smartphones. The mobile application will allow users to have federated access to AWS resources. Due to strict security and compliance requirements, the mobile application must use a custom-built solution for user authentication. It must also use IAM roles for granting user permissions to AWS resources. The Solutions Architect was tasked to create a solution that meets these requirements.

Which of the following options should the Solutions Architect implement to enable authentication and authorization for the application? (Select TWO.)

A

Build a custom OpenID Connect-compatible solution in combination with AWS IAM Identity Center to create authentication and authorization functionality for the application.

Build a custom LDAP connector using Amazon API Gateway with AWS Lambda function for user authentication. Use Amazon DynamoDB to store user authorization tokens. Write another Lambda function that will validate user authorization requests based on the token stored on DynamoDB.

Build a custom SAML-compatible solution to handle authentication and authorization. Configure the solution to use LDAP for user authentication and use SAML assertion to perform authorization to the IAM identity provider.

Build a custom OpenID Connect-compatible solution for the user authentication functionality. Use Amazon Cognito Identity Pools for authorizing access to AWS resources.

Build a custom SAML-compatible solution for user authentication. Leverage AWS IAM Identity Center for authorizing access to AWS resources.
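
As one possible shape of the Cognito identity pool piece mentioned above (using developer-authenticated identities as an example of plugging in a custom, LDAP-backed authenticator), the backend vouches for a user and exchanges the internal user ID for a token the mobile app can trade for role-scoped AWS credentials. The pool name, provider name, and user ID are placeholders.

```python
import boto3

cognito = boto3.client("cognito-identity")

# Identity pool configured for developer-authenticated identities; the custom
# LDAP-backed auth service vouches for users under this provider name.
pool = cognito.create_identity_pool(
    IdentityPoolName="corp_mobile_app",
    AllowUnauthenticatedIdentities=False,
    DeveloperProviderName="login.corp.example",
)

# After the backend validates the user against LDAP, it exchanges the user's
# internal ID for an OpenID token scoped to this identity pool.
token = cognito.get_open_id_token_for_developer_identity(
    IdentityPoolId=pool["IdentityPoolId"],
    Logins={"login.corp.example": "employee-12345"},
    TokenDuration=900,
)

# The app then trades this token for temporary credentials tied to an IAM role
# (e.g. via GetCredentialsForIdentity).
print(token["IdentityId"], token["Token"][:20], "...")
```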

22
Q

A company is planning to build its new customer relationship management (CRM) portal in AWS. The application architecture will be using containerized microservices hosted on an Amazon ECS cluster. A Solutions Architect has been tasked to set up the architecture and comply with the AWS security best practice of granting the least privilege. The architecture should also support the use of security groups and standard network monitoring tools at the container level to comply with the company's strict IT security policies.

Which of the following provides the MOST secure configuration for the CRM portal?

A

Use the awsvpc network mode in the task definition in your Amazon ECS Cluster. Attach security groups to the ECS tasks then pass IAM credentials into the container at launch time to access other AWS resources.

Use the awsvpc network mode in the task definition in your Amazon ECS Cluster. Attach security groups to the ECS tasks then use IAM roles for tasks to access other resources.

Use AWS App Runner to run the containerized application instead to improve security and reduce operational overhead. Select VPC and security groups accordingly for deployment. Add IAM credentials to the environment variables when launching the service.

Use the bridge network mode in the task definition in your Amazon ECS Cluster. Attach security groups to Amazon EC2 instances then use IAM roles for EC2 instances to access other resources.
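
For reference, a minimal sketch of an ECS task definition matching the awsvpc option: each task gets its own elastic network interface (so security groups apply at the task level) and an IAM task role is used instead of passing credentials into the container. The ARNs, names, and image URI are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="crm-portal",
    networkMode="awsvpc",                  # per-task ENI; security groups per task
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    taskRoleArn="arn:aws:iam::123456789012:role/crm-task-role",              # least-privilege task role
    executionRoleArn="arn:aws:iam::123456789012:role/crm-task-execution-role",
    containerDefinitions=[
        {
            "name": "crm-api",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/crm-api:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```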

23
Q

A company is migrating an interactive car registration web system hosted on its on-premises network to AWS Cloud. The current architecture of the system consists of a single NGINX web server and a MySQL database running on a Fedora server, which both reside in their on-premises data center. For the new cloud architecture, a load balancer must be used to evenly distribute the incoming traffic to the application servers. Route 53 must be used for both domain registration and domain management.

In this scenario, what would be the most efficient way to transfer the web application to AWS?

A
  1. Use the AWS Application Migration Service (MGN) to create an EC2 AMI of the NGINX web server.
  2. Configure auto-scaling to launch in two Availability Zones.
  3. Launch a multi-AZ MySQL Amazon RDS instance in one availability zone only.
  4. Import the data into Amazon RDS from the latest MySQL backup.
  5. Create an ELB to front your web servers.
  6. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.

  1. Launch two NGINX EC2 instances in two Availability Zones.
  2. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository.
  3. Migrate the database using the AWS Database Migration Service.
  4. Create an ELB to front your web servers.
  5. Use Route 53 and create an alias A record pointing to the ELB.

  1. Use the AWS Application Discovery Service to migrate the NGINX web server.
  2. Configure Auto Scaling to launch two web servers in two Availability Zones.
  3. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in one Availability Zone only.
  4. Import the data into Amazon RDS from the latest MySQL backup.
  5. Use Amazon Route 53 to create a private hosted zone and point a non-alias A record to the ELB.

  1. Export web files to an Amazon S3 bucket in one Availability Zone using AWS Migration Hub.
  2. Run the website directly out of Amazon S3.
  3. Migrate the database using the AWS Database Migration Service and AWS Schema Conversion Tool (AWS SCT).
  4. Use Route 53 and create an alias record pointing to the ELB.

24
Q

A media company uses the AWS Cloud to process and convert its video collection. An Auto Scaling group of Amazon EC2 instances processes the videos and scales based on the number of messages in an Amazon Simple Queue Service (SQS) queue. These SQS messages contain links to the videos, each taking about 20-40 minutes to process.

The management has set a redrive policy on the SQS queue to send failed messages to a dead-letter queue. The visibility timeout has been set to 1 hour, and the maxReceiveCount has been set to 1. When there are messages on the dead-letter queue, an Amazon CloudWatch alarm has been set up to notify the development team.

Within a few days of operation, the dead-letter queue received several videos that failed to process. The developers did not find any operational errors in the application logs and confirmed that no videos exceeded the expected processing time. Upon examining the CloudTrail logs, the team noted that the application was making repeated ReceiveMessage API calls in quick succession for specific videos, indicating retry attempts.

Which of the following options should the solutions architect implement to help solve the above problem?

A

The videos were not processed because the Amazon EC2 scale-up process takes too long. Set a minimum number of EC2 instances on the Auto Scaling group to solve this.

Update the visibility timeout for the Amazon SQS queue to 2 hours to solve this problem.

Configure a higher delivery delay setting on the Amazon SQS queue. This will give the consumers more time to pick up the messages from the SQS queue.

Reconfigure the SQS redrive policy and set maxReceiveCount to 10. This will allow the consumers to retry the messages before sending them to the dead-letter queue.
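
For illustration, a minimal sketch of how the redrive policy and visibility timeout discussed above could be adjusted with boto3; the queue URL and dead-letter queue ARN are placeholders.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Allow several receive attempts before a message lands in the dead-letter queue,
# and keep the visibility timeout comfortably above the 20-40 minute job length.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs",
    Attributes={
        "VisibilityTimeout": "3600",   # 1 hour, longer than the longest job
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:video-jobs-dlq",
            "maxReceiveCount": "10",
        }),
    },
)
```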