SA Mastery 1 Flashcards
(35 cards)
An application consists of multiple Amazon EC2 instances in private subnets in different availability zones. The application uses a single NAT Gateway for downloading software patches from the Internet to the instances. There is a requirement to protect the application from a single point of failure when the NAT Gateway encounters a failure or if its availability zone goes down.
How should the Solutions Architect redesign the architecture to be more highly available and cost-effective?
A) Create three NAT Gateways in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone.
B) Create two NAT Gateways in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone.
C) Create a NAT Gateway in each availability zone. Configure the route table in each public subnet to ensure that instances use the NAT Gateway in the same availability zone.
D) Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone.
D) Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone.
If you have resources in multiple Availability Zones and they share one NAT gateway, and if the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose Internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
Hence, the correct answer is: Create a NAT Gateway in each availability zone. Configure the route table in each private subnet to ensure that instances use the NAT Gateway in the same availability zone.
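To make the per-AZ pattern concrete, here is a minimal boto3 sketch; every subnet, route table, and Elastic IP allocation ID below is a hypothetical placeholder:

import boto3

ec2 = boto3.client("ec2")

# One NAT Gateway per AZ, each in that AZ's public subnet.
az_config = {
    "us-east-1a": {"public_subnet": "subnet-aaa111", "private_rtb": "rtb-aaa111", "eip_alloc": "eipalloc-aaa111"},
    "us-east-1b": {"public_subnet": "subnet-bbb222", "private_rtb": "rtb-bbb222", "eip_alloc": "eipalloc-bbb222"},
}

for az, cfg in az_config.items():
    nat = ec2.create_nat_gateway(SubnetId=cfg["public_subnet"], AllocationId=cfg["eip_alloc"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
    # Point each private subnet's default route at the NAT Gateway in the SAME AZ.
    ec2.create_route(
        RouteTableId=cfg["private_rtb"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )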
A company recently launched an e-commerce application that is running in eu-east-2 region, which strictly requires six EC2 instances running at all times. In that region, there are 3 Availability Zones (AZ) that you can use – eu-east-2a, eu-east-2b, and eu-east-2c.
Which of the following deployments provide 100% fault tolerance if any single AZ in the region becomes unavailable? (Select TWO.)
A) eu-east-2a with three EC2 instances, eu-east-2b with three EC2 instances, and eu-east-2c with three EC2 instances
B) eu-east-2a with two EC2 instances, eu-east-2b with four EC2 instances, and eu-east-2c with two EC2 instances
C) eu-east-2a with four EC2 instances, eu-east-2b with two EC2 instances, and eu-east-2c with two EC2 instances
D) eu-east-2a with two EC2 instances, eu-east-2b with two EC2 instances, and eu-east-2c with two EC2 instances
E) eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2 instances
A) eu-east-2a with three EC2 instances, eu-east-2b with three EC2 instances, and eu-east-2c with three EC2 instances
E) eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2 instances
Fault Tolerance is the ability of a system to remain in operation even if some of the components used to build the system fail. In AWS, this means that in the event of server fault or system failures, the number of running EC2 instances should not fall below the minimum number of instances required by the system for it to work properly. So if the application requires a minimum of 6 instances, there should be at least 6 instances running in case there is an outage in one of the Availability Zones or if there are server issues.
In this scenario, for each option you have to simulate a situation where one Availability Zone becomes unavailable and check whether the deployment still has 6 running instances.
Hence, the correct answers are: eu-east-2a with six EC2 instances, eu-east-2b with six EC2 instances, and eu-east-2c with no EC2 instances and eu-east-2a with three EC2 instances, eu-east-2b with three EC2 instances, and eu-east-2c with three EC2 instances because even if one of the availability zones were to go down, there would still be 6 active instances.
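A quick way to verify this is to simulate the loss of each Availability Zone for every option and check that at least six instances remain, for example with a short Python snippet:

deployments = {
    "A": {"eu-east-2a": 3, "eu-east-2b": 3, "eu-east-2c": 3},
    "B": {"eu-east-2a": 2, "eu-east-2b": 4, "eu-east-2c": 2},
    "C": {"eu-east-2a": 4, "eu-east-2b": 2, "eu-east-2c": 2},
    "D": {"eu-east-2a": 2, "eu-east-2b": 2, "eu-east-2c": 2},
    "E": {"eu-east-2a": 6, "eu-east-2b": 6, "eu-east-2c": 0},
}

for option, azs in deployments.items():
    # Instances left when the single worst AZ goes down
    worst_case = min(sum(azs.values()) - count for count in azs.values())
    print(option, "fault tolerant" if worst_case >= 6 else "not fault tolerant")

# Only options A and E keep at least six instances running after any single-AZ outage.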
Due to the large volume of query requests, the database performance of an online reporting application significantly slowed down. The Solutions Architect is trying to convince her client to use Amazon RDS Read Replica for their application instead of setting up a Multi-AZ Deployments configuration.
What are two benefits of using Read Replicas over Multi-AZ that the Architect should point out? (Select TWO.)
A) Allows both read and write operations on the read replica to complement the primary database.
B) Provides synchronous replication and automatic failover in the case of Availability Zone service failures.
C) It elastically scales out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
D) Provides asynchronous replication and improves the performance of the primary database by taking read-heavy database workloads from it.
E) It enhances the read performance of your primary database by increasing its IOPS and accelerates its query processing via AWS Global Accelerator.
C) It elastically scales out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
D) Provides asynchronous replication and improves the performance of the primary database by taking read-heavy database workloads from it.
The option that says: Allows both read and write operations on the read replica to complement the primary database is incorrect, as Read Replicas are primarily used to offload read-only operations from the primary database instance. By default, you can’t do a write operation to your Read Replica.
The option that says: Provides synchronous replication and automatic failover in the case of Availability Zone service failures is incorrect as this is a benefit of Multi-AZ and not of a Read Replica. Moreover, Read Replicas provide an asynchronous type of replication and not synchronous replication.
The option that says: It enhances the read performance of your primary database by increasing its IOPS and accelerates its query processing via AWS Global Accelerator is incorrect because Read Replicas do not upgrade or increase the read throughput of the primary DB instance per se; they provide a way for your application to fetch data from replicas. In this way, they improve the overall performance of your entire database tier (and not just the primary DB instance). They do not increase the IOPS, nor do they use AWS Global Accelerator to accelerate the compute capacity of your primary database. AWS Global Accelerator is a networking service unrelated to RDS that directs user traffic to the application endpoint nearest the client, reducing internet latency and jitter. It simply routes the traffic to the closest edge location via Anycast.
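As an illustration of the correct approach, a read replica can be created with a single boto3 call; the instance identifiers and class below are hypothetical:

import boto3

rds = boto3.client("rds")

# Create a read replica (asynchronous replication) to offload read-heavy reporting queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica-1",
    SourceDBInstanceIdentifier="reporting-primary",
    DBInstanceClass="db.r6g.large",
)

# The application then sends read-only queries to the replica's endpoint,
# while writes continue to go to the primary DB instance.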
A solutions architect is managing an application that runs on a Windows EC2 instance with an attached Amazon FSx for Windows File Server. To save cost, management has decided to stop the instance during off-hours and restart it only when needed. It has been observed that the application takes several minutes to become fully operational which impacts productivity.
How can the solutions architect speed up the instance’s loading time without driving the cost up?
A) Migrate the application to an EC2 instance with hibernation enabled.
B) Enable the hibernation mode on the EC2 instance.
C) Migrate the application to a Linux-based EC2 instance.
D) Disable the Instance Metadata Service to reduce the things that need to be loaded at startup.
A) Migrate the application to an EC2 instance with hibernation enabled.
The option that says: Migrate the application to a Linux-based EC2 instance is incorrect. This does not guarantee a faster load time. Moreover, it is a risky thing to do as the application might have dependencies tied to the previous operating system that won’t work on a different OS.
The option that says: Enable the hibernation mode on the EC2 instance is incorrect. It is not possible to enable or disable hibernation for an instance after it has been launched.
The option that says: Disable the instance metadata service to reduce the things that need to be loaded at startup is incorrect. This won’t affect the startup load time at all. The Instance Metadata Service is just a service that you can access over the network from within an EC2 instance.
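A short boto3 sketch of the hibernation workflow; the AMI ID and volume size are placeholders, and the root volume must be encrypted and large enough to hold the instance's RAM contents:

import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch; it cannot be turned on for an existing instance.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp3", "Encrypted": True},
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

# During off-hours, hibernate instead of a plain stop so the in-memory state is preserved
# and the application resumes quickly when the instance is started again.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)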
An online stock trading application stores financial data in an Amazon S3 bucket, with a lifecycle policy that moves older data to Glacier every month. A strict compliance requirement mandates that a surprise audit can occur at any time, and the required data must be retrievable in under 15 minutes under all circumstances. The manager has instructed that retrieval capacity be available when needed and should support up to 150 MB/s of retrieval throughput.
Which combination of actions should the Solutions Architect take to meet these requirements? (Select TWO.)
A) Use Expedited Retrieval to access the financial data.
B) Specify a range, or portion, of the financial data archive to retrieve.
C) Use Bulk Retrieval to access the financial data.
D) Purchase provisioned retrieval capacity.
E) Use Standard Retrieval for accessing the financial data.
A) Use Expedited Retrieval to access the financial data.
D) Purchase provisioned retrieval capacity.
The option that says: Use Standard Retrieval for accessing the financial data is incorrect because standard retrieval typically takes 3–5 hours, which does not meet the audit’s rapid retrieval time requirement.
The option that says: Use Bulk Retrieval to access the financial data is incorrect because bulk retrievals typically complete within 5–12 hours; hence, this does not satisfy the requirement of retrieving the data within 15 minutes. The provisioned capacity option is also not compatible with Bulk retrievals.
The option that says: Specify a range, or portion, of the financial data archive to retrieve is incorrect because using ranged archive retrievals is not enough to meet the requirement of retrieving the whole archive in the given timeframe. In addition, it does not primarily provide additional retrieval capacity which is what the provisioned capacity option can offer.
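A minimal boto3 sketch of the two correct actions: purchasing provisioned retrieval capacity and issuing an Expedited restore of an archived S3 object (the bucket and key names are hypothetical):

import boto3

glacier = boto3.client("glacier")
s3 = boto3.client("s3")

# Purchase provisioned capacity so Expedited retrievals are guaranteed to be available.
glacier.purchase_provisioned_capacity(accountId="-")

# Restore an object that the lifecycle policy transitioned to Glacier, using the Expedited tier.
s3.restore_object(
    Bucket="stock-trading-archive",
    Key="reports/2023-01-ledger.csv",
    RestoreRequest={
        "Days": 1,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)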
A multinational corporate and investment bank regularly processes steady workloads of accruals, loan interests, and other critical financial calculations every night from 10 PM to 3 AM on their on-premises data center for their corporate clients. Once the process is done, the results are then uploaded to the Oracle General Ledger which means that the processing should not be delayed or interrupted. The CTO has decided to move its IT infrastructure to AWS to save costs. The company needs to reserve compute capacity in a specific Availability Zone to properly run their workloads.
As the Senior Solutions Architect, how can you implement a cost-effective architecture in AWS for their financial system?
A) Use On-Demand Capacity Reservations, which provide compute capacity that is always available on the specified recurring schedule.
B) Use Regional Reserved Instances to reserve capacity on a specific Availability Zone and lower the operating cost through its billing discounts.
C) Use Dedicated Hosts, which provide a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs.
D) Use On-Demand EC2 instances which allows you to pay for the instances that you launch and use by the second. Reserve compute capacity in a specific Availability Zone to avoid any interruption.
A) Use On-Demand Capacity Reservations, which provide compute capacity that is always available on the specified recurring schedule.
When you create a Capacity Reservation, you specify:
– The Availability Zone in which to reserve the capacity
– The number of instances for which to reserve capacity
– The instance attributes, including the instance type, tenancy, and platform/OS
Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don’t have any running instances that match the attributes of the Capacity Reservation, it remains unused until you launch an instance with matching attributes.
In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations to benefit from billing discounts. AWS automatically applies your discount when the attributes of a Capacity Reservation match the attributes of a Savings Plan or Regional Reserved Instance.
In this scenario, the company only runs the process for 5 hours (from 10 PM to 3 AM) every night. By using Capacity Reservations, they not only ensure availability but can also implement automation to procure and cancel capacity, as well as terminate instances once they are no longer needed. This approach prevents them from incurring unnecessary charges, ensuring they are billed only for the resources they actually use.
Hence, the correct answer is to use On-Demand Capacity Reservations, which provide compute capacity that is always available on the specified recurring schedule.
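A hedged boto3 sketch of how the nightly window could be handled with Capacity Reservations; the instance type, AZ, and count are hypothetical, and the create/cancel calls would typically be automated on a schedule:

import boto3

ec2 = boto3.client("ec2")

# Reserve capacity in the required AZ shortly before the 10 PM batch window.
reservation = ec2.create_capacity_reservation(
    InstanceType="c5.2xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="ap-southeast-1a",
    InstanceCount=10,
    EndDateType="unlimited",
)
reservation_id = reservation["CapacityReservation"]["CapacityReservationId"]

# ... launch instances with matching attributes and run the 10 PM to 3 AM workload ...

# Cancel the reservation after 3 AM so the company stops paying for idle capacity.
ec2.cancel_capacity_reservation(CapacityReservationId=reservation_id)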
A company needs to implement a solution that will process real-time streaming data of its users across the globe. This will enable them to track and analyze globally-distributed user activity on their website and mobile applications, including clickstream analysis. The solution should process the data in close geographical proximity to their users and respond to user requests at low latencies.
Which of the following is the most suitable solution for this scenario?
A) Use a CloudFront web distribution and Route 53 with a latency-based routing policy, in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
B) Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Amazon Athena and durably store the results to an Amazon S3 bucket.
C) Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
D) Use a CloudFront web distribution and Route 53 with a Geoproximity routing policy in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
C) Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
By using Lambda@Edge and Kinesis together, you can process real-time streaming data so that you can track and analyze globally-distributed user activity on your website and mobile applications, including clickstream analysis. Hence, the correct answer in this scenario is the option that says: Integrate CloudFront with Lambda@Edge in order to process the data in close geographical proximity to users and respond to user requests at low latencies. Process real-time streaming data using Kinesis and durably store the results to an Amazon S3 bucket.
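To illustrate the pattern, here is a hypothetical Lambda@Edge viewer-request handler (deployed in us-east-1 and associated with the CloudFront distribution) that forwards clickstream events to a Kinesis data stream; the stream name is a placeholder:

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    click_event = {
        "uri": request["uri"],
        "clientIp": request["clientIp"],
    }
    # Push the event into the stream for real-time clickstream analysis.
    kinesis.put_record(
        StreamName="clickstream-analytics",
        Data=json.dumps(click_event).encode("utf-8"),
        PartitionKey=request["clientIp"],
    )
    # Return the request unchanged so CloudFront continues processing it.
    return request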
A company has two On-Demand EC2 instances inside a Virtual Private Cloud in the same Availability Zone, but they are deployed to different subnets. One EC2 instance is running a database and the other EC2 instance is running a web application that connects to the database. You need to ensure that these two instances can communicate with each other for the system to work properly.
What are the things you have to check so that these EC2 instances can communicate inside the VPC? (Select TWO.)
A) Check the Network ACL if it allows communication between the two subnets.
B) Check if both instances are the same instance class.
C) Ensure that the EC2 instances are in the same Placement Group.
D) Check if the default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate.
E) Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
A) Check the Network ACL if it allows communication between the two subnets.
E) Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
First, the Network ACL should be properly set to allow communication between the two subnets. The security group should also be properly configured so that your web server can communicate with the database server.
Hence, these are the correct answers:
Check if all security groups are set to allow the application host to communicate to the database on the right port and protocol.
Check the Network ACL if it allows communication between the two subnets.
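For example, the database's security group can reference the web tier's security group directly, so only the application host can reach the database port; the group IDs and MySQL port below are placeholders:

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000000a",   # database instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web000000000000b"}],  # web application's security group
    }],
)

# The network ACLs on both subnets must also allow this traffic (plus the return
# traffic on ephemeral ports), since network ACLs are stateless.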
A car dealership website hosted in Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system.
Which of the following options can satisfy the given requirement?
A) Create a native function or a stored procedure that invokes an AWS Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume.
B) Create an RDS event subscription and send the notifications to Amazon SQS. Configure the SQS queues to fan out the event notifications to multiple Amazon SNS topics. Process the data using AWS Lambda functions.
C) Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the processing system.
D) Create an RDS event subscription and send the notifications to Amazon SNS. Configure the SNS topic to fan out the event notifications to multiple Amazon SQS queues. Process the data using AWS Lambda functions.
A) Create a native function or a stored procedure that invokes an AWS Lambda function. Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume.
The option that says: Create an RDS event subscription and send the notifications to Amazon SQS. Configure the SQS queues to fan out the event notifications to multiple Amazon SNS topics. Process the data using AWS Lambda functions is incorrect because RDS event subscriptions typically notify about operational changes rather than data modifications. This method does not capture database modifications like INSERT, DELETE, or UPDATE.
The option that says: Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the processing system is incorrect because RDS event subscriptions primarily focus on operational-level changes rather than capturing direct data modifications.
The option that says: Create an RDS event subscription and send the notifications to Amazon SNS. Configure the SNS topic to fan out the event notifications to multiple Amazon SQS queues. Process the data using AWS Lambda functions is incorrect because RDS event subscriptions only track infrastructure-related events and not actual database changes.
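A minimal sketch of the Lambda side of the correct option, assuming the Aurora MySQL lambda_async stored procedure (or a native function) invokes it with the sold vehicle's record; the queue URL is hypothetical:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/sold-vehicles"

def handler(event, context):
    # 'event' is the payload passed from the database when a vehicle is marked as sold.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event),
    )
    return {"status": "queued"}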
An enterprise company uses multiple AWS accounts for different business units. The AWS accounts are set up and consolidated into an organization via the AWS Organizations service.
The company sites are distributed globally across different countries and regions. There is a need to centrally manage security group rules across the organization to allow CIDR ranges of new office locations and remove old CIDR ranges as needed.
What design should the solutions architect propose to meet the requirements in the MOST cost-effective manner?
A) Enable Route 53 Application Recovery Controller (Route 53 ARC) in the AWS account. Create a Recovery Control Panel to define the routing control states and configurations of the CIDR ranges of each business unit or company site. Define routing control states within the RCP to indicate how traffic should be routed. Enable Zonal Shift functionality in Route 53 ARC to shift traffic from one set of resources to another with the defined CIDR ranges.
B) Leverage AWS Firewall Manager to create a common security group policy. Select the security groups previously created as the primary group in the policy.
C) Build an AWS-managed prefix list in Amazon VPC containing the CIDR blocks to allow or block. Enable Security Hub for the AWS accounts and define a policy that specifies the desired security group updates. Create a Lambda function to call the modify_managed_prefix_list API that can be triggered by Amazon EventBridge when updating the CIDR blocks.
D) Provision a VPC customer-managed prefix list using the AWS CLI or the Amazon VPC console and add the CIDR blocks to be included in the list. Share the prefix list ID to other AWS accounts using the AWS RAM (Resource Access Manager) API, or the AWS RAM Console. Add the prefix list to the security groups used across the organization.
D) Provision a VPC customer-managed prefix list using the AWS CLI or the Amazon VPC console and add the CIDR blocks to be included in the list. Share the prefix list ID to other AWS accounts using the AWS RAM (Resource Access Manager) API, or the AWS RAM Console. Add the prefix list to the security groups used across the organization.
There are two types of prefix lists:
Customer-managed prefix lists — Sets of IP address ranges that you define and manage. You can share your prefix list with other AWS accounts, enabling those accounts to reference the prefix list in their own resources.
AWS-managed prefix lists — Sets of IP address ranges for AWS services. You cannot create, modify, share, or delete an AWS-managed prefix list.
AWS Resource Access Manager (AWS RAM) allows the sharing of resources, including customer-managed prefix lists, by creating a resource share. Access to shared resources can be limited to specific AWS Regions and principals, and IAM permissions can be attached, keeping resource sharing secure. There is no additional cost associated with using a customer-managed prefix list, other than the standard charges for Amazon VPC.
Hence, the correct answer is: Provision a VPC customer-managed prefix list using the AWS CLI or the Amazon VPC console and add the CIDR blocks to be included in the list. Share the prefix list ID to other AWS accounts using the AWS RAM (Resource Access Manager) API, or the AWS RAM Console. Add the prefix list to the security groups used across the organization.
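A hedged boto3 sketch of the whole flow; the CIDR ranges, organization ARN, and security group ID are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1. Create a customer-managed prefix list holding the office CIDR ranges.
pl = ec2.create_managed_prefix_list(
    PrefixListName="corp-office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Singapore office"},
    ],
)
prefix_list_id = pl["PrefixList"]["PrefixListId"]
prefix_list_arn = pl["PrefixList"]["PrefixListArn"]

# 2. Share the prefix list with the organization through AWS RAM.
ram.create_resource_share(
    name="office-cidr-share",
    resourceArns=[prefix_list_arn],
    principals=["arn:aws:organizations::123456789012:organization/o-exampleorgid"],
)

# 3. Reference the prefix list in security group rules instead of raw CIDR blocks,
#    so adding or removing an office only requires updating the prefix list.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list_id}],
    }],
)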
A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication.
Which of the following options can satisfy the given requirement?
A) Create a Network File System (NFS) file share using AWS Storage Gateway.
B) Create a file system using Amazon EFS and join it to an Active Directory domain.
C) Launch an Amazon EC2 Windows Server to mount a new Amazon S3 bucket as a file volume.
D) Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
D) Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices. Thousands of compute instances and devices can access a file system concurrently.
Amazon FSx works with Microsoft Active Directory to integrate with your existing Microsoft Windows environments. You have two options to provide user authentication and access control for your file system: AWS Managed Microsoft Active Directory and Self-managed Microsoft Active Directory.
Take note that after you create an Active Directory configuration for a file system, you can’t change that configuration. However, you can create a new file system from a backup and change the Active Directory integration configuration for that file system. These configurations allow the users in your domain to use their existing identity to access the Amazon FSx file system and to control access to individual files and folders.
Hence, the correct answer is: Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
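A minimal boto3 sketch of creating the file system and joining it to an AWS Managed Microsoft AD; the subnet, security group, and directory IDs are hypothetical:

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                       # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0aaa111", "subnet-0bbb222"],
    SecurityGroupIds=["sg-0fsx00000000000aa"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # AWS Managed Microsoft AD directory
        "DeploymentType": "MULTI_AZ_1",        # highly available across two AZs
        "PreferredSubnetId": "subnet-0aaa111",
        "ThroughputCapacity": 32,
    },
)

# The SharePoint server then maps the SMB share from the file system's DNS name,
# and existing AD users keep their identities for access control.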
A multinational manufacturing company has multiple accounts in AWS to separate their various departments such as finance, human resources, engineering and many others. There is a requirement to ensure that certain access to services and actions are properly controlled to comply with the security policy of the company.
As the Solutions Architect, which is the most suitable way to set up the multi-account AWS environment of the company?
A) Set up a common IAM policy that can be applied across all AWS accounts.
B) Connect all departments by setting up cross-account access to each of the AWS accounts of the company. Create and attach IAM policies to your resources based on their respective departments to control access.
C) Use AWS Organizations and Service Control Policies to control services on each account.
D) Provide access to externally authenticated users via Identity Federation. Set up an IAM role to specify permissions for users from each department whose identity is federated from your organization or a third-party identity provider.
C) Use AWS Organizations and Service Control Policies to control services on each account.
Using AWS Organizations and Service Control Policies to control services on each account is the correct answer.
AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, and apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts without requiring custom scripts and manual processes. It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.
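As a sketch of how an SCP is created and attached with boto3 (the policy content and OU ID are illustrative assumptions, not a recommended baseline):

import json
import boto3

org = boto3.client("organizations")

# Example SCP that denies everything outside an approved list of services.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*", "iam:*"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Restrict departments to approved services",
    Name="approved-services-only",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to the organizational unit that holds a department's accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-finance000",   # hypothetical OU ID
)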
An animation company conducts storyboard experiments using a Linux-based rendering engine and an editing application running on Windows. The rendering engine saves its output on a Network File System (NFS) share, while the editing application uses a Server Message Block (SMB) file system.
To share files between these applications, the company synchronizes data across the file systems. This method doubles their storage needs and causes difficulty in data management. The company wants to migrate its environment to AWS to solve these issues.
How can the company meet the requirements with the LEAST amount of changes?
A) Use AWS Lambda for both applications. Configure both applications to save data in an Amazon SQS queue.
B) Use Linux Amazon EC2 instances for the rendering engine and Windows EC2 instances for the editing application. Set up Amazon FSx for NetApp ONTAP for storage.
C) Use Amazon Elastic Container Service (ECS) for both applications. Set up Amazon FSx for Lustre for storage.
D) Use Linux Amazon EC2 instances for the rendering engine and Windows EC2 instances for the editing application. Set up Amazon Elastic File System (EFS) and Amazon FSx for Windows File Server for storage.
B) Use Linux Amazon EC2 instances for the rendering engine and Windows EC2 instances for the editing application. Set up Amazon FSx for NetApp ONTAP for storage.
The option that says: Use Amazon Elastic Container Service (ECS) for both applications. Set up Amazon FSx for Lustre for storage is incorrect because using ECS would require some code change to containerize the workloads. Moreover, Amazon FSx for Lustre only works for Linux-based instances and is not compatible with Windows. Hence, you would have to reconfigure the editing application to work with a Lustre file system.
The option that says: Use AWS Lambda for both applications. Configure both applications to save data in an Amazon SQS queue is incorrect because, unlike the rendering engine, the editing application cannot be modeled as an event-driven system, so using AWS Lambda as a single solution for both workloads is not viable. Additionally, SQS is not suitable for data sharing between applications due to its design and intended use case. SQS operates on a message-based model where messages are sent to a queue, stored temporarily, and then consumed by another component. Building around it as storage for both applications is impractical and would require extensive customization and workarounds.
The option that says: Use Linux Amazon EC2 instances for the rendering engine and Windows EC2 instances for the editing application. Set up Amazon Elastic File System (EFS) and Amazon FSx for Windows File Server for storage is incorrect. This approach would still involve managing two separate storage systems for NFS and SMB, which is no different than the original setup. Also, it would require developing a solution to enable the applications to access each other’s output.
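A minimal sketch of provisioning the shared storage layer with boto3, assuming hypothetical subnet IDs; a storage virtual machine (SVM) then exposes the same volumes over both NFS and SMB:

import boto3

fsx = boto3.client("fsx")

fs = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                      # GiB
    SubnetIds=["subnet-0aaa111", "subnet-0bbb222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,
        "PreferredSubnetId": "subnet-0aaa111",
    },
)

# The SVM provides the NFS endpoint for the Linux render nodes and the SMB endpoint
# for the Windows editing hosts, removing the need to synchronize two copies of the data.
fsx.create_storage_virtual_machine(
    FileSystemId=fs["FileSystem"]["FileSystemId"],
    Name="media-svm",
)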
A company is hosting an application on EC2 instances that regularly pushes data to and fetches data from Amazon S3. Due to a change in compliance, the instances need to be moved to a private subnet. Along with this change, the company wants to lower the data transfer costs by configuring its AWS resources.
How can this be accomplished in the MOST cost-efficient manner?
A) Create an Amazon S3 interface endpoint to enable a connection between the instances and Amazon S3.
B) Set up a NAT Gateway in the public subnet to connect to Amazon S3.
C) Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
D) Set up an AWS Transit Gateway to access Amazon S3.
C) Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
VPC endpoints for Amazon S3 simplify access to S3 from within a VPC by providing configurable and highly reliable secure connections to S3 that do not require an internet gateway or Network Address Translation (NAT) device. When you create an S3 VPC endpoint, you can attach an endpoint policy to it that controls access to Amazon S3.
You can use two types of VPC endpoints to access Amazon S3: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
There is no additional charge for using gateway endpoints. However, standard charges for data transfer and resource usage still apply.
Hence, the correct answer is: Create an Amazon S3 gateway endpoint to enable a connection between the instances and Amazon S3.
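A one-call boto3 sketch of the gateway endpoint; the VPC, region, and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Traffic from the private subnets now reaches S3 over the AWS network via the route
# tables, with no NAT Gateway data-processing or hourly charges for this path.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",          # match the VPC's region
    RouteTableIds=["rtb-0priv000000000aa", "rtb-0priv000000000bb"],
)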
There are a few easily reproducible but confidential files that your client wants to store in AWS without worrying about storage capacity. For the first month, all of these files will be accessed frequently but after that, they will rarely be accessed at all. The old files will only be accessed by developers so there is no set retrieval time requirement. However, the files under a specific tdojo-finance prefix in the S3 bucket will be used for post-processing that requires millisecond retrieval time.
Given these conditions, which of the following options would be the most cost-effective solution for your client’s storage needs?
A) Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix to S3-IA while the remaining go to Glacier using lifecycle policy.
B) Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix to One Zone-IA while the remaining go to Glacier using lifecycle policy.
C) Store the files in S3 then after a month, change the storage class of the bucket to Intelligent-Tiering using lifecycle policy.
D) Store the files in S3 then after a month, change the storage class of the bucket to S3-IA using lifecycle policy.
B) Store the files in S3 then after a month, change the storage class of the tdojo-finance prefix to One Zone-IA while the remaining go to Glacier using lifecycle policy.
The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to S3-IA using lifecycle policy is incorrect. Although it is valid to move the files to S3-IA, this solution still costs more compared with using a combination of S3-One Zone IA and Glacier.
The option that says: Storing the files in S3 then after a month, changing the storage class of the bucket to Intelligent-Tiering using lifecycle policy is incorrect. While S3 Intelligent-Tiering can automatically move data between two access tiers (frequent access and infrequent access) when access patterns change, it is more suitable for scenarios where you don’t know the access patterns of your data. It may take some time for S3 Intelligent-Tiering to analyze the access patterns before it moves the data to a cheaper storage class like S3-IA which means you may still end up paying more in the beginning. In addition, you already know the access patterns of the files which means you can directly change the storage class immediately and save cost right away.
The option that says: Storing the files in S3 then after a month, changing the storage class of the tdojo-finance prefix to S3-IA while the remaining go to Glacier using lifecycle policy is incorrect. Even though S3-IA costs less than the S3 Standard storage class, it is still more expensive than S3-One Zone IA. Remember that the files are easily reproducible so you can safely move the data to S3-One Zone IA and in case there is an outage, you can simply generate the missing data again.
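A hedged sketch of the lifecycle configuration behind the correct option, assuming the non-finance data lives under its own hypothetical prefix so the rules do not overlap:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="tdojo-data",
    LifecycleConfiguration={"Rules": [
        {
            "ID": "finance-to-one-zone-ia",
            "Filter": {"Prefix": "tdojo-finance/"},
            "Status": "Enabled",
            # Millisecond retrieval is preserved; One Zone-IA is acceptable because
            # the files are easily reproducible.
            "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
        },
        {
            "ID": "other-data-to-glacier",
            "Filter": {"Prefix": "tdojo-archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        },
    ]},
)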
A company has a requirement to move an 80 TB data warehouse to the cloud. It would take 2 months to transfer the data based on the current bandwidth allocation.
Which option is the most cost-effective for quick data upload to AWS?
A) AWS Snowball Edge
B) Amazon S3 Multipart Upload
C) AWS DataSync
D) AWS Direct Connect
A) AWS Snowball Edge
Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
Hence, the correct answer is: AWS Snowball Edge.
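The numbers in the scenario also show why a network transfer is impractical: moving 80 TB in roughly two months implies a sustained link of only about 120 to 130 Mbps, as a quick calculation confirms:

data_bytes = 80 * 10**12                 # 80 TB
seconds = 60 * 24 * 60 * 60              # roughly 2 months
throughput_mbps = data_bytes * 8 / seconds / 10**6
print(round(throughput_mbps))            # ~123 Mbps, i.e. roughly 15 MB/s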
A web application, which is hosted in the on-premises data center and uses a MySQL database, must be migrated to AWS Cloud. You need to ensure that the network traffic to and from your RDS database instance is encrypted using SSL. For improved security, you have to use the profile credentials specific to your EC2 instance to access your database, instead of a password.
Which of the following should you do to meet the above requirement?
A) Launch a new RDS database instance using Aurora with the Backtrack feature enabled.
B) Launch the mysql client using the --ssl-ca parameter when connecting to the database.
C) Configure your RDS database to enable encryption.
D) Set up an RDS database and enable the IAM DB Authentication.
D) Set up an RDS database and enable the IAM DB Authentication.
IAM database authentication provides the following benefits:
– Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
– You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
– For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.
Hence, setting up an RDS database and enabling IAM DB Authentication is the correct answer since IAM DB Authentication allows the use of profile credentials specific to your EC2 instance to access your RDS database instead of a password.
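A minimal sketch of how the application on the EC2 instance would connect, assuming the pymysql driver and a database user created for IAM authentication; the endpoint and user names are hypothetical:

import boto3
import pymysql

rds = boto3.client("rds")

DB_HOST = "mydb.abcdefghij.us-east-1.rds.amazonaws.com"
DB_USER = "app_user"   # DB user granted IAM database authentication

# The EC2 instance profile credentials sign this token request; no password is stored.
token = rds.generate_db_auth_token(DBHostname=DB_HOST, Port=3306, DBUsername=DB_USER)

# The connection must use SSL; the token is supplied in place of a password.
conn = pymysql.connect(
    host=DB_HOST,
    user=DB_USER,
    password=token,
    port=3306,
    ssl={"ca": "/opt/rds-ca-bundle.pem"},   # path to the downloaded RDS CA certificate
)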
A company runs an internal application on AWS which uses Amazon EC2 instances for compute and Amazon RDS for PostgreSQL for its data store. Considering the application only runs during working hours on weekdays, a solution is required to optimize costs with minimal operational overhead.
Which solution would satisfy these requirements?
A) Purchase reserved instance subscriptions for EC2 and RDS.
B) Create an Amazon CloudWatch alarm that triggers an AWS Lambda function when CPU utilization falls below an idle threshold. In the function, implement logic for stopping both the EC2 instance and the RDS database.
C) Purchase a compute savings plan for EC2 and RDS.
D) Deploy the AWS CloudFormation template of the Instance Scheduler on AWS. Set up the start and stop schedules of the EC2 instance and RDS DB instance.
D) Deploy the AWS CloudFormation template of the Instance Scheduler on AWS. Set up the start and stop schedules of the EC2 instance and RDS DB instance.
AWS Instance Scheduler CloudFormation solution
The important aspect in this scenario is the usage pattern, which doesn’t fit the continuous usage model assumed by Reserved Instance subscriptions or compute savings plans. It’s essential to understand that the Instance Scheduler is not an AWS service or feature per se, but a CloudFormation template provided by AWS. By deploying this template, you can simply set the desired start and stop schedules for your EC2 and RDS instances to match your application’s operating hours.
Hence, the correct answer is: Deploy the AWS CloudFormation template of the Instance Scheduler on AWS. Set up the start and stop schedules of the EC2 instance and RDS DB instance.
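Once the Instance Scheduler stack is deployed with a period covering working hours, resources are opted in simply by tagging them; the tag key shown is the solution's default (Schedule) and the schedule name is hypothetical:

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)

rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:123456789012:db:internal-app-db",
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)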
A Solutions Architect designed a real-time data analytics system based on Kinesis Data Stream and Lambda. A week after the system has been deployed, the users noticed that it performed slowly as the data rate increases. The Architect identified that the performance of the Kinesis Data Streams is causing this problem.
Which of the following should the Architect do to improve performance?
A) Implement Step Scaling to the Kinesis Data Stream.
B) Improve the performance of the stream by decreasing the number of its shards using the MergeShard command.
C) Replace the data stream with Amazon Data Firehose instead.
D) Increase the number of shards of the Kinesis stream by using the UpdateShardCount command.
D) Increase the number of shards of the Kinesis stream by using the UpdateShardCount command.
Splitting increases the number of shards in your stream and therefore increases the data capacity of the stream. Because you are charged on a per-shard basis, splitting increases the cost of your stream. Similarly, merging reduces the number of shards in your stream and therefore decreases the data capacity—and cost—of the stream.
If your data rate increases, you can also increase the number of shards allocated to your stream to maintain the application performance. You can reshard your stream using the UpdateShardCount API. The throughput of an Amazon Kinesis data stream is designed to scale without limits via increasing the number of shards within a data stream.
Hence, the correct answer is: Increase the number of shards of the Kinesis stream by using the UpdateShardCount command.
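For example, the stream's capacity can be doubled with a single resharding call; the stream name is a placeholder:

import boto3

kinesis = boto3.client("kinesis")

summary = kinesis.describe_stream_summary(StreamName="analytics-stream")
open_shards = summary["StreamDescriptionSummary"]["OpenShardCount"]

# UNIFORM_SCALING splits (or merges) shards evenly to reach the target count.
kinesis.update_shard_count(
    StreamName="analytics-stream",
    TargetShardCount=open_shards * 2,
    ScalingType="UNIFORM_SCALING",
)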
A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API.
What should the Solutions Architect do to meet the above requirement?
A) Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds.
B) Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds.
C) Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.
D) Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds.
C) Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.
The option that says: Create an API using Amazon API Gateway and use the Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds is incorrect. AWS Lambda is a better option than Amazon ECS since it can handle a sudden burst of traffic within seconds and not minutes.
The option that says: Create an API using Amazon API Gateway and use Amazon Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds is incorrect because just like the previous option, the use of Auto Scaling has a delay of a few minutes as it launches new EC2 instances that will be used by Amazon Elastic Beanstalk.
The option that says: Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds is incorrect because the processing time of Amazon EC2 Auto Scaling to provision new resources takes minutes. Take note that in the scenario, a burst of traffic within seconds is expected to happen.
A startup is planning to set up and govern a secure, compliant, multi-account AWS environment in preparation for its upcoming projects. The IT Manager requires the solution to have a dashboard for continuous detection of policy non-conformance and non-compliant resources across the enterprise, as well as to comply with the AWS multi-account strategy best practices.
Which of the following offers the easiest way to fulfill this task?
A) Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.
B) Launch new AWS member accounts using the AWS CloudFormation StackSets. Use AWS Config to continuously track the configuration changes and set rules to monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.
C) Use AWS Organizations to build a landing zone to automatically provision new AWS accounts. Utilize the AWS Personal Health Dashboard to see provisioned accounts across your enterprise. Enable preventive and detective guardrails for policy enforcement.
D) Use AWS Service Catalog to launch new AWS member accounts. Configure AWS Service Catalog Launch Constraints to continuously track configuration changes and monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view.
A) Use AWS Control Tower to launch a landing zone to automatically provision and configure new accounts through an Account Factory. Utilize the AWS Control Tower dashboard to monitor provisioned accounts across your enterprise. Set up preventive and detective guardrails for policy enforcement.
The option that says: Use AWS Organizations to build a landing zone to automatically provision new AWS accounts. Utilize the AWS Personal Health Dashboard to see provisioned accounts across your enterprise. Enable preventive and detective guardrails for policy enforcement is incorrect. The AWS Organizations service neither has the capability to build a landing zone nor a built-in dashboard for continuous detection of policy non-conformance and non-compliant resources across the enterprise. Moreover, the AWS Personal Health Dashboard simply provides alerts and remediation guidance when AWS is experiencing events that may impact your resources. This type of dashboard is not meant for monitoring newly provisioned AWS accounts. The most suitable solution here is to use AWS Control Tower and its various features.
The option that says: Launch new AWS member accounts using the AWS CloudFormation StackSets. Use AWS Config to continuously track the configuration changes and set rules to monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view is incorrect. Although the solution might work to monitor non-compliant resources, this is not the easiest way to fulfill the multi-account requirement in the scenario. It still takes a lot of time to configure CloudFormation StackSets templates and set up AWS Config for all your AWS member accounts.
The option that says: Use AWS Service Catalog to launch new AWS member accounts. Configure AWS Service Catalog Launch Constraints to continuously track configuration changes and monitor non-compliant resources. Set up a Multi-Account Multi-Region Data Aggregator to monitor compliance data for rules and accounts in an aggregated view is incorrect. AWS Service Catalog is not used to detect non-compliant resources and is only used to manage catalogs of IT services from virtual machine images, servers, software, databases, and other resources. This service is primarily used centrally to curate and share commonly deployed templates with their teams to achieve consistent governance and meet compliance requirements.
An organization needs to control access to several Amazon S3 buckets. The organization plans to use a gateway endpoint to allow access to trusted buckets.
Which of the following could help you achieve this requirement?
A) Generate an endpoint policy for trusted S3 buckets.
B) Generate an endpoint policy for trusted VPCs.
C) Generate a bucket policy for trusted S3 buckets.
D) Generate a bucket policy for trusted VPCs.
A) Generate an endpoint policy for trusted S3 buckets.
The option that says: Generate a bucket policy for trusted S3 buckets is incorrect. Although this is a valid solution, it takes a lot of time to set up a bucket policy for each and every S3 bucket. This can be simplified by whitelisting access to trusted S3 buckets in a single S3 endpoint policy.
The option that says: Generate a bucket policy for trusted VPCs is incorrect because you are generating a policy for trusted VPCs. Remember that the scenario only requires you to allow the traffic for trusted S3 buckets, not to the VPCs.
The option that says: Generate an endpoint policy for trusted VPCs is incorrect because it only allows access to trusted VPCs, and not to trusted Amazon S3 buckets.
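A sketch of such an endpoint policy applied to an existing S3 gateway endpoint; the endpoint ID and bucket names are hypothetical:

import json
import boto3

ec2 = boto3.client("ec2")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::trusted-bucket-1",
            "arn:aws:s3:::trusted-bucket-1/*",
            "arn:aws:s3:::trusted-bucket-2",
            "arn:aws:s3:::trusted-bucket-2/*",
        ],
    }],
}

# Attach the policy to the gateway endpoint so only the trusted buckets are reachable.
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",
    PolicyDocument=json.dumps(endpoint_policy),
)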
An online shopping platform has been deployed to AWS using Elastic Beanstalk. They simply uploaded their Node.js application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Since the entire deployment process is automated, the DevOps team is not sure where to get the application log files of their shopping platform.
In Elastic Beanstalk, where does it store the application files and server log files?
A) Application files are stored in S3. The server log files can also optionally be stored in S3 or in CloudWatch Logs.
B) Application files are stored in S3. The server log files can be optionally stored in CloudTrail or in CloudWatch Logs.
C) Application files are stored in S3. The server log files can be stored directly in Glacier or in CloudWatch Logs.
D) Application files are stored in S3. The server log files can only be stored in the attached EBS volumes of the EC2 instances, which were launched by AWS Elastic Beanstalk.
A) Application files are stored in S3. The server log files can also optionally be stored in S3 or in CloudWatch Logs.
The option that says: Application files are stored in S3. The server log files can only be stored in the attached EBS volumes of the EC2 instances, which were launched by AWS Elastic Beanstalk is incorrect because the server log files can also be stored in either S3 or CloudWatch Logs, and not only on the EBS volumes of the EC2 instances which are launched by AWS Elastic Beanstalk.
The option that says: Application files are stored in S3. The server log files can be stored directly in Glacier or in CloudWatch Logs is incorrect because the server log files can optionally be stored in either S3 or CloudWatch Logs, but not directly to Glacier. You can create a lifecycle policy to the S3 bucket to store the server logs and archive it in Glacier, but there is no direct way of storing the server logs to Glacier using Elastic Beanstalk unless you do it programmatically.
The option that says: Application files are stored in S3. The server log files can be optionally stored in CloudTrail or in CloudWatch Logs is incorrect because the server log files can optionally be stored in either S3 or CloudWatch Logs, but not directly to CloudTrail as this service is primarily used for auditing API calls.
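A short boto3 sketch of both log destinations for a hypothetical environment name: streaming instance logs to CloudWatch Logs through the environment's option settings, and requesting a full log bundle that Elastic Beanstalk publishes to S3:

import boto3

eb = boto3.client("elasticbeanstalk")

# Stream instance logs to CloudWatch Logs.
eb.update_environment(
    EnvironmentName="shopping-platform-env",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "StreamLogs", "Value": "true"},
        {"Namespace": "aws:elasticbeanstalk:cloudwatch:logs",
         "OptionName": "RetentionInDays", "Value": "30"},
    ],
)

# Or request a full log bundle, which Elastic Beanstalk stores in S3 for retrieval.
eb.request_environment_info(EnvironmentName="shopping-platform-env", InfoType="bundle")
info = eb.retrieve_environment_info(EnvironmentName="shopping-platform-env", InfoType="bundle")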
An application is hosted in an On-Demand EC2 instance and uses the AWS SDK to communicate with other AWS services such as S3, DynamoDB, and many others. As part of the upcoming IT audit, you need to ensure that all API calls to your AWS resources are logged and durably stored.
Which is the most suitable service that you should use to meet this requirement?
A) Amazon API Gateway
B) AWS X-Ray
C) AWS CloudTrail
D) Amazon CloudWatch
C) AWS CloudTrail
Amazon CloudWatch is incorrect because this is primarily used for systems monitoring based on the server metrics. It does not have the capability to track API calls to your AWS resources.
AWS X-Ray is incorrect because this is usually used to debug and analyze your microservices applications with request tracing so you can find the root cause of issues and performance bottlenecks. Unlike CloudTrail, it does not record the API calls that were made to your AWS resources.
Amazon API Gateway is incorrect because this is not used for logging each and every API call to your AWS resources. It is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
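A minimal boto3 sketch of setting up the trail; the trail and bucket names are hypothetical, and the bucket must already carry the CloudTrail bucket policy:

import boto3

cloudtrail = boto3.client("cloudtrail")

# A multi-Region trail that durably stores every API call's log files in S3.
cloudtrail.create_trail(
    Name="audit-trail",
    S3BucketName="company-cloudtrail-logs",
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="audit-trail")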