April - May 2024 Flashcards

1
Q

A company has developed a multi-account strategy on AWS by using AWS Control Tower. The company has provided individual AWS accounts to each of its developers. The company wants to implement controls to limit the AWS resource costs that the developers incur. Which solution will meet these requirements with the LEAST operational overhead?

A. Instruct each developer to tag all their resources with a tag that has a key of CostCenter and a value of the developer's name. Use the required-tags AWS Config managed rule to check for the tag. Create an AWS Lambda function to terminate resources that do not have the tag. Configure AWS Cost Explorer to send a daily report to each developer to monitor their spending.
B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets actions to apply a DenyAll policy to the developer’s IAM role to prevent additional resources from being launched when the assigned budget is reached.
C. Use AWS Cost Explorer to monitor and report on costs for each developer account. Configure Cost Explorer to send a daily report to each developer to monitor their spending. Use AWS Cost Anomaly Detection to detect anomalous spending and provide alerts
D. Use AWS Service Catalog to allow developers to launch resources within a limited cost range. Create AWS Lambda functions in each AWS account to stop running resources at the end of each work day. Configure the Lambda functions to resume the resources at the start of each work day.

A

B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets actions to apply a DenyAll policy to the developer’s IAM role to prevent additional resources from being launched when the assigned budget is reached.

  • C only reports and alerts; it doesn’t enforce a spending limit
  • A and D rely on custom Lambda automation and tagging/Service Catalog setup, which adds operational overhead
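
A minimal boto3 sketch of the budget-plus-alert piece of the chosen answer (the account ID, limit, and email address are placeholder assumptions; the budget action that attaches the DenyAll IAM policy would be added separately with the Budgets create_budget_action API):

import boto3

budgets = boto3.client("budgets")

# Per-developer monthly budget with an alert at 80% of actual spend.
budgets.create_budget(
    AccountId="111122223333",  # placeholder developer account ID
    Budget={
        "BudgetName": "developer-monthly-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "dev@example.com"}
            ],
        }
    ],
)
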
2
Q

A solutions architect is designing a three-tier web app. The architecture consists of an internet-facing Application Load Balancer (ALB) and a web tier that is hosted on Amazon EC2 instances in private subnets. The application tier with the business logic runs on EC2 instances in private subnets. The database tier consists of Microsoft SQL Server that runs on EC2 instances in private subnets. Security is a high priority for the company. Which combination of security group configurations should the solutions architect use? (Choose 3)

A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group for the ALB
B. Configure the security group for the web tier to allow outbound HTTPS traffic to 0.0.0.0/0.
C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from the security group for the application tier.
D. Configure the security group for the database tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier
E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security group for the web tier
F. Configure the security group for the application tier to allow outbound HTTPS traffic and Microsoft SQL Server traffic to the security group for the web tier

A

A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group for the ALB
C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from the security group for the application tier.
E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security group for the web tier

  • the database and application tiers have no reason to open traffic toward the web tier, so those outbound rules add nothing (D/F)
  • reference security groups rather than CIDR ranges; opening outbound HTTPS to 0.0.0.0/0 is unnecessary (B)
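
A minimal boto3 sketch of the chosen ingress rules, showing security groups that reference other security groups instead of CIDR ranges (all sg- IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Web tier: allow HTTPS in only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-web-tier",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-alb"}],
    }],
)

# Database tier: allow SQL Server (TCP 1433) in only from the application tier.
ec2.authorize_security_group_ingress(
    GroupId="sg-db-tier",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1433,
        "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-app-tier"}],
    }],
)
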
3
Q

A company has released a new version of its production application. The company's workload uses Amazon EC2, AWS Lambda, AWS Fargate, and Amazon SageMaker. The company wants to cost optimize the workload now that usage is at a steady state. The company wants to cover the most services with the fewest Savings Plans. Which combination of Savings Plans will meet these requirements? (Choose 2)

A. Purchase an EC2 instance Savings Plan for Amazon EC2 and SageMaker
B. Purchase a compute savings plan for Amazon EC2, Lambda, and SageMaker
C. Purchase a SageMaker savings plan
D. Purchase a compute savings plan for Lambda, Fargate, and Amazon EC2
E. Purchase an EC2 instance savings plan for Amazon EC2 and Fargate

A

C. Purchase a SageMaker savings plan
D. Purchase a compute savings plan for Lambda, Fargate, and Amazon EC2

  • EC2 Instance Savings Plans cover only EC2; they do not apply to Fargate, Lambda, or SageMaker (A/E)
  • Compute Savings Plans cover EC2, Fargate, and Lambda but do NOT cover SageMaker (B)
4
Q

A company uses a Microsoft SQL Server database. The company's apps are connected to the database. The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the application code. Which combination of steps will meet these requirements? (Choose 2)

A. Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the apps
B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the apps
C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS)
D. Use Amazon RDS Proxy to connect the apps to Aurora PostgreSQL
E. Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the apps

A

B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the apps
C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS)

  • Babelfish lets Aurora PostgreSQL understand the apps' T-SQL queries, so the application code needs minimal changes (B)
  • using AWS SCT to rewrite application queries is a manual, error-prone effort (A)
  • RDS Proxy handles connection management; it does nothing for SQL compatibility (D)
  • DMS migrates data; it does not rewrite or convert SQL (E)
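
A minimal boto3 sketch of turning Babelfish on through a cluster parameter group; the rds.babelfish_status parameter and the aurora-postgresql15 family are assumptions based on the Aurora Babelfish setup steps, so verify them against the engine version you actually use:

import boto3

rds = boto3.client("rds")

# Cluster parameter group for an Aurora PostgreSQL version that supports Babelfish
# (the family string is an assumption; match it to your engine version).
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-babelfish",
    DBParameterGroupFamily="aurora-postgresql15",
    Description="Enable Babelfish for T-SQL compatibility",
)

# Turn Babelfish on; the change applies when the cluster uses this parameter group.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-babelfish",
    Parameters=[{
        "ParameterName": "rds.babelfish_status",
        "ParameterValue": "on",
        "ApplyMethod": "pending-reboot",
    }],
)
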
5
Q

A company plans to rehost an app to Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) as the attached storage. A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes. Which solution will meet these requirements?

A. Configure the EC2 account attributes to always encrypt new EBS volumes.
B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.
C. Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.
D. Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.

A

B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.

  • AWS Config lets you define and check encryption rules with the encrypted-volumes managed rule
  • A doesn't prevent the creation of unencrypted volumes
  • C is extra work: it makes encrypted copies of existing volumes instead of enforcing encryption for new ones
  • D doesn't enforce any encryption
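
A minimal boto3 sketch of deploying the managed rule from the chosen answer (the rule name is a placeholder; ENCRYPTED_VOLUMES is the AWS managed rule identifier that flags unencrypted EBS volumes):

import boto3

config = boto3.client("config")

# AWS Config managed rule that flags EBS volumes that are not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
    }
)
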
6
Q

An ecommerce company wants to collect user clickstream data from the company's website for real-time analysis. The website experiences fluctuating traffic patterns throughout the day. The company needs a scalable solution that can adapt to varying levels of traffic. Which solution will meet these requirements?

A. Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time
B. Use Amazon Kinesis Data Firehose to capture the clickstream data. Use AWS Glue to process the data in real time
C. Use Amazon Kinesis Video Streams to capture the clickstream data. Use AWS Glue to process the data in real time
D. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to capture the clickstream data. Use AWS Lambda to process the data in real time.

A

A. Use a data stream in Amazon Kinesis Data Streams in on-demand mode to capture the clickstream data. Use AWS Lambda to process the data in real time

  • Kinesis Data Streams is built for clickstream event ingestion, and on-demand mode scales capacity automatically with fluctuating traffic (A)
  • Firehose delivers data to destinations such as S3 for near-real-time or batch delivery, and Glue is not a real-time processor (B)
  • Kinesis Video Streams is for video data, not clickstream events (C)
  • Managed Service for Apache Flink is a stream processor, not a capture/ingestion service, and it adds complexity (D)
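
A minimal boto3 sketch of the chosen answer (stream name, account ID, and function name are placeholders):

import boto3

kinesis = boto3.client("kinesis")
lam = boto3.client("lambda")

# On-demand stream: capacity scales automatically with traffic.
kinesis.create_stream(
    StreamName="clickstream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Wire an existing Lambda function to the stream for real-time processing.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/clickstream",
    FunctionName="process-clickstream",
    StartingPosition="LATEST",
    BatchSize=100,
)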

7
Q

A global company runs its workloads on AWS. The company's app uses Amazon S3 buckets across AWS Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets daily. The company wants to identify all S3 buckets that are not versioning-enabled. Which solution will meet these requirements?

A. Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions
B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions
C. Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across Regions
D. Create an S3 Multi-Region Access Point (MRAP) to identify all S3 buckets that are not versioning-enabled across Regions

A

B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions

  • CloudTrail logs API activity; pulling versioning status out of it would require manual extraction (A)
  • IAM Access Analyzer focuses on permission analysis, not bucket versioning (C)
  • an MRAP is a global endpoint for data access and replication; it doesn't report on bucket configuration (D)
8
Q

A company needs to optimize its Amazon S3 storage costs for an app that generates many files that cannot be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage. The company must store the files for 4 years before the files can be deleted. The files must be immediately accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed after the first 30 days. Which solution will meet these requirements MOST cost-effectively?

A. Create an S3 lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation
B. Create an S3 lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation
C. Create an S3 lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation
D. Create an S3 lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation

A

A. Create an S3 lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object creation. Delete the files 4 years after object creation

  • S3 One Zone-IA keeps data in a single Availability Zone; these files cannot be recreated if that zone is lost (B)
  • Glacier Instant Retrieval is cheaper than Standard-IA for rarely accessed data and still offers millisecond access (A over C/D)
  • quick heuristics: Standard = frequent/immediate access; Intelligent-Tiering = unknown access pattern; Standard-IA = infrequent access; Glacier Instant Retrieval = rarely accessed but must stay instantly accessible
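
A minimal boto3 sketch of the chosen lifecycle rule (the bucket name is a placeholder; 1460 days approximates 4 years):

import boto3

s3 = boto3.client("s3")

# Transition to Glacier Instant Retrieval after 30 days, expire after ~4 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},            # apply to all objects
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER_IR"}],
            "Expiration": {"Days": 1460},        # delete after ~4 years
        }]
    },
)
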
9
Q

A company runs its critical storage app in the AWS Cloud. The app uses Amazon S3 in two AWS Regions. The company wants the app to send remote user data to the nearest S3 bucket with no public network congestion. The company also wants the app to fail over with the least amount of management of Amazon S3. Which solution will meet these requirements?

A. Implement an active-active design between the two Regions. Configure the app to use the regional S3 endpoints closest to the user.
B. Use an active-passive configuration with S3 Multi-Region Access Points. Create a global endpoint for each of the Regions.
C. Send user data to the regional S3 endpoints closest to the user. Configure an S3 cross-account replication rule to keep the S3 buckets synchronized.
D. Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication

A

D. Set up Amazon S3 to use Multi-Region Access Points in an active-active configuration with a single global endpoint. Configure S3 Cross-Region Replication

  • configuring each client to use the Regional endpoint closest to the user is manual and inefficient (A)
  • active-passive setups require manual intervention on failover (B)
  • C lacks intelligent routing and automatic failover
  • a Multi-Region Access Point's single global endpoint routes each user to the nearest S3 bucket over the AWS global network and fails over with minimal management (D)
10
Q

A company is migrating a data center from its on-premises location to AWS. The company has several legacy applications that are hosted on individual virtual servers. Changes to the app designs cannot be made. Each individual virtual server currently runs as its own EC2 instance. A solutions architect needs to ensure that the apps are reliable and fault tolerant after migration to AWS. The apps will run on Amazon EC2 instances. Which solution will meet these requirements?

A. Create an Auto Scaling group that has a minimum of one and a maximum of one. Create an Amazon Machine Image (AMI) of each app instance. Use the AMI to create EC2 instances in the Auto Scaling group. Configure an Application Load Balancer in front of the Auto Scaling group.
B. Use AWS Backup to create an hourly backup of the EC2 instance that hosts each app. Store the backup in Amazon S3 in a separate Availability Zone. Configure a disaster recovery process to restore the EC2 instance for each app from its most recent backup.
C. Create an Amazon Machine Image (AMI) of each app instance. Launch two new EC2 instances from the AMI. Place each EC2 instance in a separate Availability Zone. Configure a Network Load Balancer that has the EC2 instances as targets
D. Use AWS Migration Hub Refactor Spaces to migrate each app off the EC2 instance. Break down functionality from each app into individual components. Host each app on Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type.

A

C. Create an Amazon Machine Image (AMI) of each app instance. Launch two new EC2 instances from the AMI. Place each EC2 instance in a separate Availability Zone. Configure a Network Load Balancer that has the EC2 instances as targets

  • an Auto Scaling group capped at one instance keeps each app on a single instance, so there is no standby capacity in another Availability Zone while a replacement launches (A)
  • backups support recovery after a failure; they don't prevent downtime (B)
  • breaking the apps into containers on ECS/Fargate requires design changes, which are not allowed (D)
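
A minimal boto3 sketch of the AMI-and-two-AZs part of the chosen answer (instance ID, subnet IDs, and instance type are placeholders; registering the instances with the Network Load Balancer's target group is omitted):

import boto3

ec2 = boto3.client("ec2")

# Capture the legacy server as an AMI (instance ID is a placeholder).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="legacy-app-v1")

# In practice, wait for the AMI to reach the 'available' state before launching.
# Launch one instance per Availability Zone (subnet IDs are placeholders).
for subnet_id in ["subnet-az1", "subnet-az2"]:
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
    )
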
11
Q

A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails). Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts
B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.

A

B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.

  • Control Tower is a broader setup than AWS Organizations (A/C)
  • creating and routing a separate VPC in each workload account adds work and complexity (C/D)
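
A minimal boto3 sketch of the subnet-sharing part of the chosen answer (the subnet ARN and account IDs are placeholders):

import boto3

ram = boto3.client("ram")

# Share the networking account's subnets with a workload account.
ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234",
    ],
    principals=["444455556666"],        # placeholder workload account ID
    allowExternalPrincipals=False,      # keep sharing inside the organization
)
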
12
Q

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs. Which solution will meet these requirements?

A. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket
B. Move the website to an Amazon S3 bucket. Configure an Amazon ElastiCache cluster for the S3 bucket
C. Move the website to AWS Amplify. Configure an ALB to resolve to the Amplify website
D. Move the website to AWS Amplify. Configure EC2 instances to cache the website

A

A. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3 bucket

  • static content = S3 with CloudFront; Amplify plus an ALB or EC2 caching adds unnecessary cost (C/D)
  • ElastiCache is an in-memory cache, not a CDN for S3 content, and it adds complexity and cost compared with CloudFront (B)

13
Q

A company is implementing a shared storage solution for a media application that the company hosts on AWS. The company needs the ability to use SMB clients to access stored data. Which solution will meet these requirements with the LEAST admin overhead?

A. Create an AWS Storage Gateway Volume Gateway. Create a file share that uses the required client protocol. Connect the app server to the file share
B. Create an AWS Storage Gateway Tape Gateway. Configure tapes to use Amazon S3. Connect the app server to the Tape Gateway.
C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the app server to the file share
D. Create an Amazon FSx for Windows File Server file system. Connect the app server to the file system

A

D. Create an Amazon FSx for Windows File Server file system. Connect the app server to the file system

  • Volume Gateway and Tape Gateway do not serve SMB file shares; they are block-storage and backup oriented (A/B)
  • a self-managed Windows file server on EC2 means ongoing patching and administration (C)
  • FSx for Windows File Server is an AWS managed service with native SMB support (D)
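
A minimal boto3 sketch of the chosen answer, assuming an existing AWS Managed Microsoft AD directory; all IDs and sizes are placeholders:

import boto3

fsx = boto3.client("fsx")

# SMB file share backed by a managed Windows file system
# (directory, subnet, and security group IDs are placeholders).
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=64,                     # GiB
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",  # placeholder directory ID
        "ThroughputCapacity": 32,             # MB/s
    },
)
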
14
Q

A company is designing its production application's disaster recovery (DR) strategy. The app is backed by a MySQL database on an Amazon Aurora cluster in the us-east-1 Region. The company has chosen the us-west-1 Region as its DR Region. The company's target recovery point objective (RPO) is 5 minutes and the target recovery time objective (RTO) is 20 minutes. The company wants to minimize configuration changes. Which solution will meet these requirements with the MOST operational efficiency?

A. Create an Aurora read replica in us-west-1 similar in size to the production application's Aurora MySQL cluster writer instance.
B. Convert the Aurora cluster to an Aurora global database. Configure managed failover
C. Create a new Aurora cluster in us-west-1 that has Cross-Region Replication
D. Create a new Aurora cluster in us-west-1. Use AWS Database Migration Service (AWS DMS) to sync both clusters

A

B. Convert the Aurora cluster to an Aurora global database. Configure managed failover

  • a cross-Region read replica alone doesn't provide fast, managed failover (A)
  • a separate cluster with cross-Region replication has a higher RPO than an Aurora global database and needs a more complex setup (C)
  • DMS is intended for database migration, not ongoing DR replication (D)
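
A minimal boto3 sketch of converting the cluster into a global database and adding the DR Region (identifiers and the cluster ARN are placeholders; the DB instances for the secondary cluster are omitted):

import boto3

rds = boto3.client("rds")

# Promote the existing us-east-1 Aurora cluster to a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="prod-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora",
)

# Add a secondary cluster in the DR Region that joins the global database.
rds_west = boto3.client("rds", region_name="us-west-1")
rds_west.create_db_cluster(
    DBClusterIdentifier="prod-aurora-dr",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="prod-global",
)
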
15
Q

A company runs a critical data analysis job each week before the first day of the work week. The job requires at least 1 hour to complete the analysis. The job is stateful and cannot tolerate interruptions. The company needs a solution to run the job on AWS. Which solution will meet these requirements?

A. Create a container for the job. Schedule the job to run as an AWS Fargate task on an Amazon Elastic Container Service (Amazon ECS) cluster by using Amazon EventBridge Scheduler
B. Configure the job to run in an AWS Lambda function. Create a scheduled rule in Amazon EventBridge to invoke the Lambda function
C. Configure an Auto Scaling group of Amazon EC2 Spot Instances that run Amazon Linux. Configure a crontab entry on the instances to run the analysis
D. Configure an AWS DataSync task to run the job. Configure a cron expression to run the task on a schedule

A

A. Create a container for the job. Schedule the job to run as an AWS Fargate task on an Amazon Elastic Container Service (Amazon ECS) cluster by using Amazon EventBridge Scheduler

  • a job that needs at least 1 hour exceeds Lambda's 15-minute limit (B)
  • "stateful and cannot tolerate interruptions" rules out Spot Instances (C)
  • DataSync is a data transfer service, not a compute service for running analysis (D)
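
A minimal boto3 sketch of the chosen answer using EventBridge Scheduler to launch the Fargate task weekly (all ARNs, the subnet, and the cron time are placeholder assumptions):

import boto3

scheduler = boto3.client("scheduler")

# Run the Fargate task every Sunday at 04:00 UTC.
scheduler.create_schedule(
    Name="weekly-analysis-job",
    ScheduleExpression="cron(0 4 ? * SUN *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/analysis",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ecs-role",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/analysis:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0abc1234"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
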
16
Q

A company runs workloads in the AWS Cloud. The company wants to centrally collect security data to assess security across the entire company and to improve workload protection. Which solution will meet these requirements with the LEAST development effort?

A. Configure a data lake in AWS Lake Formation. Use AWS Glue crawlers to ingest the security data into the data lake.
B. Configure an AWS Lambda function to collect the security data in .csv format. Upload the data to an Amazon S3 bucket.
C. Configure a data lake in Amazon Security Lake to collect the security data. Upload the data to an Amazon S3 bucket.
D. Configure an AWS Database Migration Service (AWS DMS) replication instance to load the security data into an Amazon RDS cluster

A

C. Configure a data lake in Amazon Security Lake to collect the security data. Upload the data to an Amazon S3 bucket.

  • building a general-purpose data lake with Lake Formation and Glue crawlers takes significant configuration (A)
  • a custom Lambda/.csv pipeline is manual and doesn't centralize or scale security data (B)
  • DMS is for database migration and adds nothing for security analysis (D)
17
Q

A company is migrating five on-premises apps to VPCs in the AWS Cloud. Each app is currently deployed in an isolated virtual network on premises and should be deployed similarly in the AWS Cloud. The apps need to reach a shared services VPC. All the apps must be able to communicate with each other. If the migration is successful, the company will repeat the migration process for more than 100 apps. Which solution will meet these requirements with the LEAST admin overhead?

A. Deploy software VPN tunnels between the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets to the shared services VPC
B. Deploy VPC peering connections between the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets to the shared services VPC through the peering connection
C. Deploy an AWS Direct Connect connection between the application VPCs and the shared services VPC. Add routes from the application VPCs in their subnets to the shared services VPC and the application VPCs. Add routes from the shared services VPC subnets to the application VPCs
D. Deploy a transit gateway with associations between the transit gateway and the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets and the application VPCs to the shared services VPC through the transit gateway

A

D. Deploy a transit gateway with associations between the transit gateway and the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets and the application VPCs to the shared services VPC through the transit gateway

  • a mesh of software VPN tunnels between VPCs is a lot of manual setup and maintenance (A)
  • VPC peering works for connecting two VPCs; connecting many creates a full mesh that is admin heavy and doesn't scale to 100+ apps (B)
  • Direct Connect links on-premises networks to AWS; it is not used to route traffic between VPCs (C)
  • a transit gateway is the hub for connecting three or more VPCs and scales to the planned 100+ apps (D)
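
A minimal boto3 sketch of the hub-and-spoke setup in the chosen answer (VPC, subnet, and route table IDs plus the CIDR range are placeholders; repeat the attachment and routes for each application VPC):

import boto3

ec2 = boto3.client("ec2")

# One transit gateway acts as the hub for all application VPCs
# and the shared services VPC.
tgw = ec2.create_transit_gateway(Description="app-to-shared-services hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

for vpc_id, subnet_id in [("vpc-app1", "subnet-app1"),
                          ("vpc-shared", "subnet-shared")]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )

# In each VPC's route table, point the other VPCs' CIDR ranges at the TGW.
ec2.create_route(
    RouteTableId="rtb-app1",
    DestinationCidrBlock="10.1.0.0/16",   # shared services CIDR (assumption)
    TransitGatewayId=tgw_id,
)
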
18
Q

A company wants to use Amazon Elastic Container Service (Amazon ECS) to run its on-premises app in a hybrid environment. The app currently runs in containers on premises. The company needs a single container solution that can scale in an on-premises, hybrid, or cloud environment. The company must run new app containers in the AWS Cloud and must use a load balancer for HTTP traffic. Which combination of actions will meet these requirements? (Choose 2)

A. Set up an ECS cluster that uses the AWS Fargate launch type for the cloud application containers. Use an Amazon ECS Anywhere external launch type for the on prem app containers
B. Set up an Application Load Balancer for cloud ECS services
C. Set up a Network Load Balancer for cloud ECS services
D. Set up an ECS cluster that uses the AWS Fargate launch type. Use Fargate for the cloud application containers and the on prem app containers
E. Set up an ECS cluster that uses the Amazon EC2 launch type for the cloud application containers. Use Amazon ECS Anywhere with an AWS Fargate launch type for the on prem app containers

A

A. Set up an ECS cluster that uses the AWS Fargate launch type for the cloud application containers. Use an Amazon ECS Anywhere external launch type for the on prem app containers
B. Set up an Application Load Balancer for cloud ECS services

  • a Network Load Balancer operates at Layer 4; an ALB operates at Layer 7, which is what HTTP traffic needs (B over C)
  • Fargate doesn't run on premises; on-premises containers need the ECS Anywhere EXTERNAL launch type (A over D/E)
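
A minimal boto3 sketch of the cloud half of the chosen answer: a Fargate ECS service registered behind an ALB target group (cluster, task definition, target group, and network IDs are placeholders):

import boto3

ecs = boto3.client("ecs")

# Cloud containers on Fargate, fronted by an ALB target group.
ecs.create_service(
    cluster="hybrid-cluster",
    serviceName="web-cloud",
    taskDefinition="web-app:1",
    desiredCount=2,
    launchType="FARGATE",
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",
        "containerName": "web",
        "containerPort": 80,
    }],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
        }
    },
)

# On-premises containers register through ECS Anywhere and use launchType="EXTERNAL".
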
19
Q

A company is migrating its workloads to AWS. The company has sensitive and critical data in on-premises relational databases that run on SQL Server instances. The company wants to use the AWS Cloud to increase security and reduce operational overhead for the databases. Which solution will meet these requirements?

A. Migrate the databases to Amazon EC2 instances. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption
B. Migrate the databases to a Multi-AZ Amazon RDS for SQL Server DB instance. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption
C. Migrate the data to an Amazon S3 bucket. Use Amazon Macie to ensure data security
D. Migrate the databases to an Amazon DynamoDB table. Use Amazon CloudWatch Logs to ensure data security

A

B. Migrate the databases to a Multi-AZ Amazon RDS for SQL Server DB instance. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption

  • Multi-AZ RDS is fully managed and automatically replicates data across Availability Zones, and KMS encryption covers the sensitive data (B)
  • SQL Server on EC2 is self-managed and adds admin overhead (A)
  • S3 is object storage, not a relational database, and Macie classifies data rather than hosting it (C)
  • DynamoDB is a NoSQL database with a fundamentally different structure from a relational database (D)
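
A minimal boto3 sketch of the chosen answer (identifier, instance class, storage size, and credentials are placeholders):

import boto3

rds = boto3.client("rds")

# Managed, Multi-AZ SQL Server with storage encrypted by the AWS managed KMS key.
rds.create_db_instance(
    DBInstanceIdentifier="prod-sqlserver",
    Engine="sqlserver-se",
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",   # placeholder; use Secrets Manager in practice
    MultiAZ=True,
    StorageEncrypted=True,   # uses the AWS managed aws/rds key unless KmsKeyId is set
)
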
20
Q

A company wants to migrate an app to AWS. The company wants to increase the app's current availability. The company wants to use AWS WAF in the app's architecture. Which solution will meet these requirements?

A. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the ALB.
B. Create a cluster placement group that contains multiple Amazon EC2 instances that host the app. Configure an Application Load Balancer (ALB) and set the EC2 instances as the targets. Connect a WAF to the placement group
C. Create two Amazon EC2 instances that host the app across two Availability Zones. Configure the EC2 instances as the targets of an Application Load Balancer (ALB). Connect a WAF to the ALB
D. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the app across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the Auto Scaling group

A

A. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as the target. Connect a WAF to the ALB.

  • AWS WAF attaches to an ALB, not to a placement group or an Auto Scaling group (B/D), and an Auto Scaling group across two Availability Zones provides more availability than two standalone instances (C)
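
A minimal boto3 sketch of attaching an existing web ACL to the ALB (both ARNs are placeholders):

import boto3

wafv2 = boto3.client("wafv2")

# Associate a REGIONAL-scope web ACL with the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",
)
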
21
Q

A company manages a data lake in an Amazon S3 bucket that numerous apps access. The S3 bucket contains a unique prefix for each app. The company wants to restrict each app to its specific prefix and to have granular control of the objects under each prefix. Which solution will meet these requirements with the LEAST operational overhead?

A. Create dedicated S3 access points and access point policies for each app
B. Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket
C. Replicate the objects in the S3 bucket to new S3 buckets for each app. Create replication rules by prefix
D. Replicate the objects in the S3 bucket to new S3 buckets for each app. Create dedicated S3 access points for each app

A

A. Create dedicated S3 access points and access point policies for each application

  • a dedicated access point per app, each with its own access point policy scoped to that app's prefix, gives granular control with minimal overhead (A); B means managing ACLs object by object, and C/D duplicate the data into new buckets
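
A minimal boto3 sketch of one per-app access point with a policy scoped to that app's prefix (account ID, names, role ARN, and prefix are placeholders):

import json

import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"   # placeholder account ID

# One access point per app, attached to the shared data lake bucket.
s3control.create_access_point(
    AccountId=account_id,
    Name="app1-ap",
    Bucket="shared-data-lake",
)

# Access point policy that limits the app's role to its own prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/app1-role"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": f"arn:aws:s3:us-east-1:{account_id}:accesspoint/app1-ap/object/app1/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="app1-ap", Policy=json.dumps(policy)
)
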
22
Q

A company has an app that customers use to upload images to an Amazon S3 bucket. Each night, the company launches an Amazon EC2 Spot Fleet that processes all the images that the company received that day. The processing for each image takes 2 minutes and requires 512 MB of memory. A solutions architect needs to change the app to process the images when the images are uploaded. Which change will meet these requirements MOST cost-effectively?

A. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process the images
B. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an EC2 Reserved Instance to read the messages from the queue and to process the images
C. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure a container instance in Amazon Elastic Container Service (Amazon ECS) to subscribe to the topic and to process the images.
D. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification Service (Amazon SNS) topic. Configure an AWS Elastic Beanstalk application to subscribe to the topic and to process the images

A

A. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service (Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process the images

  • a 2-minute, 512 MB job fits well within Lambda's limits, and Lambda runs (and bills) only when images arrive, unlike an always-on EC2 Reserved Instance, ECS container instance, or Elastic Beanstalk environment (A)
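
A minimal boto3 sketch of the chosen answer's wiring (bucket, queue, and function names are placeholders; the SQS queue policy that allows S3 to send messages is omitted):

import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

# Send an event to the queue for every uploaded object.
s3.put_bucket_notification_configuration(
    Bucket="image-uploads",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-jobs",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Lambda polls the queue and processes each image as it arrives.
lam.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:image-jobs",
    FunctionName="process-image",
    BatchSize=10,
)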