ST's Question Bank Flashcards

1
Q

What prevents your EC2 instances from sharing underlying hardware with other AWS customers' instances?

A

Dedicated tenancy

Ensures hardware is dedicated to a single customer.

Especially important for workloads that require high levels of security or compliance.

2
Q

What allows you to reserve EC2 capacity so that it is available when needed, which is especially useful for DR scenarios in another AZ or Region?

A

Capacity reservation
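
A minimal boto3 sketch of creating an On-Demand Capacity Reservation; the region, instance type, AZ, and count are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Reserve capacity for 3 m5.large Linux instances in the AZ you plan to fail over to.
reservation = ec2.create_capacity_reservation(
    InstanceType="m5.large",            # placeholder instance type
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",      # placeholder AZ
    InstanceCount=3,
    EndDateType="unlimited",            # keep the reservation until explicitly cancelled
)
print(reservation["CapacityReservation"]["CapacityReservationId"])
```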

3
Q

What acts as a managed service to create, publish, and secure APIs at scale, allowing the creation of API endpoints that can be integrated with other web applications?

A

Amazon API Gateway

4
Q

What is used to capture and load streaming data into other AWS services, for example storing the data in an Amazon S3 bucket?

A

Amazon Kinesis Data Firehose
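
A minimal boto3 sketch of sending a record to a Firehose delivery stream that has been configured (elsewhere) to deliver to S3; the stream name is a placeholder.

```python
import json
import boto3

firehose = boto3.client("firehose")

# Firehose buffers records and delivers them to the configured destination (assumed to be S3 here).
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # placeholder stream name
    Record={"Data": json.dumps({"event": "click", "page": "/home"}).encode() + b"\n"},
)
```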

5
Q

What provides a way to control access to your APIs using Lambda functions, allowing you to implement custom authorization logic? This solution offers scalability, the ability to handle unpredictable surges in activity, and integration capabilities, and it ensures that the authorization step is performed securely.

A

API Gateway Lambda Authorizer
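
A minimal sketch of a TOKEN-type Lambda authorizer handler; the token check is a placeholder for real validation logic (for example, JWT verification).

```python
def lambda_handler(event, context):
    # API Gateway passes the caller's token and the ARN of the method being invoked.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "valid-token" else "Deny"   # placeholder check

    # Return an IAM policy that API Gateway evaluates before invoking the backend.
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```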

6
Q

what is a common and effective solution for maintaining user session state in a web application, providing high availability and preventing loss of session state during web server outages?

A

Amazon ElastiCache for Redis

Amazon ElastiCache for Redis: Redis is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state. Using ElastiCache for Redis enables centralized storage of session state, so sticky sessions behind an Auto Scaling group can still be maintained even if an EC2 instance becomes unavailable or is replaced by automatic scaling.
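
A minimal sketch of storing session state centrally in ElastiCache for Redis using the redis-py client; the endpoint and TTL are placeholder assumptions.

```python
import json
import redis

# Placeholder ElastiCache primary endpoint; all web servers point at the same store.
r = redis.Redis(host="my-sessions.xxxxxx.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # Sessions expire automatically after the TTL.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```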

7
Q

what is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state.

A

Redis (via Amazon ElastiCache for Redis)

8
Q

What should you use to enable centralized storage of session state, ensuring that sticky sessions can still be maintained even if an EC2 instance is unavailable or replaced due to automatic scaling? It is commonly used with sticky sessions and Auto Scaling groups.

A

Use ElastiCache for Redis

9
Q

How do Application Load Balancer and Amazon API Gateway charges differ?

A

You are charged for each hour or partial hour that an application load balancer is running, and the number of load balancer capacity units (LCUs) used per hour. With Amazon API Gateway, you only pay when your APIs are in use.

10
Q

What are the advantages of using CloudFront?

A

CloudFront for content delivery: CloudFront is used as a content delivery network (CDN) to distribute images globally. This reduces latency and ensures fast access for customers around the world.

Geo Restrictions in CloudFront: CloudFront supports geo restrictions, allowing the company to deny access to users from specific countries. This satisfies the requirement of controlling access based on the user’s location.

CloudFront is a cost-effective solution for content delivery, and can significantly reduce data transfer costs by serving content from edge locations close to end users.

11
Q

What ElastiCache feature allows the creation of replication groups that span multiple Availability Zones (AZs) within a Region, providing high availability at the regional level?

A

Multi-AZ Redis Replication Groups

There are a number of instances where ElastiCache for Redis may need to replace a primary node; these include certain types of planned maintenance and the unlikely event of a primary node or Availability Zone failure. If Multi-AZ is enabled, the downtime is minimized. The role of primary node will automatically fail over to one of the read replicas. There is no need to create and provision a new primary node, because ElastiCache will handle this transparently. This failover and replica promotion ensure that you can resume writing to the new primary as soon as promotion is complete.
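
A minimal boto3 sketch of creating a Multi-AZ Redis replication group with automatic failover (cluster mode disabled, one primary plus two replicas); the identifiers and node type are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="sessions-rg",                 # placeholder ID
    ReplicationGroupDescription="Session store with Multi-AZ failover",
    Engine="redis",
    CacheNodeType="cache.t3.micro",                   # placeholder node type
    NumCacheClusters=3,                               # 1 primary + 2 read replicas
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```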

12
Q

What feature within an ElastiCache setup allows replication groups to contain multiple nodes, providing scalability and redundancy at the node level and contributing to high availability and performance?

A

Shards with Multi-node

A shard (API/CLI: node group) is a collection of one to six Redis nodes. A Redis (cluster mode disabled) cluster will never have more than one shard. With shards, you can separate large databases into smaller, faster, and more easily managed parts called data shards. This can increase database efficiency by distributing operations across multiple separate sections. Using shards can offer many benefits including improved performance, scalability, and cost efficiency.

13
Q

What feature allows EC2 instances to persist their in-memory state to Amazon EBS? When active, an instance can quickly resume with its previous memory state intact.

A

EC2 On-Demand Instances with Hibernation: Hibernation allows EC2 instances to persist their in-memory state to Amazon EBS. When an instance is hibernated, it can quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.
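
A minimal boto3 sketch of launching a hibernation-enabled instance and later hibernating it; the AMI, instance type, and (required) encrypted root volume settings are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch and requires an encrypted root EBS volume.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 30, "Encrypted": True, "VolumeType": "gp3"},
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later: hibernate instead of a normal stop, preserving in-memory state on EBS.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```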

14
Q

What EC2 Auto Scaling feature lets you keep a pool of pre-initialized instances ready to respond quickly when demand increases?

A

EC2 Auto Scaling Warm Pools

Warm pools keep instances in a state where they can respond quickly to increased demand. This helps reduce the time it takes for an instance to become fully productive.
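
A minimal boto3 sketch of attaching a warm pool of pre-initialized (stopped) instances to an existing Auto Scaling group; the group name and sizes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_warm_pool(
    AutoScalingGroupName="web-asg",       # placeholder ASG name
    MinSize=2,                            # keep at least 2 pre-initialized instances ready
    PoolState="Stopped",                  # warmed instances wait in the Stopped state
)
```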

15
Q

What feature in auto scaling adjusts the size of the Auto Scaling group in response to changing demand that is unpredictable or sudden?

A

Dynamic scaling

Allows the Auto Scaling group to automatically increase or decrease the number of instances based on defined policies. This is well suited for handling surges in traffic, as the group scales out or in as needed.
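
A minimal boto3 sketch of a dynamic (target tracking) scaling policy that keeps average CPU near 50%; the group name and target value are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",               # placeholder ASG name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                      # scale out/in to hold ~50% average CPU
    },
)
```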

16
Q

What Auto Scaling feature uses machine learning algorithms to predict future demand and adjust pool size accordingly, but could be overkill for sudden, unpredictable spikes in traffic?

A

Predictive Scaling: While predictive scaling uses machine learning algorithms to predict future demand and adjust pool size accordingly, it could be overkill for sudden, unpredictable spikes in traffic. Dynamic scaling can respond quickly without the need for extensive predictive analysis.

Predictive scaling works by analyzing historical load data to detect daily or weekly patterns in traffic flows. It uses this information to forecast future capacity needs so Amazon EC2 Auto Scaling can proactively increase the capacity of your Auto Scaling group to match the anticipated load.

Predictive scaling is well suited for situations where you have:

Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to over provision capacity.

17
Q

What is Redshift used for?

A

Data warehouses

18
Q

What is the scheduled scaling feature in Lambda used for?

A

Scheduled scaling for provisioned concurrency: ensures that a specified number of function instances are available and hot to handle requests. By configuring scheduled scaling to increase provisioned concurrency ahead of anticipated maximum usage each day, you ensure that there are enough warm instances to handle incoming requests, reducing cold starts and latency.
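
A minimal boto3 sketch of scheduled scaling for Lambda provisioned concurrency via Application Auto Scaling; the function alias, schedule, and capacities are placeholder assumptions.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Provisioned concurrency is scaled on a function alias or version.
resource_id = "function:order-api:prod"            # placeholder function:alias

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=5,
    MaxCapacity=100,
)

# Warm up extra instances before the daily peak (08:00 UTC here).
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="pre-warm-before-peak",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 8 * * ? *)",
    ScalableTargetAction={"MinCapacity": 50},
)
```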

19
Q

What is Athena used for?

A

Amazon Athena to Query Data in Amazon S3: Amazon Athena is a serverless query service that allows analysts to run SQL queries directly on data stored in Amazon S3. It’s cost-effective because you pay per query, and there’s no need to provision or manage infrastructure.

Used to build interactive, advanced analytics applications using data stored in cloud stores (e.g., S3), data lakes, or on premises. Athena provides a simplified, flexible way to analyze petabytes of data where it lives.
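
A minimal boto3 sketch of running a SQL query against data in S3 with Athena; the database, table, and output location are placeholders.

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",  # placeholder SQL
    QueryExecutionContext={"Database": "weblogs"},                            # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},        # placeholder bucket
)
print(resp["QueryExecutionId"])   # poll get_query_execution / get_query_results for the output
```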

20
Q

What is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines?

A

AWS Step Functions

With Step Functions, you can orchestrate large-scale parallel workloads to perform tasks, such as on-demand processing of semi-structured data. These parallel workloads let you concurrently process large-scale data sources stored in Amazon S3.

Use cases:
* Automate extract, transform, and load (ETL) processes:
  Ensure that multiple long-running ETL jobs run in order and complete successfully, without the need for manual orchestration.
* Orchestrate large-scale parallel workloads:
  Iterate over and process large datasets such as security logs, transaction data, or image and video files.
* Orchestrate microservices:
  Combine multiple AWS Lambda functions into responsive serverless applications and microservices.
* Automate security and IT functions:
  Create automated workflows, including manual approval steps, for security incident response.

NOTE: useful for large-scale parallel, on-demand processing of a semi-structured dataset

21
Q

What is the Map state in Distributed mode within Step Functions used for?

A

To set up a large-scale parallel workload in your workflows, include a Map state in Distributed mode. The Map state processes items in a dataset concurrently. A Map state set to Distributed is known as a Distributed Map state. In Distributed mode, the Map state allows high-concurrency processing. In Distributed mode, the Map state processes the items in the dataset in iterations called child workflow executions.

Distributed mode
A processing mode of the Map state. In this mode, each iteration of the Map state runs as a child workflow execution that enables high concurrency. Each child workflow execution has its own execution history, which is separate from the parent workflow’s execution history. This mode supports reading input from large-scale Amazon S3 data sources.
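
A rough sketch (with assumed state names, bucket, and ARNs) of a Distributed Map state definition, expressed as a Python dict and passed to Step Functions as JSON; consult the Amazon States Language reference for the exact schema.

```python
import json
import boto3

# Distributed Map: read objects from S3 and fan out each item to a child workflow execution.
definition = {
    "StartAt": "ProcessObjects",
    "States": {
        "ProcessObjects": {
            "Type": "Map",
            "ItemReader": {                      # read the dataset directly from S3
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "my-logs-bucket", "Prefix": "2024/"},
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "ProcessOne",
                "States": {
                    "ProcessOne": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-object",
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="distributed-map-demo",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",   # placeholder role
)
```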

22
Q

What is the Map state in Inline (the default) mode within Step Functions used for?

A

By default, Map states runs in Inline mode. In Inline mode, the Map state accepts only a JSON array as input. It receives this array from a previous step in the workflow. In this mode, each iteration of the Map state runs in the context of the workflow that contains the Map state. Step Functions adds the execution history of these iterations to the parent workflow’s execution history.

A Map state set to Inline is known as an Inline Map state. Use the Map state in Inline mode if your workflow’s execution history won’t exceed 25,000 entries, or if you don’t require more than 40 concurrent iterations.

23
Q

What is a hybrid cloud storage service that provides seamless, secure integration between on-premises IT environments and AWS storage services?

A

AWS Storage Gateway: AWS Storage Gateway is a hybrid cloud storage service that provides seamless, secure integration between on-premises IT environments and AWS storage services. Supports different gateway configurations, including volume gateways.

Volume Gateway Types:
* Stored Volumes: Entire data sets are stored on-premises, and the entire data set is backed up to Amazon S3.
* Cached volumes: Only frequently accessed data is stored on-premises, while the entire data set is backed up to Amazon S3.

Low latency access with cached volumes: Cached volumes provide low latency access to frequently used data because frequently accessed data is stored locally on premises. The entire dataset is backed up on Amazon S3, ensuring durability and accessibility.

Using a cached volume gateway minimizes the need for significant changes to existing infrastructure. It allows the company to keep frequently accessed data on-premises while taking advantage of the scalability and durability of Amazon S3.

24
Q

What are the differences between AWS Storage Gateway volume gateway with stored volumes and AWS Storage Gateway volume gateway with cached volumes?

A

Volume Gateway Types:
* Stored Volumes: Entire data sets are stored on-premises, and the entire data set is backed up to Amazon S3.
* Cached volumes: Only frequently accessed data is stored on-premises, while the entire data set is backed up to Amazon S3.

Low latency access with cached volumes: Cached volumes provide low latency access to frequently used data because frequently accessed data is stored locally on premises. The entire dataset is backed up on Amazon S3, ensuring durability and accessibility.

Using a cached volume gateway minimizes the need for significant changes to existing infrastructure. It allows the company to keep frequently accessed data on-premises while taking advantage of the scalability and durability of Amazon S3.

AWS Storage Gateway volume gateway with stored volumes: Stored volumes keep the entire data set on-premises and are not best suited for low-latency access to frequently used data. A volume gateway with cached volumes is therefore the more appropriate choice when low-latency access to frequently accessed data is required.

25
Q

T/F: It’s a best practice to deploy AWS Firewall Manager to manage WAF protection for an individual ALB.

A

F

AWS Firewall Manager is best suited for managing security policies at an organizational level rather than specific to individual applications. While AWS Firewall Manager can manage WAF policies, using WAF directly with ALB is a simpler and more common approach.

26
Q

T/F: When it comes to granular access control at the data level (row level or cell level), IAM roles are sufficient to provide the needed security control.

A

F

IAM roles are typically used for authentication and authorization at the AWS service level. However, when it comes to granular access control at the data level (row level or cell level), IAM roles alone might not be enough.

27
Q

What is used in Lake Formation to implement column-level, row-level, and cell-level security?

A

Data Filters

28
Q

What is the recommended design for handling near-real-time streaming data at scale? The aim is to provide the scalability and resilience needed to process large volumes of data.

A

Amazon Kinesis Data Streams, for ingesting data and processing it with AWS Lambda functions

The keyword is “real time”: Kinesis Data Streams is meant for real-time data processing.
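
A minimal boto3 sketch of the producer side: writing records to a Kinesis data stream that a Lambda function (configured elsewhere as an event source) would consume; the stream name is a placeholder.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="orders-stream",                        # placeholder stream name
    Data=json.dumps({"order_id": "123", "total": 42}).encode(),
    PartitionKey="123",                                # controls shard assignment / ordering
)
```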

29
Q

T or F: Amazon EKS offers the option to encrypt Kubernetes secrets at rest using AWS Key Management Service (AWS KMS). This is a native and managed solution within the EKS service, reducing operational overhead. Kubernetes secrets are automatically encrypted using the default AWS KMS key for the EKS cluster. However, to ensure that sensitive information stored in Kubernetes secrets is encrypted, you will need a third party encryption feature.

A

F
Amazon EKS offers the option to encrypt Kubernetes secrets at rest using AWS Key Management Service (AWS KMS). This is a native and managed solution within the EKS service, reducing operational overhead. Kubernetes secrets are automatically encrypted using the default AWS KMS key for the EKS cluster. This ensures that sensitive information stored in Kubernetes secrets is encrypted, providing security.

30
Q

_______________ provides a comprehensive solution for monitoring and analyzing containerized applications, including those running on Amazon Elastic Kubernetes Service (Amazon EKS). It collects performance metrics, logs, and events from EKS clusters and containerized applications, allowing you to gain insight into their performance and health.

A

Amazon CloudWatch Container Insights

31
Q

_______________ is a threat detection service that continuously monitors malicious activity and unauthorized behavior in AWS accounts. It analyzes VPC flow logs, AWS CloudTrail event logs, and DNS logs for potential threats. Its findings can be sent to AWS Security Hub, which acts as a central hub for monitoring security alerts and compliance status across all AWS accounts.

A

Amazon GuardDuty

32
Q

__________ consolidates and prioritizes findings from multiple AWS services, including GuardDuty, and provides a unified view of security alerts. It integrates with third-party security tools and allows the creation of custom actions to remediate security findings. This solution provides continuous monitoring, detection, and reporting of malicious activities in your AWS account, including S3 bucket access patterns.

A

AWS Security Hub

33
Q

________ is a service that focuses on discovering, classifying, and protecting sensitive data.

A

Amazon Macie

34
Q

__________ is a data transfer service that simplifies and accelerates data migration between on-premises storage systems and AWS. By installing this service’s agent in your on-premises data center, you can use its tasks to efficiently transfer data to Amazon EFS. This approach helps minimize downtime and ensure a smooth migration.

A

AWS DataSync

35
Q

_____________ is an optional feature of a backup vault, which can be helpful in giving you additional security and control over your backup vaults. When it is active in Compliance mode and the grace time is over, the vault configuration cannot be altered or deleted by a customer, account/data owner, or AWS. Each vault can have one in place.

A

AWS Backup Vault Lock

36
Q

What is the difference between AWS Backup Vault Lock compliance and governance modes?

A

Governance mode in AWS Backup Vault Lock provides control over who can change or delete recovery points, but is not as strict as compliance mode. Vaults locked in governance mode can have the lock removed by users with sufficient IAM permissions. Governance mode is intended to allow a vault to be managed only by users with sufficient IAM privileges. Governance mode helps an organization meet governance requirements, ensuring only designated personnel can make changes to a backup vault.

Using AWS Backup Vault Lock in compliance mode ensures that data is retained for the specified duration and cannot be deleted by any user. Vaults locked in compliance mode cannot be deleted once the cooling-off period (“grace time”) expires. During grace time, you can still remove the vault lock and change the lock configuration. Compliance mode is intended for backup vaults in which the vault (and by extension, its contents) is expected to never be deleted or altered until the data retention period is complete. Once a vault in compliance mode is locked, it is immutable, meaning the lock cannot be removed.
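
A minimal boto3 sketch of applying a vault lock; as an assumption here, setting ChangeableForDays starts the compliance-mode grace period, while omitting it leaves the lock in governance mode. The vault name and retention values are placeholders.

```python
import boto3

backup = boto3.client("backup")

backup.put_backup_vault_lock_configuration(
    BackupVaultName="prod-vault",     # placeholder vault name
    MinRetentionDays=30,              # recovery points must be kept at least 30 days
    MaxRetentionDays=365,             # and no longer than a year
    ChangeableForDays=3,              # grace period; assumed to make the lock immutable once it expires
)
```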

37
Q

__________ are applied at the root level of an AWS organization to set fine-grained permissions for all accounts in the organization.

A

Service control policies (SCPs)

38
Q

One option in S3 to automatically replicate new objects as they are written to the bucket is to use the live replication feature called __________. It is used to copy objects across Amazon S3 buckets in different AWS Regions.

A

Cross-Region Replication (CRR)

CRR can help you do the following:

  • Meet compliance requirements – Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. To satisfy these requirements, use Cross-Region Replication to replicate data between distant AWS Regions.
  • Minimize latency – If your customers are in two geographic locations, you can minimize latency in accessing objects by maintaining object copies in AWS Regions that are geographically closer to your users.
  • Increase operational efficiency – If you have compute clusters in two different AWS Regions that analyze the same set of objects, you might choose to maintain object copies in those Regions.
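
A minimal boto3 sketch of a Cross-Region Replication rule; it assumes versioning is already enabled on both buckets, and the IAM role ARN and bucket names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",                                        # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",      # placeholder replication role
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                                          # empty filter = all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket-eu"},
        }],
    },
)
```
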
39
Q

When should you use DynamoDB on-demand versus provisioned with auto-scaling?

A

With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and maximum levels of read and write capacity in addition to the target utilization percentage.

It is important to note that DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated or depressed for a sustained period of several minutes.

This means that provisioned capacity (with auto scaling) is probably best for you if you have relatively predictable application traffic and run applications whose traffic is consistent and ramps up or down gradually.

Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic and also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience.
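
A minimal boto3 sketch contrasting the two capacity modes at table creation time; the table names, key schema, and capacity numbers are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand: pay per request, no capacity planning.
dynamodb.create_table(
    TableName="events-ondemand",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned: fixed capacity (optionally paired with auto scaling for sustained shifts).
dynamodb.create_table(
    TableName="events-provisioned",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
)
```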

40
Q

___________ is a service that provides user identity and access control for mobile applications and web applications.

A

Amazon Cognito

41
Q

____________ is a service designed for online data transfer to and from AWS. Deploying its agent on-premises enables efficient and secure transfers over the network. It automatically verifies data integrity, ensuring that Amazon S3 data matches the source

A

AWS DataSync

42
Q

__________ provides organization-wide visibility into object storage usage and activity trends, and makes actionable recommendations to optimize costs and apply data protection best practices. It allows you to report incomplete multipart uploads for cost-compliance purposes.

A

S3 Storage Lens

S3 Storage Lens is specifically designed to provide insight into S3 usage.

43
Q

__________ is a high-performance graph analytics and serverless database for superior scalability and availability?

A

Neptune

Amazon Neptune is a fully managed graph database, and Neptune Streams allows you to capture changes to the database. This option provides a fully managed solution for storing and monitoring database changes, minimizing operational overhead. Both storage and change monitoring are handled by Amazon Neptune.

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Neptune is a purpose-built, high-performance graph database engine that is optimized for storing billions of relationships and querying the graph with millisecond latency.

Amazon Neptune Database is a serverless graph database designed for superior scalability and availability. Neptune Database provides built-in security, continuous backups, and integrations with other AWS services.

44
Q

What feature of Neptune allows you to generate a complete sequence of change-log entries that record every change made to your graph data as it happens?

A

Streams

45
Q

_____________ is a data replication feature designed for ONTAP systems, enabling efficient data replication between primary and secondary systems.

A

NetApp SnapMirror

46
Q

What is Amazon Simple Notification Service (Amazon SNS)?

A

Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. It is designed to make web-scale computing easier for developers. Amazon SNS follows the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates.

Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.

A common pattern is to use SNS to publish messages to Amazon SQS message queues to reliably send messages to one or many system components asynchronously.

47
Q

How is Amazon SNS different from Amazon SQS?

A

Amazon Simple Queue Service (SQS) and Amazon SNS are both messaging services within AWS, which provide different benefits for developers. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates. Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components. Amazon SQS provides flexibility for distributed components of applications to send and receive messages without requiring each component to be concurrently available.

48
Q

What is SQS?

A

Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components.

Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.

49
Q

What is AWS Fargate?

A

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). AWS Fargate makes it easy to focus on building your applications by eliminating the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon ECS and Amazon EKS. AWS Fargate makes it easy to scale and manage cloud applications by shifting as much management of the underlying infrastructure resources as possible to AWS so development teams can focus on writing code that solves business problems. Shifting tasks such as server management, resource allocation, and scaling to AWS not only improves your operational posture, but also accelerates the process of going from idea to production on the cloud and lowers the total cost of ownership (TCO).

50
Q

What is Amazon Aurora?

A

Amazon Aurora is a modern relational database service offering performance and high availability at scale, fully open-source MySQL- and PostgreSQL-compatible editions, and a range of developer tools for building serverless and machine learning (ML)-driven applications.

Aurora features a distributed, fault-tolerant, and self-healing storage system that is decoupled from compute resources and auto-scales up to 128 TiB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon Simple Storage Service (Amazon S3), and replication across three Availability Zones (AZs).

Aurora is also a fully managed service that automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups while providing the security, availability, and reliability of commercial databases at one-tenth of the cost.

51
Q

___________ allow you to protect your AWS resources using cryptographic keys outside of AWS. This advanced feature is designed for regulated workloads that you must protect with encryption keys stored in an external key management system that you control.

A

External key stores

52
Q

For the specific requirements of high availability with automated failover across AWS Regions and minimal latency without relying on IP address caching, ____________ is best suited. It provides global anycast IPs and automated failover, making it well suited for scenarios where low-latency access and high availability are critical.

A

AWS Global Accelerator

53
Q

____________ provides global anycast IP and automated failover, making it well suited for scenarios where low latency access and high availability are critical.

A

AWS Global Accelerator

54
Q

What are the differences between a long-running and a transient EMR cluster?

A

By default, clusters that you create with the console or the AWS CLI are long-running. Long-running clusters continue to run, accept work, and accrue charges until you take action to shut them down.

A long-running cluster is effective in the following situations:

When you need to interactively or automatically query data.
When you need to interact with big data applications hosted on the cluster on an ongoing basis.
When you periodically process a data set so large or so frequently that it is inefficient to launch new clusters and load data each time.
You can also set termination protection on a long-running cluster to avoid shutting down EC2 instances by accident or error.

From the documentation: When you configure a cluster to terminate after step execution (a transient cluster), the cluster starts, runs bootstrap actions, and then runs the steps that you specify. As soon as the last step completes, Amazon EMR terminates the cluster’s Amazon EC2 instances. Clusters that you launch with the Amazon EMR API have step execution enabled by default. Termination after step execution is effective for clusters that perform a periodic processing task, such as a daily data processing run. Step execution also helps you ensure that you are billed only for the time required to process your data.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-longrunning-transient.html

55
Q

_____________ is a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively and efficiently by managing database connections on their behalf. It integrates well with RDS and reduces operating expenses.

A

Amazon RDS Proxy

56
Q

_____________ deployments provide high availability for database instances. Automatically replicates the database to a standby instance in a different Availability Zone, ensuring failover in the event of a primary AZ failure.

A

Amazon RDS Multi-AZ (Availability Zone)

57
Q

T or F: AWS Organizations allows you to create tag policies that define which tags should be applied to resources and what values ​​are allowed. This is an effective way to ensure that only approved app names are used as labels.

A

T

AWS Organizations allows you to create tag policies that define which tags should be applied to resources and what values ​​are allowed. This is an effective way to ensure that only approved app names are used as labels. Therefore, creating a tag policy in organizations with a list of allowed application names, is the most appropriate solution to enforce required tag values.

58
Q

_____________ provides a managed solution for rotating database credentials, including built-in support for Amazon RDS. Enables automatic master user password rotation for RDS for PostgreSQL with minimal operational overhead.

A

AWS Secrets Manager

59
Q

Compare Amazon DynamoDB provisioned mode versus on-demand mode.

A

In provisioned capacity mode, you manually provision read and write capacity units based on your known workload. Because you know the read and write operations needed, you can provision the exact capacity for those specific periods, optimizing costs by not paying for unused capacity at other times.

On-demand mode in DynamoDB automatically adjusts read and write capacity based on actual usage. However, when the workload is known and occurs during specific periods, provisioned mode would probably be more cost-effective.

60
Q

___________ uses machine learning to identify unexpected spending patterns and anomalies in your costs. It can automatically detect unusual expenses and send notifications, making it suitable for detecting unusual spending by monitoring costs and notifying responsible stakeholders when it occurs.

A

AWS Cost Anomaly Detection

61
Q

___________ is a serverless data integration service that makes it easier to discover, prepare, move, and integrate (aka ETL) data from multiple sources for analytics, machine learning (ML), and application development.

A

AWS Glue

Use cases:
* Simplify ETL pipeline development
  Remove infrastructure management with automatic provisioning and worker management, and consolidate all your data integration needs into a single service.
* Interactively explore, experiment on, and process data
  Using AWS Glue interactive sessions, data engineers can interactively explore and prepare data using the integrated development environment (IDE) or notebook of their choice.
* Discover data efficiently
  Quickly identify data across AWS, on premises, and other clouds, and then make it instantly available for querying and transforming.
* Support various processing frameworks and workloads
  More easily support various data processing frameworks, such as ETL and ELT, and various workloads, including batch, micro-batch, and streaming.
62
Q

_________ is used to track user activity and API usage on AWS and in hybrid and multicloud environments.

A

AWS CloudTrail

63
Q

What is used to build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows?

A

Amazon SageMaker

Amazon SageMaker is a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML) for any use case. With SageMaker, you can build, train and deploy ML models at scale using tools like notebooks, debuggers, profilers, pipelines, MLOps, and more – all in one integrated development environment (IDE).

You can build your own FMs, large models that were trained on massive datasets, with purpose-built tools to fine-tune, experiment, retrain, and deploy FMs. SageMaker offers access to hundreds of pretrained models, including publicly available FMs, that you can deploy with just a few clicks.

64
Q

What is the difference between geolocation & geoproximity in Route 53?

A

Geolocation routing policy — Use when you want to route traffic based on the location of users. Geo-proximity routing policy — Use when you want to route traffic based on the location of your resources and optionally switch resource traffic at one location to resources elsewhere.
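
A minimal boto3 sketch of a geolocation record that answers US-based queries with a specific endpoint (only the geolocation policy is shown here); the zone ID, domain, and IP are placeholders.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",           # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "us-users",        # required when using routing policies
                "GeoLocation": {"CountryCode": "US"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```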

65
Q

T / F
AWS CloudTrail provides monitoring and observability for containerized applications. It’s the preferred option to collect metrics from EKS clusters, providing information on resource utilization and application performance.

A

F

AWS CloudTrail focuses more on auditing and logging API calls rather than providing real-time performance monitoring or tracking. While it may be useful for security and compliance, it is not best suited for observability and performance monitoring. QuickSight could be used for visualization, but it is not specifically designed for application performance monitoring.

Amazon CloudWatch Container Insights, however, provides monitoring and observability for containerized applications. It collects metrics from EKS clusters, providing information on resource utilization and application performance.

66
Q

T / F
AWS X-Ray helps track requests as they flow through different microservices, helping to identify bottlenecks and performance issues.

A

T

AWS X-Ray helps track requests as they flow through different microservices, helping to identify bottlenecks and performance issues.

67
Q

T / F
Amazon CloudWatch Container Insights provides monitoring and observability for containerized applications. It collects metrics from EKS clusters, providing information on resource utilization and application performance.

A

T

Amazon CloudWatch Container Insights provides monitoring and observability for containerized applications. It collects metrics from EKS clusters, providing information on resource utilization and application performance.

68
Q

_____________ can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfers of larger objects. This S3 feature can help customers whose web or mobile applications have widespread users or are hosted far away from their S3 bucket, and who therefore experience long and variable upload and download speeds over the Internet.

A

Amazon S3 Transfer Acceleration

S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications. S3TA improves transfer performance by routing traffic through Amazon CloudFront’s globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations.
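
A minimal boto3 sketch of enabling Transfer Acceleration on a bucket and uploading through the accelerate endpoint; the bucket and file names are placeholders.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-global-uploads",                     # placeholder bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload via the accelerate endpoint (routed through CloudFront edge locations).
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-video.mp4", "my-global-uploads", "uploads/big-video.mp4")
```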

69
Q

What service will help you build event-driven applications at scale across AWS, existing systems, or SaaS applications?

A

Amazon EventBridge

EventBridge is a fully managed service that simplifies event-driven architectures and reduces operational complexity by directly invoking event-driven Lambda functions, minimizing the need for additional components and manual management.
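
A minimal boto3 sketch of publishing a custom event that a rule (configured elsewhere) could route to a Lambda function; the source, detail type, and bus name are placeholders.

```python
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[{
        "Source": "myapp.orders",                 # placeholder event source
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"orderId": "123", "total": 42}),
        "EventBusName": "default",
    }]
)
```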

70
Q

________ supports feature flags and dynamic configurations helping software builders quickly and securely adjust application behavior in production environments without full code deployments. Further, it helps speed up software release frequency, improve application resiliency, and address emergent issues more quickly.

A

AppConfig

AWS AppConfig is a capability of AWS Systems Manager.

With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments.

71
Q

___________ helps you to securely encrypt, store, and retrieve credentials for your databases and other services. Instead of hardcoding credentials in your apps, you can make calls to this product to retrieve your credentials whenever needed.

A

AWS Secrets Manager

Secrets Manager helps you protect access to your IT resources and data by enabling you to rotate and manage access to your secrets.
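
A minimal boto3 sketch of retrieving database credentials at runtime instead of hardcoding them; the secret name and JSON keys are placeholders.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

secret = secrets.get_secret_value(SecretId="prod/db-credentials")   # placeholder secret name
creds = json.loads(secret["SecretString"])
# e.g., connect to the database using creds["username"] and creds["password"]
```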

72
Q

AWS simplifies the process of generating, distributing, and rotating digital certificates with _____________, which offers publicly trusted certificates at no cost that can be used in AWS services that require them to terminate TLS connections to the Internet.

A

AWS Certificate Manager (ACM)

ACM also offers the ability to create a private certificate authority to automatically generate, distribute and rotate certificates to secure internal communication among customer-managed infrastructure.

73
Q

What is a benefit of using a Network Load Balancer instead of a Classic Load Balancer?

A

Support for static IP addresses for the load balancer. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

74
Q

What is the difference between Resource-based policies & Identity-based policies?

A
  • Resource-based policies are used to define permissions on resources, such as S3 buckets or Lambda functions, not for IAM user groups.
  • Identity-based policies are attached directly to IAM users, groups, or roles. For example, attaching the “AdministratorAccess” policy directly to the IAM user group ensures that all users within that group inherit the permissions.
75
Q

What messaging service’s queues provide ordered message processing and exactly-once delivery? Using these queues ensures that messages are processed in the order they are received and delivered exactly once.

A

Amazon SQS FIFO (First-In-First-Out)
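
A minimal boto3 sketch of creating a FIFO queue and sending to it; messages sharing a MessageGroupId are delivered in order, and content-based deduplication helps avoid duplicates. Names are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="orders.fifo",                                 # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"orderId": "123", "action": "charge"}',
    MessageGroupId="order-123",                              # ordering is preserved per group
)
```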

76
Q

AWS SNS v. SQS

A

SNS: push-based; many-to-many; app-to-app or app-to-person; does not provide persistence, reliability/retries, or batching

SQS: poll-based; many-to-one; app-to-app but NOT app-to-person; provides persistence, reliability/retries, and batching

The main difference lies in the foundation of the services. SQS is poll-based and SNS is a push-based service. That means SNS is simply forwarding all messages to your subscribed consumers, whereas SQS (poll-based) saves the messages in a queue and waits till they get picked up.

SNS is typically used for applications that need realtime notifications, while SQS is more suited for message processing use cases. SNS does not persist messages - it delivers them to subscribers that are present, and then deletes them. In comparison, SQS can persist messages (from 1 minute to 14 days).

77
Q

CloudFront vs. Global Accelerator

A

CloudFront uses Edge Locations to cache content while Global Accelerator uses Edge Locations to find an optimal pathway to the nearest regional endpoint. CloudFront is designed to handle HTTP protocol meanwhile Global Accelerator is best used for both HTTP and non-HTTP protocols such as TCP and UDP. Amazon CloudFront is Amazon’s Content Delivery Network (CDN). To use this service, customers create a CloudFront distribution, configure their origin (any origin that has a publicly accessible domain name), attach a valid TLS certificate using Amazon Certificate Manager, and then configure their authoritative DNS server to point their web application’s domain name to the distribution’s generated domain name.

AWS Global Accelerator is a networking service that improves an application’s performance and availability for global users. Amazon CloudFront is a cloud distributed networking service for web applications that provides low latency and speed. AWS Global Accelerator is a networking service that improves the performance, reliability and security of your online applications using AWS Global Infrastructure. AWS Global Accelerator can be deployed in front of your Network Load Balancers, Application Load Balancers, AWS EC2 instances, and Elastic IPs, any of which could serve as Regional endpoints for your application.

Typically, customers use CloudFront in use cases such as:

  • Full website delivery. Olx uses CloudFront to deliver their e-commerce brands across their worldwide markets.
  • API protection and acceleration. Slack halved their API response times globally by leveraging CloudFront as a reverse proxy.
  • Adaptive video streaming (VoD/Live). M6 uses CloudFront to improve video playback quality in terms of startup time, bitrate, and buffering rates.
  • Software download. Nordcurrent uses CloudFront to offload their infrastructure thanks to caching, and gamer wait times have been reduced up to 90%.

Example GA use cases include:

  • UDP/TCP based multi-player gaming. JoyCity saw network timeouts dropping by a factor of 8 in some countries thanks to Global Accelerator.
  • Voice and Video over IP. CrazyCall uses Global Accelerator to ensure that their customers get the best quality of service.
  • IoT. BBPOS improved latency by an average of 25% using Global Accelerator’s fixed IPs to ingest data from mobile point-of-sale devices.
  • Video ingest and FTP uploads. FlowPlayer uses AWS Global Accelerator to improve the performance and availability of video ingest for their users around the world.
  • Other use cases include VPN, Git, and AdTech bidding. Customers can also consider Global Accelerator for HTTP workloads such as non-cacheable APIs in specific scenarios.
78
Q

What is the key difference between RDS and DynamoDB?

A

RDS allows you to set up, operate, and scale relational (SQL) databases on AWS, with several instance types available. AWS DynamoDB is a serverless solution that automatically scales tables to adjust for capacity, with no administration required on your part.

79
Q

What is Amazon RDS Proxy?

A

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure.

80
Q

What is DynamoDB Accelerator?

A

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for Amazon DynamoDB and only DynamoDB. DAX delivers up to a 10 times performance improvement—from milliseconds to microseconds—even at millions of requests per second.

DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.

81
Q

Amazon Redshift is a _________.

A

Data Warehouse Solution

82
Q

RDS

A

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Choose from seven popular engines — Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server — and deploy on-premises with Amazon RDS on AWS Outposts.

83
Q

Amazon Aurora

A

Amazon Aurora is a modern relational database service offering performance and high availability at scale, fully open-source MySQL- and PostgreSQL-compatible editions, and a range of developer tools for building serverless and machine learning (ML)-driven applications.

Aurora is also a fully managed service that automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups while providing the security, availability, and reliability of commercial databases at one-tenth of the cost.

84
Q

______________ is a visual data preparation tool that makes it easier for data analysts and data scientists to clean and normalize data to prepare it for analytics and machine learning (ML).

A

AWS Glue DataBrew

You can choose from over 250 prebuilt transformations to automate data preparation tasks, all without the need to write any code. You can automate filtering anomalies, converting data to standard formats and correcting invalid values, and other tasks.

85
Q

______________ handles user authentication and authorization for your web and mobile apps. With user pools, you can easily and securely add sign-up and sign-in functionality to your apps. With identity pools (federated identities), your apps can get temporary credentials that grant users access to specific AWS resources, whether the users are anonymous or are signed in.

A

Amazon Cognito

86
Q

What is Lambda Provisioned concurrency used for?

A

Provisioned concurrency can help maintain low latency by pre-warming Lambda functions.
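
A minimal boto3 sketch of pre-warming a function alias with provisioned concurrency; the function name, alias, and count are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_provisioned_concurrency_config(
    FunctionName="order-api",                 # placeholder function name
    Qualifier="prod",                         # provisioned concurrency applies to an alias or version
    ProvisionedConcurrentExecutions=25,       # instances kept initialized and ready
)
```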

87
Q

What is Lambda reserved concurrency used for?

A

Reserved concurrency helps control costs by limiting the number of concurrent executions.
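
A minimal boto3 sketch of capping a function’s concurrency with reserved concurrency; the function name and limit are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

lambda_client.put_function_concurrency(
    FunctionName="order-api",                 # placeholder function name
    ReservedConcurrentExecutions=50,          # the function can never exceed 50 concurrent executions
)
```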
