AWS Certified Cloud Practitioner Practice Exam (2) Flashcards
(65 cards)
Which of the following EC2 instance purchasing options supports the Bring Your Own License (BYOL) model for almost every BYOL scenario?
1) Dedicated Instances
2) On-demand Instances
3) Reserved Instances
4) Dedicated Hosts
Dedicated Hosts
You have a variety of options for using new and existing Microsoft software licenses on the AWS Cloud. By purchasing Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Relational Database Service (Amazon RDS) license-included instances, you get new, fully compliant Windows Server and SQL Server licenses from AWS. The BYOL model enables AWS customers to use their existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server.
Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances, or EC2 instances with default tenancy using Microsoft License Mobility through Software Assurance. Dedicated Hosts provide additional control over your instances, visibility into host-level resources, and tooling that allows you to manage software that consumes licenses on a per-core or per-socket basis, such as Windows Server and SQL Server. This is why most BYOL scenarios are supported through the use of Dedicated Hosts, while only certain scenarios are supported by Dedicated Instances.
Your company is designing a new application that will store and retrieve photos and videos. Which of the following services should you recommend as the underlying storage mechanism?
1) Amazon S3
2) Amazon SQS
3) Amazon Instance store
4) Amazon EBS
Amazon S3
Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It is a storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
Common use cases of Amazon S3 include:
Media Hosting – Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
Backup and Storage – Provide data backup and storage services for others.
Hosting static websites – Host and manage static websites quickly and easily.
Deliver content globally - Use S3 in conjunction with CloudFront to distribute content globally with low latency.
Hybrid cloud storage - Create a seamless connection between on-premises applications and Amazon S3 with AWS Storage Gateway in order to reduce your data center footprint, and leverage the scale, reliability, and durability of AWS.
Your application has recently experienced significant global growth, and international users are complaining of high latency. What is the AWS characteristic that can help improve your international users’ experience?
1) Elasticity
2) High availability
3) Data durability
4) Global reach
Global reach
With AWS, you can deploy your application in multiple regions around the world. The user will be redirected to the Region that provides the lowest possible latency and the highest performance. You can also use the CloudFront service that uses edge locations (which are located in most of the major cities across the world) to deliver content with low latency and high performance to your global users.
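The "lowest possible latency" routing idea above can be sketched in a few lines. This is an illustration only: the Region names are real, but the latency figures are made up, and real latency-based routing is done by a service such as Amazon Route 53, not by application code.

```python
# Hypothetical measured latencies (ms) from one user to each deployed Region.
latencies_ms = {"us-east-1": 180, "eu-west-1": 95, "ap-southeast-1": 240}

# Latency-based routing directs the user to the lowest-latency Region.
best_region = min(latencies_ms, key=latencies_ms.get)
print(best_region)  # → eu-west-1
```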
Which of the following are important design principles you should adopt when designing systems on AWS? (Choose TWO)
1) Remove single points of failure
2) Always choose to pay as you go
3) Automate wherever possible
4) Always use Global Services in your architecture rather than Regional Services
5) Treat servers as fixed resources
1) Remove single points of failure
3) Automate wherever possible
A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. You can remove single points of failure by assuming everything will fail and designing your architecture to automatically detect and react to failures. For example, configuring and deploying an auto-scaling group of EC2 instances will ensure that if one or more of the instances crashes, Auto-scaling will automatically replace them with new instances. You should also introduce redundancy to remove single points of failure, by deploying your application across multiple Availability Zones. If one Availability Zone goes down for any reason, the other Availability Zones can serve requests.
AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
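The "detect and react to failures" principle above can be sketched as a tiny reconciliation function. This is a conceptual illustration of what EC2 Auto Scaling does for you, not a real AWS API call; the instance IDs and desired-capacity figure are hypothetical.

```python
def reconcile(healthy_instances, desired_capacity):
    """Return how many replacement instances to launch so the group
    returns to its desired capacity after one or more failures."""
    return max(desired_capacity - len(healthy_instances), 0)

# Two of four instances have crashed; two replacements are launched.
print(reconcile(["i-a", "i-b"], desired_capacity=4))  # → 2
```

Auto Scaling runs this kind of loop continuously: it compares actual healthy capacity against desired capacity and launches or terminates instances to close the gap.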
AWS has created a large number of Edge Locations as part of its Global Infrastructure. Which of the following is NOT a benefit of using Edge Locations?
1) Edge locations are used by CloudFront to distribute content to global users with low latency
2) Edge locations are used by CloudFront to cache the most recent responses
3) Edge locations are used by CloudFront to improve your end users’ experience when uploading files
4) Edge locations are used by CloudFront to distribute traffic across multiple instances to reduce latency
Edge locations are used by CloudFront to distribute traffic across multiple instances to reduce latency
AWS Edge Locations are not used to distribute traffic. Edge Locations are used in conjunction with the CloudFront service to cache common responses and deliver content to end-users with low latency.
With Amazon CloudFront, your users can also benefit from accelerated content uploads. As the data arrives at an edge location, data is routed to AWS storage services over an optimized network path.
The AWS service that is used to distribute load is the AWS Elastic Load Balancing (ELB) service.
Using Amazon RDS falls under the shared responsibility model. Which of the following are customer responsibilities? (Choose TWO)
1) Building the relational database schema
2) Managing the database settings
3) Installing the database software
4) Performing backups
5) Patching the database software
1) Building the relational database schema
2) Managing the database settings
Amazon RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. Since Amazon RDS provides native database access, you interact with the relational database software as you normally would. This means you’re still responsible for managing the database settings that are specific to your application. You’ll need to build the relational schema that best fits your use case and are responsible for any performance tuning to optimize your database for your application’s workflow.
What are the connectivity options that can be used to build hybrid cloud architectures? (Choose TWO)
1) AWS Cloud9
2) AWS VPN
3) AWS CloudTrail
4) AWS Artifact
5) AWS Direct Connect
2) AWS VPN
5) AWS Direct Connect
In cloud computing, hybrid cloud refers to the use of both on-premises resources in addition to public cloud resources. A hybrid cloud enables an organization to migrate applications and data to the cloud, extend their datacenter capacity, utilize new cloud-native capabilities, move applications closer to customers, and create a backup and disaster recovery solution with cost-effective high availability. By working closely with enterprises, AWS has developed the industry’s broadest set of hybrid capabilities across storage, networking, security, application deployment, and management tools to make it easy for you to integrate the cloud as a seamless and secure extension of your existing investments.
AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to AWS. AWS Client VPN enables you to securely connect users (from any location) to AWS or on-premises networks. VPN connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability of Internet-based connectivity. AWS Direct Connect, by contrast, does not involve the Internet; instead, it uses dedicated, private network connections between your on-premises network or branch office site and Amazon VPC. Using AWS Direct Connect, data that would have previously been transported over the Internet is delivered through a private network connection between AWS and your datacenter or corporate network. Companies of all sizes use AWS Direct Connect to establish private connectivity between AWS and datacenters, offices, or colocation environments. Compared to AWS VPN (an Internet-based connection), AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience.
Additional information:
Besides the connectivity options that AWS provides, AWS provides many features to support building more efficient hybrid cloud architectures. For example, AWS Identity and Access Management (IAM) can grant your employees and applications access to the AWS Management Console and AWS service APIs using your existing corporate identity systems. AWS IAM supports federation from corporate systems like Microsoft Active Directory, as well as external Web Identity Providers like Google and Facebook.
Which of the following AWS services is designed with native Multi-AZ fault tolerance in mind? (Choose TWO)
1) Amazon Simple Storage Service
2) Amazon EBS
3) Amazon EC2
4) Amazon DynamoDB
5) AWS Snowball
1) Amazon Simple Storage Service
4) Amazon DynamoDB
The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage.
Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA and S3 Express One Zone).
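The "eleven nines" durability figure above is easier to grasp as arithmetic: it is a design target implying an average annual loss of roughly one object per ten billion stored.

```python
# S3's durability design target, written out as a probability.
durability = 0.99999999999          # 99.999999999% ("eleven nines")
objects_stored = 10_000_000_000     # ten billion objects

# Expected objects lost per year at this durability level.
expected_losses_per_year = objects_stored * (1 - durability)
print(expected_losses_per_year)  # → roughly 0.1 objects per year
```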
Jessica is managing an e-commerce web application in AWS. The application is hosted on six EC2 instances. One day, three of the instances crashed; but none of her customers were affected. What has Jessica done correctly in this scenario?
1) She has properly built a scalable system
2) She has properly built an encrypted system
3) She has properly built an elastic system
4) She has properly built a fault tolerant system
She has properly built a fault tolerant system
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some (one or more faults within) of its components. Visitors to a website expect the website to be available irrespective of when they visit. For example, when someone wants to visit Jessica's website to purchase a product, whether at 9:00 AM on a Monday or 3:00 PM on a holiday, they expect that the website will be available and ready to accept their purchase. Failing to meet these expectations can cause loss of business and contribute to a negative reputation for the website owner, resulting in lost revenue.
What is the AWS service that provides you the highest level of control over the underlying virtual infrastructure?
1) Amazon RDS
2) Amazon EC2
3) Amazon DynamoDB
4) Amazon Redshift
Amazon EC2
Amazon EC2 provides you the highest level of control over your virtual instances, including root access and the ability to interact with them as you would any machine.
Amazon S3 Glacier Flexible Retrieval is an Amazon S3 storage class that is suitable for storing ____________ & ______________. (Choose TWO)
1) Long-term analytic data
2) Cached data
3) Active archives
4) Active databases
5) Dynamic websites’ assets
1) Long-term analytic data
3) Active archives
Amazon S3 Glacier Flexible Retrieval provides low-cost storage for archive data that is accessed once or twice per year, such as long-term analytic data and active archives, with retrieval options that range from minutes to hours. It is not suited for frequently accessed data such as active databases, caches, or dynamic website assets.
Which AWS Service can be used to establish a dedicated, private network connection between AWS and your datacenter?
1) AWS Snowball
2) AWS Direct Connect
3) Amazon CloudFront
4) Amazon Route 53
AWS Direct Connect
AWS Direct Connect is used to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or co-location environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
Which of the following should be considered when performing a TCO analysis to compare the costs of running an application on AWS instead of on-premises?
1) Physical hardware
2) Application development
3) Market research
4) Business analysis
Physical hardware
Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare costs of a product or a service. When comparing AWS with on-premises TCO, customers should consider all costs of owning and operating a data center. Examples of these costs include facilities, physical servers, storage devices, networking equipment, cooling and power consumption, data center space, and IT labor costs.
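A TCO comparison like the one described above boils down to summing the cost categories on each side. The sketch below is purely illustrative: every dollar figure and category weight is made up, and a real TCO analysis would use your own measured costs over a defined term.

```python
# Hypothetical 3-year cost figures (USD) — for illustration only.
on_prem = {
    "physical_servers": 120_000,
    "storage_devices": 40_000,
    "networking_equipment": 25_000,
    "power_and_cooling": 30_000,
    "datacenter_space": 50_000,
    "it_labor": 150_000,
}
aws = {
    "compute": 90_000,
    "storage": 20_000,
    "data_transfer": 10_000,
    "support": 15_000,
}

on_prem_tco = sum(on_prem.values())
aws_tco = sum(aws.values())
print(on_prem_tco, aws_tco)  # → 415000 135000
```

Note which categories appear only on the on-premises side (physical hardware, space, power, cooling): those are exactly the costs a TCO analysis must capture, which is why "physical hardware" is the correct answer above.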
Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?
1) The ability to monitor systems and improve supporting processes and procedures
2) The efficient use of computing resources to meet requirements
3) The ability to manage datacenter operations more efficiently
4) The ability of a system to recover gracefully from failure
The ability to monitor systems and improve supporting processes and procedures
The 6 Pillars of the AWS Well-Architected Framework:
1- Operational Excellence: The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
2- Security: The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
3- Reliability: The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
4- Performance Efficiency: The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.
5- Cost Optimization: The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or sub-optimal resources.
6- Sustainability: The discipline of sustainability addresses the long-term environmental, economic, and societal impact of your business activities. Your business or organization can have negative environmental impacts like direct or indirect carbon emissions, unrecyclable waste, and damage to shared resources like clean water. When building cloud workloads, the practice of sustainability is understanding the impacts of the services used, quantifying impacts through the entire workload lifecycle, and applying design principles and best practices to reduce these impacts.
Additional information:
Creating a software system is a lot like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. When architecting technology solutions on Amazon Web Services (AWS), if you neglect the six pillars of operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, it can become challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture helps produce stable and efficient systems. This allows you to focus on the other aspects of design, such as functional requirements. The AWS Well-Architected Framework helps cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications.
A company is developing a new application using a microservices framework. The new application is having performance and latency issues. Which AWS Service should be used to troubleshoot these issues?
1) AWS CloudTrail
2) Amazon Inspector
3) AWS CodePipeline
4) AWS X-Ray
AWS X-Ray
AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
In your on-premises environment, you can create as many virtual servers as you need from a single template. What can you use to perform the same in AWS?
1) IAM
2) EBS Snapshot
3) AMI
4) An internet gateway
AMI
An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). This pre-configured template saves time and avoids errors when configuring settings to create new instances. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
Which statements are correct regarding AWS service limits? (Choose TWO)
1) You can contact AWS support to increase the service limits
2) The Amazon Simple Email Service is responsible for sending email notifications when usage approaches a service limit
3) There are no service limits on AWS
4) Each IAM user has the same service limits
5) You can use the AWS Trusted Advisor to monitor your service limits
1) You can contact AWS support to increase the service limits
5) You can use the AWS Trusted Advisor to monitor your service limits
Service limits, also referred to as Service quotas, are the maximum number of service resources or operations that apply to an AWS account. Understanding your service limits (and how close you are to them) is an important part of managing your AWS deployments – continuous monitoring allows you to request limit increases or shut down resources before the limit is reached. One of the easiest ways to do this is via AWS Trusted Advisor’s Service Limit Dashboard.
AWS maintains service limits (quotas) for each account to help guarantee the availability of AWS resources, as well as to minimize billing risks for new customers. Some service quotas are raised automatically over time as you use AWS, though most AWS services require that you request quota increases manually. You can request a quota increase using the Service Quotas console or the AWS CLI. AWS Support might approve, deny, or partially approve your requests.
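The monitoring idea behind Trusted Advisor's Service Limit Dashboard can be sketched as a simple threshold check. The quota names, numbers, and 80% warning threshold below are all hypothetical; Trusted Advisor performs this comparison against your real account usage.

```python
def quotas_near_limit(usage, limits, threshold=0.8):
    """Return the quotas whose current usage is at or above
    threshold * limit, i.e. those worth requesting an increase for."""
    return [name for name, used in usage.items()
            if used >= threshold * limits[name]]

# Hypothetical account usage vs. quotas.
usage = {"ec2_on_demand_vcpus": 58, "vpcs_per_region": 3}
limits = {"ec2_on_demand_vcpus": 64, "vpcs_per_region": 5}
print(quotas_near_limit(usage, limits))  # → ['ec2_on_demand_vcpus']
```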
Which of the following are perspectives of the AWS Cloud Adoption Framework (AWS CAF)? (Choose TWO)
1) Sustainability
2) Governance
3) Operational Excellence
4) People
5) Performance Efficiency
2) Governance
4) People
The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations. These capabilities provide best practice guidance that helps you improve your cloud readiness. AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations. Each perspective comprises a set of capabilities that functionally related stakeholders own or manage in the cloud transformation journey.
AWS CAF perspectives: (IMPORTANT)
Business perspective helps ensure that your cloud investments accelerate your digital transformation ambitions and business outcomes.
People perspective serves as a bridge between technology and business, accelerating the cloud journey to help organizations more rapidly evolve to a culture of continuous growth and learning, where change becomes business-as-usual. It focuses on culture, organizational structure, leadership, and workforce.
Governance perspective helps you orchestrate your cloud initiatives while maximizing organizational benefits and minimizing transformation-related risks.
Platform perspective helps you build an enterprise-grade, scalable, hybrid cloud platform, modernize existing workloads, and implement new cloud-native solutions.
Security perspective helps you achieve the confidentiality, integrity, and availability of your data and cloud workloads.
Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your business.
What is the AWS tool that enables you to use scripts to manage all AWS services and resources?
1) AWS Service Catalog
2) AWS Console
3) Amazon FSx
4) AWS CLI
AWS CLI
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
An organization runs many systems and uses many AWS products. Which of the following services enables them to control how each developer interacts with these products?
1) AWS Identity and Access Management
2) Amazon EMR
3) Network Access Control Lists
4) Amazon RDS
AWS Identity and Access Management
AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.
An organization needs to analyze and process a large number of data sets. Which AWS service should they use?
1) Amazon SNS
2) Amazon SQS
3) Amazon MQ
4) Amazon EMR
Amazon EMR
Amazon EMR (Amazon Elastic MapReduce) is a managed service that helps you analyze and process large volumes of data by distributing computational tasks across a cluster of virtual servers in the AWS Cloud. Amazon EMR supports a range of big data frameworks, including Apache Spark, Apache Hive, and Presto, enabling you to perform large-scale data processing, analytics, and machine learning. Amazon EMR is designed to minimize the complexity of setup, management, and tuning for these frameworks, allowing you to focus on data analysis rather than infrastructure.
Which of the following activities may help reduce your AWS monthly costs? (Choose TWO)
1) Removing all of your Cost Allocation Tags
2) Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers
3) Deploying your AWS resources across multiple Availability Zones
4) Enabling Amazon EC2 Auto Scaling for all of your workloads
5) Using the AWS Network Load Balancer (NLB) to load balance the incoming HTTP requests
2) Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers
4) Enabling Amazon EC2 Auto Scaling for all of your workloads
Amazon EC2 Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost. When demand drops, Amazon EC2 Auto Scaling will automatically remove any excess capacity so you avoid overspending. When demand increases, Amazon EC2 Auto Scaling will automatically add capacity to maintain performance.
For Amazon S3 and Amazon EFS, you can create a lifecycle policy to automatically move infrequently accessed data to less expensive storage tiers. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers such as Amazon Glacier, or to automatically delete them after a specified duration. Similarly, you can create an Amazon EFS lifecycle policy to automatically move less frequently accessed data to less expensive storage tiers such as Amazon EFS Standard-Infrequent Access (EFS Standard-IA) and Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA). Amazon EFS Infrequent Access storage classes provide price/performance that is cost-optimized for files not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard (EFS Standard) and Amazon EFS One Zone (EFS One Zone) storage classes respectively.
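A back-of-the-envelope version of the lifecycle savings described above: the per-GB price below is a hypothetical figure, not a current AWS list price, and the 92% reduction is the "up to" ceiling quoted for EFS Infrequent Access.

```python
# Hypothetical storage footprint and prices — for illustration only.
gb_stored = 10_000
standard_price = 0.023                    # $/GB-month (illustrative)
ia_price = standard_price * (1 - 0.92)    # "up to 92% lower" IA tier

monthly_cost_standard = gb_stored * standard_price
monthly_cost_ia = gb_stored * ia_price
monthly_savings = round(monthly_cost_standard - monthly_cost_ia, 2)
print(monthly_savings)  # → 211.6
```

The point of a lifecycle policy is that this transition happens automatically once data ages past your access threshold, so the savings accrue without manual intervention.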
Savings Plans are available for which of the following AWS compute services? (Choose TWO)
1) AWS Lambda
2) Amazon EC2
3) AWS Outposts
4) AWS Batch
5) Amazon Lightsail
1) AWS Lambda
2) Amazon EC2
Savings Plans are a flexible pricing model that offers low prices on Amazon EC2, Lambda, Fargate, and Amazon SageMaker usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1- or 3-year term. When you sign up for Savings Plans, you will be charged the discounted Savings Plans price for your usage up to your commitment. For example, if you commit to $10 of compute usage an hour, you will get the Savings Plans prices on that usage up to $10, and any usage beyond the commitment will be charged at On-Demand rates.
Additional information:
What is the difference between Amazon EC2 Savings Plans and Amazon EC2 Reserved instances?
Reserved Instances are a billing discount applied to the use of On-Demand Compute Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region to benefit from the billing discount.
For example, let's say you have a t2.medium instance running as an On-Demand Instance, and you purchase a Reserved Instance that matches the configuration of this particular t2.medium instance. At the time of purchase, the billing mode for the existing instance changes to the Reserved Instance discounted rate. The existing t2.medium instance doesn't need replacing or migrating to get the discount.
After the reservation expires, the instance is charged as an On-Demand Instance. You can repurchase the Reserved Instance to continue the discounted rate on your instance. Reserved Instances act as an automatic discount on new or existing On-Demand Instances in your account.
Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This provides you with the flexibility to use the instance configurations that best meet your needs, instead of making a commitment to a specific instance configuration (as is the case with Reserved Instances). For example, with Compute Savings Plans, if you commit to $10 of compute usage an hour, you can use as many instances as you need (of any type and in any Region) and you will get the Savings Plans prices on that usage up to $10, and any usage beyond the commitment will be charged at On-Demand rates.
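The $10/hour example above can be sketched as a billing function. Two assumptions are baked in and clearly hypothetical: the On-Demand "multiplier" of 1.4 (i.e. On-Demand costing 40% more than the Savings Plans rate) is invented for illustration, and real Savings Plans pricing varies by service, instance family, and Region.

```python
def hourly_bill(sp_rate_usage, commitment=10.0, on_demand_multiplier=1.4):
    """sp_rate_usage: this hour's usage priced at discounted Savings
    Plans rates. The commitment is charged every hour whether fully
    used or not; usage beyond it is billed at (higher) On-Demand rates."""
    overflow = max(sp_rate_usage - commitment, 0.0)
    return commitment + overflow * on_demand_multiplier

print(hourly_bill(12.0))  # $10 covered + $2 overflow at On-Demand rates
print(hourly_bill(6.0))   # under-used hour still costs the $10 commitment
```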
What is the primary storage service used by Amazon RDS database instances?
1) Amazon S3
2) AWS Storage Gateway
3) Amazon EBS
4) Amazon FSx
Amazon EBS
DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, IBM Db2, and Microsoft SQL Server use Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage.
Additional information:
EBS volumes are performant for your most demanding workloads, including mission-critical applications such as SAP, Oracle, and Microsoft products. Amazon EBS scales with your performance needs, whether you are supporting millions of gaming customers or billions of e-commerce transactions. A broad range of workloads, such as relational databases (including Amazon RDS databases) and non-relational databases (including Cassandra and MongoDB), enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.