Chapter 3 - Storage: Amazon Simple Storage Service (S3), EBS, EFS, Storage Gateway, Snowball, FSx, DataSync Flashcards

1
Q

What are some of the key characteristics of Amazon S3? Choose 3.

  1. Data is stored as objects within resources called “buckets”
  2. With S3 Versioning, you can easily preserve, retrieve, and restore every version of an object stored in Amazon S3
  3. With S3 Cross-Region Replication (CRR), you can replicate objects (and their respective metadata and object tags) into other AWS Regions
  4. S3 can be attached to an EC2 instance to provide block storage
A
  1. Data is stored as objects within resources called “buckets”
  2. With S3 Versioning, you can easily preserve, retrieve, and restore every version of an object stored in Amazon S3
  3. With S3 Cross-Region Replication (CRR), you can replicate objects (and their respective metadata and object tags) into other AWS Regions
2
Q

What are storage classes provided by S3? Choose 3.

  1. Standard, Intelligent-Tiering, Standard-Infrequent Access
  2. One Zone-Infrequent Access (One Zone-IA), Glacier (S3 Glacier)
  3. Glacier Deep Archive
  4. Elastic File System and Elastic Block Storage
  5. Storage Gateway
A
  1. Standard, Intelligent-Tiering, Standard-Infrequent Access
  2. One Zone-Infrequent Access (One Zone-IA), Glacier (S3 Glacier)
  3. Glacier Deep Archive
3
Q

Which of the following are use cases for using Amazon Simple Storage Service (Amazon S3)? Choose 4.

  1. File System - mounted to EC2 instance
  2. For Backup and Storage
  3. To Provide application hosting services that deploy, install, and manage web applications
  4. To build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
  5. To host software applications that customers can download
A
  2. For Backup and Storage
  3. To provide application hosting services that deploy, install, and manage web applications
  4. To build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
  5. To host software applications that customers can download
4
Q

A construction engineering company wants to leverage cloud storage to store their large architectural blueprint files, which are saved as PDFs in a network shared folder used by a collaboration application between project team members. Blueprint files for active projects must be quickly accessible, while files pertaining to completed projects are accessed infrequently. What is the best cloud storage solution that will work with the existing application?

  1. Store latest project files in s3-Standard and store files more than one month old S3-IA. Create a life cycle policy to move files accordingly.
  2. Install an AWS Storage volume gateway in cached mode.
  3. Install an AWS Storage volume gateway in stored mode.
  4. Install AWS Storage File Gateway.
A
  4. Install AWS Storage File Gateway.

(File Gateway exposes S3-backed storage over standard NFS/SMB file shares, so the existing file-share application keeps working, and recently used files are cached locally for fast access.)
5
Q

Which of the following statements are correct for the Amazon S3 data consistency model?

  1. A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.
  2. A process replaces an existing object and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might return the prior data.
  3. A process deletes an existing object and immediately attempts to read it. Until the deletion is fully propagated, Amazon S3 might return the deleted data.
  4. A process deletes an existing object and immediately lists keys within its bucket. Until the deletion is fully propagated, Amazon S3 might list the deleted object.
  5. All of the above
A
  5. All of the above

(These statements describe S3’s legacy eventual-consistency model; since December 2020, S3 delivers strong read-after-write consistency for PUTs and DELETEs.)
6
Q

Jason creates an S3 bucket ‘mywestwebsite’ in the ‘us-west-1’ Region. Which of these are correct URLs to access this bucket? Choose 3.

  1. https://amazonaws. s3.us-west-1.com/mywestwebsite
  2. https://s3.us-west-1.amazonaws.com/mywestwebsite
  3. https://s3.amazonaws.com/mywestwebsite
  4. https://mywestwebsite.s3.amazonaws.com
  5. https://mywestwebsite.s3.us-west-1.amazonaws.com
A
  2. https://s3.us-west-1.amazonaws.com/mywestwebsite
  4. https://mywestwebsite.s3.amazonaws.com
  5. https://mywestwebsite.s3.us-west-1.amazonaws.com
7
Q

Jason creates an S3 bucket ‘myeastwebsite’ in the ‘us-east-1’ Region. Which of these are correct URLs to access this bucket? Choose 4.

  1. https://amazonaws.s3.us-east-1.com/myeastwebsite
  2. https://s3.us-east-1.amazonaws.com/myeastwebsite
  3. https://s3.amazonaws.com/myeastwebsite
  4. https://myeastwebsite.s3.amazonaws.com
  5. https://myeastwebsite.s3.us-east-1.amazonaws.com
A
  2. https://s3.us-east-1.amazonaws.com/myeastwebsite
  3. https://s3.amazonaws.com/myeastwebsite
  4. https://myeastwebsite.s3.amazonaws.com
  5. https://myeastwebsite.s3.us-east-1.amazonaws.com

(The legacy global endpoint s3.amazonaws.com, option 3, resolves to us-east-1, which is why it works here but was not a correct choice for the us-west-1 bucket in the previous card.)
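Both URL styles can be exercised from the SDK. A minimal Python (boto3) sketch, with a hypothetical bucket name, showing how the client’s addressing style controls whether virtual-hosted-style or path-style URLs are generated (the presigned URLs printed below carry an extra signature query string):

```python
import boto3
from botocore.config import Config

params = {"Bucket": "myeastwebsite", "Key": "index.html"}  # hypothetical names

# Virtual-hosted style: https://myeastwebsite.s3.amazonaws.com/index.html?...
virtual = boto3.client("s3", region_name="us-east-1",
                       config=Config(s3={"addressing_style": "virtual"}))
print(virtual.generate_presigned_url("get_object", Params=params))

# Path style: https://s3.us-east-1.amazonaws.com/myeastwebsite/index.html?...
path = boto3.client("s3", region_name="us-east-1",
                    config=Config(s3={"addressing_style": "path"}))
print(path.generate_presigned_url("get_object", Params=params))
```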
8
Q

Based on the following Amazon S3 URL of an object, which of the following statements are correct? Choose 2.

https://mywestwebsite.s3.amazonaws.com/photos/whale.jpg

  1. “whale.jpg” is stored in the folder “photos” inside the bucket “mywestwebsite”.
  2. The key of the object will be “photos/whale.jpg”
  3. The key of the object will be “whale.jpg”
  4. The object “whale.jpg” is stored in the main bucket folder “mywestwebsite”
A
  1. “whale.jpg” is stored in the folder “photos” inside the bucket “mywestwebsite”.
  2. The key of the object will be “photos/whale.jpg”
9
Q

John Smith has an object saved in S3 with attribute value “color=violet”. He updates the object with the attribute value “color=red”. He GETs the object after 2 seconds and reads the attribute value of color. What will be the value?

  1. The value will be “violet”
  2. The value will be “red”
  3. The value can be either “violet” or “red”
  4. He will get a 404 object not found error.
A
  3. The value can be either “violet” or “red”

(Under the legacy consistency model, overwrite PUTs were eventually consistent, so a read shortly after the update could return either the old or the new value.)
10
Q

Agrim uses S3 to store all his personal photos. He has a bucket named “personalgallery” in the us-east-1 Region. After he comes back from a vacation in Alaska, he uploads all his camera snaps into his laptop desktop folder “alaskaphotos”. The photos have file names photo1.jpg, photo2.jpg etc. He logs into his AWS account and opens the S3 console. He then drags the desktop folder “alaskaphotos” into the “personalgallery” bucket to upload the files. Which of the following is correct? Choose 2.

  1. All the snap files photo1.jpg, photo2.jpg etc. will be visible in the S3 console inside the main bucket folder “personalgallery”
  2. All the snap files photo1.jpg, photo2.jpg etc. will be visible in the S3 console inside another folder “alaskaphotos” under the main bucket folder “personalgallery”
  3. The key name of the photo files will be “photo1.jpg”, “photo2.jpg” etc.
  4. The key name of the photo files will be “alaskaphotos/photo1.jpg”, “alaskaphotos/photo2.jpg” etc.
A
  2. All the snap files photo1.jpg, photo2.jpg etc. will be visible in the S3 console inside another folder “alaskaphotos” under the main bucket folder “personalgallery”
  4. The key name of the photo files will be “alaskaphotos/photo1.jpg”, “alaskaphotos/photo2.jpg” etc.
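For reference, a minimal boto3 sketch (bucket and folder names taken from the card) showing that console “folders” are really just key prefixes, which you can list with Prefix and Delimiter:

```python
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="personalgallery",
    Prefix="alaskaphotos/",  # the "folder" created by the console drag-and-drop
    Delimiter="/",
)
for obj in resp.get("Contents", []):
    print(obj["Key"])        # e.g. alaskaphotos/photo1.jpg (no leading slash)
```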
11
Q

John hosts his personal blog as a static website on S3. The bucket he uses to store his website files is ‘west-bucket’ in the ‘us-west-2’ Region. The files are uploaded under the main bucket folder using the S3 console. What is the URL of John’s static website?

  1. http://s3-us-west-2.amazonaws.com/west-bucket
  2. http://west-bucket.s3-us-west-2.amazonaws.com/
  3. http://west-bucket.s3-website-us-west-2.amazonaws.com/
  4. http://s3-website-us-west-2.amazonaws.com/west-bucket
A
  3. http://west-bucket.s3-website-us-west-2.amazonaws.com/
12
Q

James hosts his personal blog as a static website on S3. The bucket he uses to store his website files is ‘eu-bucket’ in the ‘eu-central-1’ Region. The files are uploaded under the main bucket folder using the S3 console. What will be the URL of James’ static website?

  1. http://s3-eu-central-1.amazonaws.com/eu-bucket
  2. http://eu-bucket.s3-website.eu-central-1.amazonaws.com/
  3. http://eu-bucket.s3-website-eu-central-1.amazonaws.com/
  4. http://s3-website-eu-central-1.amazonaws.com/eu-bucket
A
  2. http://eu-bucket.s3-website.eu-central-1.amazonaws.com/

(Newer Regions such as eu-central-1 use the dot-style website endpoint, s3-website.region; older Regions such as us-west-2 use the dash style, s3-website-region.)
13
Q

You are an architect who has been tasked to build a static website using S3. What are the essential prerequisite steps? Choose 2.

  1. Register a custom domain name in Route 53.
  2. Configure the bucket’s property for static website hosting with an index, error file and redirection rule.
  3. Enable HTTP on the bucket.
  4. Ensure that bucket and its objects must have public read access.
A
  2. Configure the bucket’s property for static website hosting with an index, error file and redirection rule.
  4. Ensure that the bucket and its objects have public read access.
14
Q

Which S3 storage class is not designed to be resilient to simultaneous complete data loss in a single Availability Zone and partial loss in another Availability Zone?

  1. STANDARD_IA
  2. ONEZONE_IA
  3. INTELLIGENT_TIERING
  4. DEEP_ARCHIVE
A
  2. ONEZONE_IA
15
Q

Which S3 storage classes are designed for long-lived and infrequently accessed data? Choose 2.

  1. STANDARD_IA
  2. ONEZONE_IA
  3. GLACIER
  4. DEEP_ARCHIVE
A
  1. STANDARD_IA
  2. ONEZONE_IA
16
Q

The GLACIER and DEEP_ARCHIVE storage classes offer the same durability and resiliency as the STANDARD storage class.

  1. True
  2. False
A
  1. True
17
Q

What are the benefits of AWS Storage Gateway? Choose 3.

  1. Hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage.
  2. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration.
  3. On-premises solution for enhancing your company’s data center storage capability without connecting to AWS cloud storage.
  4. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB and iSCSI.
A
  1. Hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage.
  2. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration.
  4. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB and iSCSI.
18
Q

What are the three storage interfaces supported by AWS Storage Gateway?

  1. File Gateway
  2. Volume Gateway
  3. Tape Gateway
  4. Network Gateway
A
  1. File Gateway
  2. Volume Gateway
  3. Tape Gateway
19
Q

What is the minimum file size that can be stored in S3?

  1. 1 Byte
  2. 0 Byte
  3. 1 KB
  4. 1 MB
A
  2. 0 Byte
20
Q

What is the largest object size that can be uploaded to S3 in a single PUT?

  1. 5GB
  2. 5TB
  3. 5MB
  4. 5KB
A
  1. 5GB
21
Q

What is the maximum file size that can be stored on S3?

  1. 5GB
  2. 5TB
  3. 5MB
  4. 5KB
A
  2. 5TB
22
Q

A law firm has an internal tablet/mobile application used by employees to download large Word documents to their devices for offline review. These documents range in size from 10-20 MB. The employees save a document in local device storage, edit it in offline mode, and then use a feature in the app to upload the file to cloud storage. Most of the time users are expected to be in areas of high mobile bandwidth (LTE or Wi-Fi), but at times they may be in an area with a slow network (EDGE or 3G) with lots of fluctuation. The files are stored in AWS S3 buckets. What approach should the architect recommend for file upload in the application?

  1. Use Single PUT operation to upload the files to S3
  2. Use Multipart upload to upload the files to S3
  3. Use Amazon S3 Transfer Acceleration to upload the files
  4. Use Single POST operation to upload the files to S3
A
  2. Use Multipart upload to upload the files to S3

(Multipart upload can pause, resume, and retry individual parts, which suits a spotty mobile network.)
23
Q

What are the recommended scenarios to use multipart uploading to S3? Choose 2.

  1. If you’re uploading any size objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
  2. If you’re uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
  3. If you’re uploading over a stable high-bandwidth network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
  4. If you’re uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
A
  2. If you’re uploading large objects over a stable high-bandwidth network, use multipart uploading to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
  4. If you’re uploading over a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload restarts.
24
Q

What is S3 transfer acceleration? Choose 2.

  1. Enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket.
  2. Enables fast, easy, and secure transfers of files over short distances between your client and your Amazon S3 bucket.
  3. Leverages Amazon CloudFront’s globally distributed AWS Edge Locations.
  4. Leverages Amazon CloudFront’s regionally distributed AWS Edge Locations.
A
  1. Enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket.
  3. Leverages Amazon CloudFront’s globally distributed AWS Edge Locations.
25
Q

You have designed an intranet web application for your employees to upload files to S3 buckets for archival. One of the employees is trying to upload a 6 GB file to S3 but keeps getting the following AWS error message: “Your proposed upload exceeds the maximum allowed object size.” What can be the possible reason?

  1. Your intranet firewall is not allowing upload of that object size.
  2. Your browser is not allowing upload of that object size.
  3. Maximum size of object that can be uploaded to S3 in single PUT operation is 5 GB.
  4. The S3 bucket cannot store object of that size.
A
  3. Maximum size of object that can be uploaded to S3 in a single PUT operation is 5 GB.
26
Q

In general, at what object size does AWS recommend using multipart uploads instead of uploading the object in a single operation?

  1. 5 MB
  2. 50 MB
  3. 100 MB
  4. 5 GB
A
  3. 100 MB
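A minimal boto3 sketch of this guidance (bucket and file names are hypothetical): upload_file switches to multipart automatically once the object crosses multipart_threshold, uploading parts in parallel:

```python
import boto3
from boto3.s3.transfer import TransferConfig

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # go multipart at ~100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
    max_concurrency=8,                      # parts uploaded in parallel
)
s3 = boto3.client("s3")
s3.upload_file("backup.tar", "my-archive-bucket", "backup.tar", Config=config)
```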
27
Q

What are the reasons to use S3 Transfer Acceleration? Choose 2.

  1. Applications that upload to a centralized bucket from all over the world.
  2. Transfer gigabytes to terabytes of data on a regular basis across continents.
  3. To improve application performance
  4. To improve snapshot copy of EC2 EBS volume.
A
  1. Applications that upload to a centralized bucket from all over the world.
  2. Transfer gigabytes to terabytes of data on a regular basis across continents.
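A hedged boto3 sketch (hypothetical bucket name): enable Transfer Acceleration on the bucket once, then point clients at the accelerate endpoint:

```python
import boto3
from botocore.config import Config

boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="central-upload-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads now enter AWS at the nearest CloudFront edge location.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file("video.mp4", "central-upload-bucket", "uploads/video.mp4")
```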
28
Q

Amazon EBS provides which type of storage?

  1. Block based Storage
  2. Object based Storage
  3. Magnetic Storage
  4. File Storage
A
  1. Block based Storage
29
Q

Your company is planning to store their important documents in S3 storage. The compliance unit wants to be notified when documents are created or deleted, along with the user name. You know that S3 has the feature of event notifications for object events like s3:ObjectCreated:* and s3:ObjectRemoved:*. What are the destinations where S3 can publish events? Choose 3.

  1. Amazon SES
  2. Amazon Simple Notification Service (Amazon SNS) topic
  3. Amazon Simple Queue Service (Amazon SQS) queue
  4. AWS Lambda
A
  2. Amazon Simple Notification Service (Amazon SNS) topic
  3. Amazon Simple Queue Service (Amazon SQS) queue
  4. AWS Lambda
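A hedged boto3 sketch of such a configuration; the bucket name and SNS topic ARN are placeholders, and SQS or Lambda destinations are wired up the same way via QueueConfigurations or LambdaFunctionConfigurations:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="compliance-documents",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:doc-events",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
        }],
    },
)
```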
30
Q

You want to host your own cloud blog website with the custom domain “www.mycloudblog.com” as a static website using S3. What are the essential prerequisite steps? Choose 4.

  1. Register the custom domain name in Route 53. Create the alias records that you add to the hosted zone for your domain name.
  2. Configure the bucket’s property for static website hosting with an index, error file and redirection rule.
  3. The bucket names must match the names of the website that you are hosting.
  4. Enable HTTP on the bucket
  5. Ensure that bucket and its objects must have public read access
A
  1. Register the custom domain name in Route 53. Create the alias records that you add to the hosted zone for your domain name.
  2. Configure the bucket’s property for static website hosting with an index, error file and redirection rule.
  3. The bucket names must match the names of the website that you are hosting.
  5. Ensure that the bucket and its objects have public read access.
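A hedged boto3 sketch of steps 2 and 5 (the website configuration and public-read access); the bucket name follows the must-match-the-host-name rule and the policy JSON is illustrative:

```python
import json

import boto3

s3 = boto3.client("s3")
bucket = "www.mycloudblog.com"  # must match the website host name

s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)
```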
31
Q

What are the benefits of using versioning in S3? Choose 2.

  1. To restrict access to bucket.
  2. To preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket.
  3. To encrypt objects stored in the bucket.
  4. To recover from both unintended user actions and application failures.
A
  2. To preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket.
  4. To recover from both unintended user actions and application failures.
32
Q

What is the version ID of objects stored before versioning is enabled on the bucket?

  1. 111111
  2. 222222
  3. 999999
  4. Null
A
  4. Null
33
Q

How do versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite? Choose 2.

  1. If you delete or overwrite an object AWS keeps a copy in the archive folder.
  2. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
  3. If you overwrite an object, it results in a new object version in the bucket.
  4. If you delete or overwrite an object AWS emails you a copy of the previous version.
A
  2. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
  3. If you overwrite an object, it results in a new object version in the bucket.
34
Q

Choose the statements that are true. Choose 3.

  1. Buckets can be in one of three states: unversioned (the default), versioning-enabled, or versioning-suspended.
  2. Buckets can be in one of two states: unversioned (the default) or versioning-enabled.
  3. Once you version-enable a bucket, it can never return to an unversioned state.
  4. Once you version-enable a bucket, it can return to an unversioned state.
  5. Once you version-enable a bucket, you can only suspend versioning on that bucket.
A
  1. Buckets can be in one of three states: unversioned (the default), versioning-enabled, or versioning-suspended.
  3. Once you version-enable a bucket, it can never return to an unversioned state.
  5. Once you version-enable a bucket, you can only suspend versioning on that bucket.
35
Q

Your company stores customer contract documents in S3. One of the account managers deleted the signed contracts of his accounts. As a result, you have been asked to configure S3 storage in such a way that files are protected against inadvertent or intentional deletion. How will you configure S3? Choose 2.

  1. Enable Versioning on the bucket.
  2. Write a lambda program which copies the file in another backup bucket.
  3. Enable MFA delete on the bucket.
  4. Use lifecycle policy which copies the data after POST/UPDATE into another bucket.
  5. Use cross region replication which copies the data after POST/UPDATE into another bucket.
A
  1. Enable Versioning on the bucket.
  3. Enable MFA delete on the bucket.
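A hedged boto3 sketch of both settings; note that MFA Delete can only be enabled by the root account through the API/CLI, and the MFA device serial and token below are placeholders:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="customer-contracts",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # Format: "<mfa-device-arn> <current-code>", supplied by the root account
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```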
36
Q

What is S3 cross region replication?

  1. Enables automatic, synchronous copying of objects across buckets in same AWS Regions
  2. Enables automatic, synchronous copying of objects across buckets in different AWS Regions
  3. Enables automatic, asynchronous copying of objects across buckets in different AWS Regions
  4. Enables automatic, asynchronous copying of objects across buckets in same AWS Regions
A
  3. Enables automatic, asynchronous copying of objects across buckets in different AWS Regions
37
Q

What are the reasons to enable cross region replication on your S3 buckets?

  1. Comply with compliance requirements
  2. Minimize latency
  3. Increase operational efficiency
  4. Maintain object copies under different ownership
  5. All of the above
A
  5. All of the above
38
Q

What are the prerequisites to enable Cross-Region Replication? Choose 4.

  1. The source and destination bucket owner must have their respective source and destination AWS Regions enabled for their account.
  2. Both source and destination buckets must be in different region having their versioning enabled.
  3. Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket.
  4. If the owner of the source bucket doesn’t own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL).
  5. Both source and destination buckets must be in same region having their versioning enabled
  6. Amazon S3 needs to have only one permission to read objects in the source bucket.
A
  1. The source and destination bucket owner must have their respective source and destination AWS Regions enabled for their account.
  2. Both source and destination buckets must be in different Regions and have versioning enabled.
  3. Amazon S3 must have permissions to replicate objects from the source bucket to the destination bucket.
  4. If the owner of the source bucket doesn’t own the object in the bucket, the object owner must grant the bucket owner READ and READ_ACP permissions with the object access control list (ACL).
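A hedged boto3 sketch of a replication rule once those prerequisites are met; the IAM role and bucket ARNs are placeholders, and both buckets must already have versioning enabled:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="myuseastbucket",
    ReplicationConfiguration={
        # Role granting S3 permission to read the source and write the destination
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",  # empty prefix = replicate every new object
            "Destination": {"Bucket": "arn:aws:s3:::myeuwestbucket"},
        }],
    },
)
```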
39
Q

What is S3 Object expiration?

  1. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it synchronously
  2. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously
  3. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and moves it to DEEP_ARCHIVE
  4. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and moves it to GLACIER
A
  2. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously
40
Q

What isn’t replicated by default when you enable Cross Region Replication on your S3 bucket? Choose 3.

  1. Objects with file type .doc, .pdf, png
  2. Objects that existed before you added the replication configuration to the bucket.
  3. Objects in the source bucket that the bucket owner doesn’t have permissions for.
  4. Objects created with server-side encryption using customer-provided (SSE-C) encryption keys.
  5. Objects encrypted using Amazon S3 managed keys (SSE-S3)
A
  2. Objects that existed before you added the replication configuration to the bucket.
  3. Objects in the source bucket that the bucket owner doesn’t have permissions for.
  4. Objects created with server-side encryption using customer-provided (SSE-C) encryption keys.
41
Q

Suppose that you are a solution architect of a global company having regional headquarters in US East, Ireland and Sydney. You have configured cross-region replication where bucket ‘myuseastbucket’ in the ‘us-east-1’ US East (N. Virginia) Region is the source and bucket ‘myeuwestbucket’ in ‘eu-west-1’ EU (Ireland) is the destination. Now you add another cross-region replication configuration where bucket ‘myeuwestbucket’ is the source and bucket ‘mysoutheastbucket’ in Asia Pacific (Sydney) ‘ap-southeast-2’ is the destination. You notice that a file created in ‘myuseastbucket’ gets replicated to ‘myeuwestbucket’ but not to ‘mysoutheastbucket’. What is the possible reason?

  1. You have not configured cross region replication for ‘myuseastbucket’ to mysoutheastbucket’
  2. Daisy Chain replication is not supported by S3.
  3. You have not called S3 support and get cross region replication to more than two destination.
  4. You have not given S3 permission to replicated objects to ‘mysoutheastbucket’
A
  2. Daisy chain replication is not supported by S3. (Objects that arrive in a bucket as replicas are not replicated onward to a further destination.)
42
Q

What are the actions that can be configured in the S3 object lifecycle? Choose 2.

  1. Define when objects transition to another storage class.
  2. Define when objects expire.
  3. Define when object versioning is to be started.
  4. Define when object cross region replication is to be started.
A
  1. Define when objects transition to another storage class.
  2. Define when objects expire.
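A hedged boto3 sketch combining both actions (bucket name and prefix are hypothetical): transition ageing objects to cheaper classes, then expire them:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="periodic-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},  # queued for asynchronous removal
        }],
    },
)
```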
43
Q

Which statements on Amazon S3 pricing are true? Choose 3.

  1. If you create a lifecycle expiration rule that causes objects that have been in INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage for less than 30 days to expire, you are charged for 30 days
  2. You are always only charged for number of days objects are in the INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE
  3. If you create a lifecycle expiration rule that causes objects that have been in GLACIER storage for less than 90 days to expire, you are charged for 90 days.
  4. If you create a lifecycle expiration rule that causes objects that have been in DEEP_ARCHIVE storage for less than 180 days to expire, you are charged for 180 days.
A
  1. If you create a lifecycle expiration rule that causes objects that have been in INTELLIGENT_TIERING, STANDARD_IA, or ONEZONE_IA storage for less than 30 days to expire, you are charged for 30 days
  3. If you create a lifecycle expiration rule that causes objects that have been in GLACIER storage for less than 90 days to expire, you are charged for 90 days.
  4. If you create a lifecycle expiration rule that causes objects that have been in DEEP_ARCHIVE storage for less than 180 days to expire, you are charged for 180 days.
44
Q

Which of the following lifecycle transitions between storage classes are supported? Choose 2.

  1. You can only transition from STANDARD to STANDARD_IA or ONEZONE_IA
  2. You can transition from the STANDARD storage class to any other storage class.
  3. You can only transition from STANDARD to the GLACIER or DEEP_ARCHIVE storage classes.
  4. You can transition from any storage class to the GLACIER or DEEP_ARCHIVE storage classes.
A
  2. You can transition from the STANDARD storage class to any other storage class.
  4. You can transition from any storage class to the GLACIER or DEEP_ARCHIVE storage classes.
45
Q

Which lifecycle transitions between storage classes are supported? Choose 3.

  1. You can transition from the STANDARD_IA storage class to the INTELLIGENT_TIERING or ONEZONE_IA storage classes.
  2. You can transition from any storage class to the STANDARD storage class.
  3. You can transition from the INTELLIGENT_TIERING storage class to the ONEZONE_IA storage class.
  4. You can transition from the DEEP_ARCHIVE storage class to any other storage class.
  5. You can transition from the GLACIER storage class to the DEEP_ARCHIVE storage class.
A
  1. You can transition from the STANDARD_IA storage class to the INTELLIGENT_TIERING or ONEZONE_IA storage classes.
  3. You can transition from the INTELLIGENT_TIERING storage class to the ONEZONE_IA storage class.
  5. You can transition from the GLACIER storage class to the DEEP_ARCHIVE storage class.
46
Q

Which lifecycle transitions between storage classes are not supported? Choose 4.

  1. Transition from any storage class to the STANDARD storage class.
  2. Transition from the STANDARD storage class to any other storage class.
  3. Transition from the INTELLIGENT_TIERING storage class to the STANDARD_IA storage class.
  4. Transition from the ONEZONE_IA storage class to the STANDARD_IA or INTELLIGENT_TIERING storage classes.
  5. Transition from the DEEP_ARCHIVE storage class to any other storage class.
A
  1. Transition from any storage class to the STANDARD storage class.
  3. Transition from the INTELLIGENT_TIERING storage class to the STANDARD_IA storage class.
  4. Transition from the ONEZONE_IA storage class to the STANDARD_IA or INTELLIGENT_TIERING storage classes.
  5. Transition from the DEEP_ARCHIVE storage class to any other storage class.
47
Q

A manufacturing company has been using on-premises servers for storage. They have nearly used up their installed storage capacity but don’t want to spend on adding new on-premises capacity. They want to leverage AWS but don’t want to migrate their entire current on-premises data to the cloud. Which AWS service can they use to achieve their requirement?

  1. Amazon S3
  2. Amazon EBS
  3. Amazon Storage Gateway
  4. Amazon RDS
A
  3. Amazon Storage Gateway
48
Q

Which option of AWS Storage Gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers?

  1. File Gateway
  2. Volume Gateway
  3. Tape Gateway
  4. iSCSI Gateway
A
  2. Volume Gateway
49
Q

You are creating a bucket with the name ‘mybucket’ but you get an error message that the bucket name already exists. You don’t have a bucket with the same name, nor did you create a bucket with a similar name earlier. What is the reason you are getting this error?

  1. S3 doesn’t allow you to create a bucket with name ‘mybucket’, it is reserved.
  2. You cannot have substring ‘bucket’ in you bucket name.
  3. Bucket names must be unique across all existing bucket names in Amazon S3.
  4. ‘mybucket’ is not a DNS-compliant bucket name
A
  3. Bucket names must be unique across all existing bucket names in Amazon S3.
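A minimal boto3 sketch of how that global-uniqueness failure surfaces to a client:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-1")
try:
    s3.create_bucket(Bucket="mybucket")
except ClientError as err:
    # Another AWS account already owns this name somewhere in the world.
    if err.response["Error"]["Code"] == "BucketAlreadyExists":
        print("Pick a globally unique bucket name.")
```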
50
Q

Which option of AWS Storage Gateway provides the ability to store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB)?

  1. File Gateway
  2. Volume Gateway
  3. Tape Gateway
  4. iSCSI Gateway
A
  1. File Gateway
51
Q

Which of the following statements are correct about Volume Gateway? Choose 2.

  1. Cached volumes store your data in Amazon S3 and retains a copy of frequently accessed data subsets locally.
  2. Stored volumes provides low-latency access to your entire dataset by storing all your data locally.
  3. Stored volumes store your data in Amazon S3 and retains a copy of frequently accessed data subsets locally.
  4. Cached volumes provides low-latency access to your entire dataset by storing all your data locally.
A
  1. Cached volumes store your data in Amazon S3 and retain a copy of frequently accessed data subsets locally.
  2. Stored volumes provide low-latency access to your entire dataset by storing all your data locally.
52
Q

What are the advantages provided by multipart upload? Choose 4.

  1. Ability to upload parts in parallel to improve throughput.
  2. Ability to begin an upload before knowing the object size.
  3. Ability to pause and resume the upload.
  4. Quick recovery from network issues.
  5. Ability to upload 10 MB to 5 GB, last part can be < 10 MB.
A
  1. Ability to upload parts in parallel to improve throughput.
  2. Ability to begin an upload before knowing the object size.
  3. Ability to pause and resume the upload.
  4. Quick recovery from network issues.
53
Q

What are the hosting options for AWS Storage Gateway? Choose 3.

  1. On-premises as a VM appliance
  2. Hardware appliance
  3. In AWS Elastic Beanstalk
  4. In AWS as an Amazon EC2 instance
A
  1. On-premises as a VM appliance
  2. Hardware appliance
  4. In AWS as an Amazon EC2 instance
54
Q

Which of the following options are correct for File Storage Gateway? Choose 3.

  1. File gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols.
  2. With file gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares to your existing file-based applications or devices.
  3. With file gateway, your configured S3 buckets will be available as iSCSI shares to your existing file-based applications or devices.
  4. The gateway translates these file operations into object requests on your S3 buckets. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway.
A
  1. File gateway presents a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols.
  2. With file gateway, your configured S3 buckets will be available as Network File System (NFS) mount points or Server Message Block (SMB) file shares to your existing file-based applications or devices.
  4. The gateway translates these file operations into object requests on your S3 buckets. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway.
55
Q

Which S3 storage class is suitable for performance-sensitive use cases (those that require millisecond access time) and frequently accessed data?

  1. INTELLIGENT_TIERING
  2. STANDARD
  3. STANDARD-IA
  4. ONEZONE_IA
A
  2. STANDARD
56
Q

Your company is adopting AWS and wants to minimize its on-premises storage footprint, but needs to retain on-premises access to storage for existing apps. You would like to leverage AWS services as a way to replace on-premises storage with cloud-backed storage that allows existing applications to operate without changes, while still getting the benefits of storing and processing this data in AWS. Which AWS service will be appropriate?

  1. Amazon S3
  2. Amazon RDS
  3. Amazon EBS
  4. Amazon Storage Gateway
A
  4. Amazon Storage Gateway
57
Q

Which of the following are applicable use cases for AWS Storage Gateway? Choose 3.

  1. Increase performance and reduce latency of on premise storage.
  2. Moving on-premises backups to AWS.
  3. Replace on-premises storage with cloud-backed storage, while allowing their existing applications to operate without changes, while still getting the benefits of storing and processing this data in AWS.
  4. Run apps in AWS and make the results available from multiple on-premises locations such as data centers or branch and remote offices. Also, customers that have moved their on-prem archives to AWS often want to make this data available for access from existing on-premises applications.
A
  2. Moving on-premises backups to AWS.
  3. Replace on-premises storage with cloud-backed storage, while allowing their existing applications to operate without changes, while still getting the benefits of storing and processing this data in AWS.
  4. Run apps in AWS and make the results available from multiple on-premises locations such as data centers or branch and remote offices. Also, customers that have moved their on-prem archives to AWS often want to make this data available for access from existing on-premises applications.
58
Q

Which of the following server side encryption methods are supported in S3?

  1. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
  2. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
  3. Server-Side Encryption with Customer-Provided Keys (SSE-C)
  4. All of the above
A
  4. All of the above
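A minimal boto3 sketch of all three server-side encryption choices on a PUT (bucket, keys and KMS alias are hypothetical):

```python
import os

import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon-managed keys (AES-256)
s3.put_object(Bucket="secure-bucket", Key="a.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: a KMS-managed key
s3.put_object(Bucket="secure-bucket", Key="b.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")

# SSE-C: a key you supply (and must supply again on every GET)
customer_key = os.urandom(32)
s3.put_object(Bucket="secure-bucket", Key="c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key)
```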
59
Q

What is a Tape Gateway? Choose 3.

  1. Cloud based Virtual Tape Library.
  2. Cloud based File and Object Library.
  3. Provides virtual tape library (VTL) interface for existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway.
  4. After you deploy and activate a tape gateway, you mount the virtual tape drives and media changer on your on-premises application servers as iSCSI devices.
  5. After you deploy and activate a tape gateway, you mount the virtual tape drives and media changer on your on-premises application servers as File share.
A
  1. Cloud based Virtual Tape Library.
  3. Provides virtual tape library (VTL) interface for existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway.
  4. After you deploy and activate a tape gateway, you mount the virtual tape drives and media changer on your on-premises application servers as iSCSI devices.
60
Q

Which of the following two statements are correct about the appropriate use of STANDARD_IA and ONEZONE_IA?

  1. ONEZONE_IA —Use for your primary or only copy of data that can’t be recreated.
  2. STANDARD_IA—Use for your primary or only copy of data that can’t be recreated.
  3. ONEZONE_IA—Use if you can recreate the data if the Availability Zone fails, and for object replicas when setting cross-region replication (CRR).
  4. STANDARD_IA —Use if you can recreate the data if the Availability Zone fails, and for object replicas when setting cross-region replication (CRR).
A
  2. STANDARD_IA—Use for your primary or only copy of data that can’t be recreated.
  3. ONEZONE_IA—Use if you can recreate the data if the Availability Zone fails, and for object replicas when setting cross-region replication (CRR).
61
Q

If you encrypt a bucket on S3, what type of encryption does AWS use?

  1. 1028-bit Advanced Encryption Standard (AES-1028)
  2. 256-bit Advanced Encryption Standard (AES-256)
  3. 128-bit Advanced Encryption Standard (AES-128)
  4. 192-bit Advanced Encryption Standard (AES-192)
A
  2. 256-bit Advanced Encryption Standard (AES-256)
62
Q

Your company is exploring AWS Storage Gateway for extending their on-premises storage. One key criterion is to have AWS as the primary storage while still having fast, low-latency access to frequently accessed data. Which Storage Gateway options meet this criterion? Choose 2.

  1. Tape Gateway
  2. File Gateway
  3. Volume Stored Gateway
  4. Volume Cached Gateway
A
  2. File Gateway
  4. Volume Cached Gateway
63
Q

How can you protect data in transit to S3?

  1. Using an AWS KMS–Managed Customer Master Key (CMK)
  2. Using a Client-Side Master Key
  3. Using SSL between client and S3
  4. All of the above
A
  4. All of the above

(SSL protects the connection itself, while client-side encryption, with either a KMS-managed CMK or a client-side master key, ensures the data is already encrypted before it travels.)
64
Q

You have an object saved in S3 with attribute “color=yellow”. Two applications, ‘Client 1’ and ‘Client 2’, update the value of the attribute to ‘Red’ (write W1) and ‘Ruby’ (write W2) one after another. Client 1 then does a read operation ‘R1’ after write W2 from Client 2, and Client 2 does a read operation after R1. What can the value of color be for read R1? Choose 3.

  1. For R1 the value of Color = Red
  2. For R1 the value of Color = Ruby
  3. For R1 the value of Color = Yellow
  4. For R1 the value of Color = Null
A
  1. For R1 the value of Color = Red
  2. For R1 the value of Color = Ruby
  3. For R1 the value of Color = Yellow

(With eventual consistency there is no ordering guarantee, so R1 may return the original value or either written value.)
65
Q

How can you protect data at rest in S3? Choose 2.

  1. Using server side encryption
  2. Using client side encryption
  3. Using SSL between client and S3
A
  1. Using server side encryption
  2. Using client side encryption
66
Q

What is the Amazon S3 Block Public Access feature? Choose 3.

  1. With S3 Block Public Access, account administrators and bucket owners can easily set up centralized controls to limit public access to their Amazon S3 resources that are enforced regardless of how the resources are created.
  2. You can enable block public access settings only for access points, buckets, and AWS accounts.
  3. When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates the setting.
  4. You can enable block public access settings only for objects, buckets, and AWS accounts.
A
  1. With S3 Block Public Access, account administrators and bucket owners can easily set up centralized controls to limit public access to their Amazon S3 resources that are enforced regardless of how the resources are created.
  2. You can enable block public access settings only for access points, buckets, and AWS accounts.
  3. When Amazon S3 evaluates whether an operation is prohibited by a block public access setting, it rejects any request that violates the setting.
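A hedged boto3 sketch (hypothetical bucket) applying all four Block Public Access settings at the bucket level; the same shape works account-wide through the s3control API:

```python
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="private-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralize existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # limit cross-account access
    },
)
```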
67
Q

You have an S3 bucket named Photos with versioning enabled. You do the following steps:

  • PUT a new object photo.gif which gets version ID = 111111
  • PUT a new version of photo.gif.
  • DELETE photo.gif

Which of the following two statements are correct?

  1. After Step 2, Amazon S3 generates a new version ID (121212) and adds the newer version to the bucket, retaining the older version with ID=111111. There are two versions of photo.gif.
  2. After Step 2, Amazon S3 overwrites the older version with ID=111111 and grants it a new ID. There is only one version of photo.gif.
  3. After Step 3, when you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker.
  4. After Step 3, when you DELETE an object, all versions are deleted from the bucket.
A
  1. After Step 2, Amazon S3 generates a new version ID (121212) and adds the newer version to the bucket, retaining the older version with ID=111111. There are two versions of photo.gif.
  3. After Step 3, when you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete marker.
68
Q

As a solution architect you want to ensure that Amazon Simple Storage Service (Amazon S3) buckets and objects are secure. The resources that need to be private must remain private. What are the ways to limit permissions to Amazon S3 resources? Choose 4.

  1. Writing AWS Identity and Access Management (IAM) user policies that specify the users that can access specific buckets and objects.
  2. Writing bucket policies that define access to specific buckets and objects.
  3. Using Client side encryption
  4. Using Amazon S3 Block Public Access as a centralized way to limit public access.
  5. Using server side encryption
  6. Setting access control lists (ACLs) on your buckets and objects.
A
  1. Writing AWS Identity and Access Management (IAM) user policies that specify the users that can access specific buckets and objects.
  2. Writing bucket policies that define access to specific buckets and objects.
  4. Using Amazon S3 Block Public Access as a centralized way to limit public access.
  6. Setting access control lists (ACLs) on your buckets and objects.
69
Q

When should you use an ACL-based Access Policy (Bucket and Object ACLs)? Choose 4.

  1. When an object ACL is the only way to manage access to objects not owned by the bucket owner.
  2. When permissions vary by object and you need to manage permissions at the object level.
  3. When you want to define permission at object level.
  4. To grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket.
  5. When the AWS account that owns the object also owns the bucket and you need to manage object permissions.
A
  1. When an object ACL is the only way to manage access to objects not owned by the bucket owner.
  2. When permissions vary by object and you need to manage permissions at the object level.
  3. When you want to define permission at object level.
  4. To grant write permission to the Amazon S3 Log Delivery group to write access log objects to your bucket.
70
Q

You have an S3 bucket named Photos with versioning enabled. You do the following steps:

  • PUT a new object photo.gif which gets version ID = 111111
  • PUT a new version of photo.gif get version ID=222222
  • DELETE photo.gif. Delete marker with version ID = 456789
  • GET object

Which of the following two statements are correct?

  1. GET object will return object with version ID = 111111
  2. GET object will return object with version ID = 222222
  3. GET Object returns a 404 not found error.
  4. GET object will return delete marker object with version ID = 456789
A
  1. GET object will return object with version ID = 111111
  2. GET object will return object with version ID = 222222
  3. GET Object returns a 404 not found error.
  4. GET object will return delete marker object with version ID = 456789
71
Q

By default, all S3 buckets are public and can be accessed only by users that are explicitly granted access.

  1. True
  2. False
A
  1. True
  2. False
72
Q

You have an S3 bucket named Photos with versioning enabled. You perform the following steps:

  • PUT a new object photo.gif which gets version ID = 111111
  • PUT a new version of photo.gif which gets version ID = 222222
  • DELETE photo.gif. A delete marker is created with version ID = 456789

Which of the following two statements are correct?

  1. You can GET a specific object version.
  2. You can permanently delete a specific object version by specifying the version you want to delete.
  3. You can permanently delete only the latest version of the object, version ID = 222222.
  4. You can GET only the latest version of the object, version ID = 222222.
A
  1. You can GET a specific object version.
  2. You can permanently delete a specific object version by specifying the version you want to delete.
  3. You can permanently delete only the latest version of the object, version ID = 222222.
  4. You can GET only the latest version of the object, version ID = 222222.
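A boto3 sketch of these version-specific operations, reusing the version IDs from the question (the lowercase bucket name is an assumption, since S3 bucket names must be lowercase):

  import boto3

  s3 = boto3.client("s3")

  # GET a specific (non-current) version, even after the delete marker exists
  obj = s3.get_object(Bucket="photos", Key="photo.gif", VersionId="111111")

  # Permanently delete a specific version by supplying its version ID
  s3.delete_object(Bucket="photos", Key="photo.gif", VersionId="111111")

  # Deleting the delete marker itself makes the object reappear in the bucket
  s3.delete_object(Bucket="photos", Key="photo.gif", VersionId="456789")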
73
Q

In which of the following use cases would you not use S3 lifecycle configurations?

  1. If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them.
  2. Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them, but your organization or regulations might require you to archive them for a specific period. After that, you can delete them.
  3. Daily upload of data from regional offices to a central bucket for ETL processing.
  4. You might upload some types of data to Amazon S3 primarily for archival purposes, long-term database backups, and data that must be retained for regulatory compliance.
A
  1. If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them.
  2. Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them, but your organization or regulations might require you to archive them for a specific period. After that, you can delete them.
  3. Daily upload of data from regional offices to a central bucket for ETL processing.
  4. You might upload some types of data to Amazon S3 primarily for archival purposes, long-term database backups, and data that must be retained for regulatory compliance.
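For the log-expiration use case in option 1, a minimal boto3 lifecycle sketch (the bucket name, rule ID, and prefix are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Expire objects under the logs/ prefix 30 days after creation
  s3.put_bucket_lifecycle_configuration(
      Bucket="my-log-bucket",
      LifecycleConfiguration={
          "Rules": [{
              "ID": "expire-old-logs",
              "Status": "Enabled",
              "Filter": {"Prefix": "logs/"},
              "Expiration": {"Days": 30},
          }],
      },
  )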
74
Q

When should you choose IAM policies for S3 permissions? Choose 3.

  1. You prefer to keep access control policies in the IAM environment, not only for S3 but also for other AWS resources
  2. If you’re more interested in “What can this user do in AWS?”
  3. If you’re more interested in “Who can access this S3 bucket?”
  4. You have numerous S3 buckets each with different permissions requirements.
A
  1. You prefer to keep access control policies in the IAM environment, not only for S3 but also for other AWS resources
  2. If you’re more interested in “What can this user do in AWS?”
  3. If you’re more interested in “Who can access this S3 bucket?”
  4. You have numerous S3 buckets each with different permissions requirements.
75
Q

In order to determine whether the requester has permission to perform the specific operation, put the following steps in the order in which Amazon S3 performs them when it receives a request.

  1. Converts all the relevant access policies (user policy, bucket policy, ACLs) at run time into a set of policies for evaluation.
  2. Object context – If the request is for an object, Amazon S3 evaluates the subset of policies owned by the object owner.
  3. User context – Amazon S3 evaluates a subset of policies owned by the parent account.
  4. Bucket context – In the bucket context, Amazon S3 evaluates policies owned by the AWS account that owns the bucket.
  5. 1,2,3,4
  6. 2,3,4,1
  7. 3,4,1,2
  8. 1,3,4,2
A
  1. 1,2,3,4
  2. 2,3,4,1
  3. 3,4,1,2
  4. 1,3,4,2
76
Q

How does S3 evaluate a request for a bucket operation requested by an IAM principal whose parent AWS account is also the bucket owner? Principal: Jill

Jill’s Parent Account: 1111-1111-1111

In the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if the principal has permission to perform the operation.

  1. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
  2. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and last, in the object context, evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
  3. Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
A
  1. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
  2. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and last, in the object context, evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
  3. Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
77
Q

When should you choose Bucket policy for S3 permissions? Choose 4.

  1. If you’re more interested in “What can this user do in AWS?”
  2. If you’re more interested in “Who can access this S3 bucket?”
  3. You want a simple way to grant cross-account access to your S3 environment, without using IAM roles.
  4. Your IAM policies are reaching the size limit (up to 2 KB for users, 5 KB for groups, and 10 KB for roles). S3 supports bucket policies of up to 20 KB.
  5. You prefer to keep access control policies in the S3 environment.
A
  1. If you’re more interested in “What can this user do in AWS?”
  2. If you’re more interested in “Who can access this S3 bucket?”
  3. You want a simple way to grant cross-account access to your S3 environment, without using IAM roles.
  4. Your IAM policies are reaching the size limit (up to 2 KB for users, 5 KB for groups, and 10 KB for roles). S3 supports bucket policies of up to 20 KB.
  5. You prefer to keep access control policies in the S3 environment.
78
Q

How does S3 evaluate a request for a bucket operation requested by an IAM principal whose parent AWS account is not the bucket owner?

  • Principal: Jill
  • Jill’s Parent Account: 1111-1111-1111
  • Bucket Owner: 2222-2222-2222
  1. In the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if the principal has permission to perform the operation.
  2. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
  3. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and last, in the object context, evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
  4. Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
A
  1. In the user context, Amazon S3 evaluates all policies that belong to the parent AWS account to determine if the principal has permission to perform the operation.
  2. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation.
  3. Amazon S3 evaluates the user context by reviewing the policies authored by the account to verify that the principal has the necessary permissions, then it evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and last, in the object context, evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
  4. Amazon S3 evaluates the bucket context to verify that the bucket owner has granted Jill (or her parent AWS account) permission to perform the requested operation, and in the object context evaluates the object ACL to determine if Jill has permission to access the objects in the bucket.
79
Q

Which of the following statements are correct about S3 ACLs? Choose 4.

  1. Resource-based access policy options that you can use to manage access to your buckets and objects.
  2. You can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account.
  3. A grantee can be an AWS account or IAM user.
  4. You cannot grant conditional permissions, nor can you explicitly deny permissions.
  5. A grantee can be an AWS account or one of the predefined Amazon S3 groups.
A
  1. Resource-based access policy options that you can use to manage access to your buckets and objects.
  2. You can grant permissions only to other AWS accounts; you cannot grant permissions to users in your account.
  3. A grantee can be an AWS account or IAM user.
  4. You cannot grant conditional permissions, nor can you explicitly deny permissions.
  5. A grantee can be an AWS account or one of the predefined Amazon S3 groups.
80
Q

A building architecture company stores all its project architecture documents in S3. As an added security measure, they want to allow access to S3 only from their corporate network IP addresses. How can this be achieved?

  1. Create bucket policies with Effect=Allow and the condition block element IpAddress with values for the corporate network IP addresses.
  2. Create an IAM policy with Effect=Deny and the condition block element NotIpAddress with values for the corporate network IP addresses.
  3. Create an IAM policy with Effect=Allow and the condition block element IpAddress with values for the corporate network IP addresses.
  4. All of the above
A
  1. Create bucket policies with Effect=Allow and the condition block element IpAddress with values for the corporate network IP addresses.
  2. Create an IAM policy with Effect=Deny and the condition block element NotIpAddress with values for the corporate network IP addresses.
  3. Create an IAM policy with Effect=Allow and the condition block element IpAddress with values for the corporate network IP addresses.
  4. All of the above
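One common pattern from the AWS documentation is a bucket policy that denies all S3 actions when the request does not originate from the corporate range; an Allow statement with an IpAddress condition works similarly. A boto3 sketch, with a hypothetical bucket name and an example CIDR:

  import json
  import boto3

  s3 = boto3.client("s3")

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "DenyOutsideCorporateNetwork",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": [
              "arn:aws:s3:::project-docs",
              "arn:aws:s3:::project-docs/*",
          ],
          # Deny any request whose source IP is outside the corporate CIDR
          "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
      }],
  }
  s3.put_bucket_policy(Bucket="project-docs", Policy=json.dumps(policy))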
81
Q

A pharmaceutical company has an on-premises analytics application which has 100 GB of data. They don’t want to invest in extending the on-premises storage but want to leverage AWS cloud storage without making considerable changes to the analytics application. They also want low-latency access to the last one month of data, which is more frequently used and should be stored on-premises. Which storage option will you use?

  1. Amazon RDS
  2. Amazon Volume Storage Gateway Cached
  3. Amazon EBS
  4. Amazon S3
  5. Amazon Volume Storage Gateway Stored
A
  1. Amazon RDS
  2. Amazon Volume Storage Gateway Cached
  3. Amazon EBS
  4. Amazon S3
  5. Amazon Volume Storage Gateway Stored
82
Q

You are using an S3 bucket for backup of on-premises data. You have created a lifecycle policy to transition data from the Standard storage class to Standard-IA 3 days after it is created in the S3 bucket. If you uploaded a file to the backup S3 folder on 1/15/2020 10:30 AM UTC, when will S3 transition it to the Standard-IA storage class?

  1. 1/18/2020 10:30 AM UTC
  2. 1/18/2020 10:30 PM UTC
  3. 1/19/2020 00:00 UTC
  4. 1/18/2020 00:00 UTC
A
  1. 1/18/2020 10:30 AM UTC
  2. 1/18/2020 10:30 PM UTC
  3. 1/19/2020 00:00 UTC
  4. 1/18/2020 00:00 UTC
83
Q

You are a solution architect with your own website on wildlife videography. You have uploaded videos from your recent visit to Brazil’s Amazon forest to the website. In the backend you store these videos in an S3 folder which is not publicly accessible. You want to ensure that these videos can be downloaded only by registered users of your website. How can you do this?

  1. Make the S3 folder publicly accessible
  2. Attach a bucket policy to the folder so that it is accessible by the registered users
  3. Generate a pre-signed URL to grant time-limited permission to download the video file
  4. Create IAM users for the users registered on the website and give them access to the S3 bucket
A
  1. Make the S3 folder publicly accessible
  2. Attach a bucket policy to the folder so that it is accessible by the registered users
  3. Generate a pre-signed URL to grant time-limited permission to download the video file
  4. Create IAM users for the users registered on the website and give them access to the S3 bucket
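A minimal boto3 sketch of option 3; the web application would generate such a URL for a registered user after they authenticate (the bucket and key names here are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Time-limited download link; the signature is derived from the caller's credentials
  url = s3.generate_presigned_url(
      "get_object",
      Params={"Bucket": "wildlife-videos", "Key": "brazil/amazon-trip.mp4"},
      ExpiresIn=3600,  # link expires after one hour
  )
  print(url)  # hand this URL only to authenticated, registered users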
84
Q

Your company has decided to start their journey to the cloud by moving secondary workloads, such as backups and archives. They want to migrate backup data currently stored on physical tapes on-premises, without changing their current backup workflows or backup applications. As a cloud migration consultant, what strategy will you adopt?

  1. Use AWS Tape Gateway.
  2. Use third-party software to convert the data on tape to block storage for storing in on-premises EFS.
  3. Use third-party software to convert the data on tape to object storage for uploading to S3.
  4. Use AWS File Gateway.
  5. Use AWS Volume Cached Gateway
A
  1. Use AWS Tape Gateway.
  2. Use third-party software to convert the data on tape to block storage for storing in on-premises EFS.
  3. Use third-party software to convert the data on tape to object storage for uploading to S3.
  4. Use AWS File Gateway.
  5. Use AWS Volume Cached Gateway
85
Q

You are the solution architect for a pharmaceutical company which has been using a client application to manage their on-premises data backup and archival. The application uses the iSCSI protocol to transfer data between the application and on-premises storage. The on-premises storage currently stores TBs of data and is nearing capacity. The company doesn’t want to invest in expanding the on-premises storage capacity. Which AWS service should the company leverage so that there is minimal or no change to the existing backup & archiving application, while low latency is provided for frequently used data?

  1. Use AWS Tape Gateway.
  2. Use AWS Volume Storage Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
A
  1. Use AWS Tape Gateway.
  2. Use AWS Volume Storage Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
86
Q

Which S3 storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead?

  1. INTELLIGENT_TIERING
  2. STANDARD
  3. STANDARD-IA
  4. ONEZONE_IA
A
  1. INTELLIGENT_TIERING
  2. STANDARD
  3. STANDARD-IA
  4. ONEZONE_IA
87
Q

You are the solution architect for a pharmaceutical company which has been using a client application to manage their on-premises data backup and archival. The application uses the iSCSI protocol to transfer data between the application and on-premises storage. The on-premises storage currently stores TBs of data and is nearing capacity. The company doesn’t want to invest in expanding the on-premises storage capacity. Which AWS service should the company leverage so that there is minimal or no change to the existing backup & archiving application, while low latency is provided for all data, using the cloud as secondary storage?

  1. Use AWS Tape Gateway.
  2. Use AWS Volume Stored Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
A
  1. Use AWS Tape Gateway.
  2. Use AWS Volume Stored Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
88
Q

Suppose that for an S3 bucket you have created a lifecycle rule and specified a date-based Expiration action to delete all objects. Select three correct statements from the following.

  1. On the specified date, S3 expires all the qualified objects in the bucket.
  2. S3 will expire subsequent new objects created in the bucket.
  3. S3 continues to apply the date-based action even after the date has passed, as long as the rule status is Enabled.
  4. S3 will apply the date-based rule only for that day on the existing and new objects created till 11:59:59 PM.
A
  1. On the specified date, S3 expires all the qualified objects in the bucket.
  2. S3 will expire subsequent new objects created in the bucket.
  3. S3 continues to apply the date-based action even after the date has passed, as long as the rule status is Enabled.
  4. S3 will apply the date-based rule only for that day on the existing and new objects created till 11:59:59 PM.
89
Q

Which S3 storage classes are designed for low-cost data archiving? Choose 2.

  1. STANDARD_IA
  2. ONEZONE_IA
  3. GLACIER
  4. DEEP_ARCHIVE
A
  1. STANDARD_IA
  2. ONEZONE_IA
  3. GLACIER
  4. DEEP_ARCHIVE
90
Q

You have created a static blog website using S3. The name of the bucket is ‘mycloudblogs.com’, created in the us-west-2 region. The website is available at the following Amazon S3 website endpoint:

  • http://mycloudblogs.com.s3-website-us-west-2.amazonaws.com/

Your website also has JavaScript on the webpages stored in this bucket, which makes authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket:

  • mycloudblogs.com.s3-us-west-2.amazonaws.com

You have also created an alias record for mycloudblogs.com in Route 53 so that your users can access the website by using the URL http://mycloudblogs.com. When you tested the website by invoking the website endpoint URL in your browser, you got the following error: ‘No ‘Access-Control-Allow-Origin’ header is present on the requested resource’. What could be the reason?

  1. You need to pass a unique header value from the browser to Amazon S3 for every request.
  2. You need to pass a unique header value from Amazon S3 to the browser for every request.
  3. You need to configure the CORS settings for your bucket in the Amazon S3 console.
  4. You need to configure the CORS settings for your bucket in the Route 53 record.
A
  1. You need to pass a unique header value from the browser to Amazon S3 for every request.
  2. You need to pass a unique header value from Amazon S3 to the browser for every request.
  3. You need to configure the CORS settings for your bucket in the Amazon S3 console.
  4. You need to configure the CORS settings for your bucket in the Route 53 record.
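A boto3 sketch of option 3, allowing the website origin to make GET and PUT requests against the bucket’s API endpoint (the exact origins, methods, and headers shown are assumptions):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_cors(
      Bucket="mycloudblogs.com",
      CORSConfiguration={
          "CORSRules": [{
              # Allow the website origin to call the bucket's API endpoint
              "AllowedOrigins": ["http://mycloudblogs.com"],
              "AllowedMethods": ["GET", "PUT"],
              "AllowedHeaders": ["*"],
              "MaxAgeSeconds": 3000,
          }],
      },
  )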
91
Q

You are the solution architect for a financial services company which has been using a client application to manage their on-premises data backup and archival. The application uses the NFS or SMB protocol to transfer data between the application and on-premises storage. The on-premises storage currently stores TBs of data and is nearing capacity. The company doesn’t want to invest in expanding the on-premises storage capacity. Which AWS service should the company leverage so that there is minimal or no change to the existing backup & archiving application, while low latency is provided for frequently used data?

  1. Use AWS Tape Gateway.
  2. Use Volume Storage Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
A
  1. Use AWS Tape Gateway.
  2. Use Volume Storage Gateway.
  3. Use AWS File Gateway.
  4. Use AWS Volume Cached Gateway
92
Q

A company wants to use S3 to store invoices paid by its customers. These paid invoices are accessed for 30 days by various departments, from finance, sales, and department heads to customer representatives. Invoices paid more than 30 days ago are infrequently accessed, and only by the accounting department for auditing purposes. After the financial year these invoices are rarely accessed by anyone, and even when they are, fast retrieval is not a consideration. How should the company’s solution architect plan on using the different storage tiers in the most cost-effective way?

  1. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to ONEZONE_IA after 30 days and to GLACIER after the financial year is over.
  2. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to GLACIER after the financial year is over.
  3. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to DEEP_ARCHIVE after the financial year is over.
  4. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to DEEP_ARCHIVE after the financial year is over.
A
  1. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to ONEZONE_IA after 30 days and to GLACIER after the financial year is over.
  2. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to GLACIER after the financial year is over.
  3. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to DEEP_ARCHIVE after the financial year is over.
  4. Use the STANDARD tier for storing paid invoices for the first 30 days. Configure a lifecycle rule to move the invoices to STANDARD_IA after 30 days and to DEEP_ARCHIVE after the financial year is over.
93
Q

Which of the following two statements are correct about data archiving storage classes in S3? Choose 2.

  1. GLACIER—used for archives where portions of the data might need to be retrieved in minutes.
  2. DEEP_ARCHIVE —used for archives where portions of the data might need to be retrieved in minutes.
  3. GLACIER —Use for archiving data that rarely needs to be accessed.
  4. DEEP_ARCHIVE—Use for archiving data that rarely needs to be accessed.
A
  1. GLACIER—used for archives where portions of the data might need to be retrieved in minutes.
  2. DEEP_ARCHIVE —used for archives where portions of the data might need to be retrieved in minutes.
  3. GLACIER —Use for archiving data that rarely needs to be accessed.
  4. DEEP_ARCHIVE—Use for archiving data that rarely needs to be accessed.
94
Q

Which of the following two statements are correct about data archiving storage classes in S3? Choose 2.

  1. Objects that are stored in the GLACIER or DEEP_ARCHIVE storage classes are available in real time.
  2. You must first initiate a restore request and then a temporary copy of the object is available immediately for the duration that you specify in the request.
  3. Objects that are stored in the GLACIER or DEEP_ARCHIVE storage classes are not available in real time.
  4. You must first initiate a restore request and then wait until a temporary copy of the object is available for the duration that you specify in the request.
A
  1. Objects that are stored in the GLACIER or DEEP_ARCHIVE storage classes are available in real time.
  2. You must first initiate a restore request and then a temporary copy of the object is available immediately for the duration that you specify in the request.
  3. Objects that are stored in the GLACIER or DEEP_ARCHIVE storage classes are not available in real time.
  4. You must first initiate a restore request and then wait until a temporary copy of the object is available for the duration that you specify in the request.
95
Q

Which of the following statements are true for S3 lifecycle management configuration? Choose 2.

  1. The configuration rules do not apply to existing objects.
  2. The configuration rules apply to existing objects.
  3. The configuration rules apply to objects that you add later.
  4. The configuration rules apply only to existing objects and not to objects that you add later.
A
  1. The configuration rules do not apply to existing objects.
  2. The configuration rules apply to existing objects.
  3. The configuration rules apply to objects that you add later.
  4. The configuration rules apply only to existing objects and not to objects that you add later.
96
Q

Choose two correct statements about S3 multipart upload.

  1. Part Size can be from 5 MB to 5 GB
  2. Part Size can be from 50 MB to 5 GB
  3. Last part can be < 5 MB
  4. Last part can be < 5 KB
A
  1. Part Size can be from 5 MB to 5 GB
  2. Part Size can be from 50 MB to 5 GB
  3. Last part can be < 5 MB
  4. Last part can be < 5 KB
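A minimal boto3 multipart-upload sketch illustrating these limits (the file path and bucket name are hypothetical; real code would add error handling and call abort_multipart_upload on failure):

  import boto3

  s3 = boto3.client("s3")
  bucket, key = "my-bucket", "big-file.bin"
  PART_SIZE = 5 * 1024 * 1024  # 5 MB minimum for every part except the last

  mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
  parts = []
  with open("big-file.bin", "rb") as f:
      part_number = 1
      while True:
          chunk = f.read(PART_SIZE)
          if not chunk:
              break
          resp = s3.upload_part(
              Bucket=bucket, Key=key, PartNumber=part_number,
              UploadId=mpu["UploadId"], Body=chunk,
          )
          parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
          part_number += 1

  # The final part may be smaller than 5 MB
  s3.complete_multipart_upload(
      Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
      MultipartUpload={"Parts": parts},
  )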
97
Q

You have a version-enabled bucket with two versions of an object (version IDs 111111 and 222222). What will happen when you invoke the delete API specifying only the object’s key, and not the version ID?

  1. S3 will return an error.
  2. S3 deletes all the versions in the bucket.
  3. S3 creates a delete marker and returns its version ID in the response.
  4. S3 will delete the current version, ID = 222222
A
  1. S3 will return an error.
  2. S3 deletes all the versions in the bucket.
  3. S3 creates a delete marker and returns its version ID in the response.
  4. S3 will delete the current version, ID = 222222
98
Q

You are a solution architect for a stock trading web application provider. Financial regulation mandates that they keep the trading data for five years. From analysis of past internal and customer access behavior, you are certain that data more than two years old is unlikely to be accessed, while data less than two years old but more than six months old is infrequently accessed. Any data less than six months old needs faster access. Currently 150 TB of data is stored in on-premises data storage, which the company is planning to move to AWS cloud storage to save cost. Which is the most cost-effective option?

  1. Store the data on Amazon S3 with a lifecycle policy that changes the storage class from Standard to Standard-IA in six months, from Standard-IA to Glacier in 1.5 years, and expiration in 3.5 years.
  2. Store the data on Amazon S3 with a lifecycle policy that changes the storage class from Standard to Standard-IA in six months, from Standard-IA to Glacier in two years, and expiration in five years.
  3. Store all the data in a Redshift data warehouse.
  4. Store all the data in an EBS general purpose volume attached to the cheapest EC2 instance.
A
  1. Store the data on Amazon S3 with a lifecycle policy that changes the storage class from Standard to Standard-IA in six months, from Standard-IA to Glacier in 1.5 years, and expiration in 3.5 years.
  2. Store the data on Amazon S3 with a lifecycle policy that changes the storage class from Standard to Standard-IA in six months, from Standard-IA to Glacier in two years, and expiration in five years.
  3. Store all the data in a Redshift data warehouse.
  4. Store all the data in an EBS general purpose volume attached to the cheapest EC2 instance.
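A boto3 sketch of the lifecycle policy described in option 2 (the bucket name and rule ID are hypothetical; the day counts approximate six months, two years, and five years):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_lifecycle_configuration(
      Bucket="trading-data",
      LifecycleConfiguration={
          "Rules": [{
              "ID": "retain-five-years",
              "Status": "Enabled",
              "Filter": {"Prefix": ""},  # apply to every object
              "Transitions": [
                  {"Days": 180, "StorageClass": "STANDARD_IA"},  # ~6 months
                  {"Days": 730, "StorageClass": "GLACIER"},      # ~2 years
              ],
              "Expiration": {"Days": 1825},                      # ~5 years
          }],
      },
  )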
99
Q

You have a bucket with the following object versions after you have placed a delete marker. Choose two correct statements from below for when you invoke the delete API specifying both the key and a version ID.

  1. Delete API invoked with Key=photo.gif and ID=111111, will return an error.
  2. Delete API invoked with Key=photo.gif and ID=456789, will return an error.
  3. Delete API invoked with Key=photo.gif and ID=111111, will delete that version.
  4. Delete API invoked with Key=photo.gif and ID=456789, will delete the delete marker. This makes the object reappear in your bucket.
A
  1. Delete API invoked with Key=photo.gif and ID=111111, will return an error.
  2. Delete API invoked with Key=photo.gif and ID=456789, will return an error.
  3. Delete API invoked with Key=photo.gif and ID=111111, will delete that version.
  4. Delete API invoked with Key=photo.gif and ID=456789, will delete the delete marker. This makes the object reappear in your bucket.
100
Q

You’re a developer at a large retailer. You need to extract and analyze the weekly sales data from a single store, but the data for all 200 stores is saved in a new GZIP-ed CSV every day. Which Amazon S3 feature will you leverage to filter this data to a single store, thus reducing the amount of data that Amazon S3 transfers, and also reducing the cost and latency of retrieving this data?

  1. S3 SQL
  2. S3 Select
  3. S3 Download
  4. S3 Filter
A
  1. S3 SQL
  2. S3 Select
  3. S3 Download
  4. S3 Filter
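A boto3 sketch of S3 Select against the daily GZIP-ed CSV (the bucket, object key, column name, and store ID are all assumptions):

  import boto3

  s3 = boto3.client("s3")

  resp = s3.select_object_content(
      Bucket="retail-sales",
      Key="2020/01/15/sales.csv.gz",
      ExpressionType="SQL",
      # Filter server-side so only one store's rows are transferred
      Expression="SELECT * FROM s3object s WHERE s.store_id = '42'",
      InputSerialization={
          "CSV": {"FileHeaderInfo": "USE"},
          "CompressionType": "GZIP",
      },
      OutputSerialization={"CSV": {}},
  )

  # The response is an event stream; Records events carry the filtered bytes
  for event in resp["Payload"]:
      if "Records" in event:
          print(event["Records"]["Payload"].decode("utf-8"), end="")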
101
Q

A solution architect is not sure whether to store the data associated with an Amazon EC2 instance in an instance store or in an attached Amazon Elastic Block Store (Amazon EBS) volume. What should be the criteria for choosing the storage type? Choose 2.

  1. The Amazon Elastic Block Store (Amazon EBS) volume is ideal for temporary storage
  2. The instance store is ideal for temporary storage
  3. To retain data longer, or to encrypt the data, use Amazon Elastic Block Store (Amazon EBS) volumes
  4. To retain data longer, or to encrypt the data, use the instance store
A
  1. The Amazon Elastic Block Store (Amazon EBS) volume is ideal for temporary storage
  2. The instance store is ideal for temporary storage
  3. To retain data longer, or to encrypt the data, use Amazon Elastic Block Store (Amazon EBS) volumes
  4. To retain data longer, or to encrypt the data, use the instance store
102
Q

What are the two different EBS volume types optimized for? Choose 2.

  1. SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
  2. HDD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
  3. SSD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS
  4. HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS
A
  1. SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
  2. HDD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
  3. SSD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS
  4. HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS
103
Q

What are the main characteristics of the two SSD-based EBS volume types? Choose 2.

  1. General Purpose SSD volume balances price and performance for a wide variety of workloads
  2. Provisioned IOPS SSD is the highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
  3. Provisioned IOPS SSD volume balances price and performance for a wide variety of workloads
  4. General Purpose is the highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
A
  1. General Purpose SSD volume balances price and performance for a wide variety of workloads
  2. Provisioned IOPS SSD is the highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
  3. Provisioned IOPS SSD volume balances price and performance for a wide variety of workloads
  4. General Purpose is the highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
104
Q

Which of the following statements are true about Amazon EBS? Choose 4.

  1. Provides block level storage volumes for use with EC2 instances.
  2. You can mount multiple volumes on the same instance, and each volume can also be attached to more than one instance at a time.
  3. You can mount multiple volumes on the same instance, but each volume can be attached to only one instance at a time.
  4. You can create a file system on top of these volumes, or use them in any way you would use a block device (like a hard drive).
  5. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.
A
  1. Provides block level storage volumes for use with EC2 instances.
  2. You can mount multiple volumes on the same instance, and each volume can also be attached to more than one instance at a time.
  3. You can mount multiple volumes on the same instance, but each volume can be attached to only one instance at a time.
  4. You can create a file system on top of these volumes, or use them in any way you would use a block device (like a hard drive).
  5. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.
105
Q

Which Amazon EBS volume type will you use for critical business applications that require sustained IOPS performance and low-latency or high-throughput workloads?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
106
Q

Which Amazon EBS volume type will you use for large database workloads, such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, or Oracle?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
107
Q

Which Amazon EBS volume type will you use for

  • Streaming workloads requiring consistent, fast throughput at a low price
  • Big data or Data warehouses
  • Log processing
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
108
Q

You have a corporate intranet web application that requires 500 GB of block storage at 1,000 IOPS throughout the day, apart from 40 minutes at night when you run a scheduled batch process to generate reports, during which you require 3,000 IOPS. Which Amazon EBS volume will be cost-effective?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
109
Q

You want to create a test environment which will be a replica of the production environment. To achieve this replication you are planning to use an EBS snapshot of the production EC2 instance to create the new test environment volume. Which of the following statements are correct? Choose 2.

  1. New volumes created from existing EBS snapshots load lazily in the background.
  2. New volumes created from existing EBS snapshots don’t load lazily in the background.
  3. After the test environment EBS volume is created from the production EBS snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to the test EBS volume before the attached test EC2 instance can start accessing the volume and all its data.
  4. After the test environment EBS volume is created from the production EBS snapshot, you have to wait for all of the data to transfer from Amazon S3 to the test EBS volume before the attached test EC2 instance can start accessing the volume and all its data.
A
  1. New volumes created from existing EBS snapshots load lazily in the background.
  2. New volumes created from existing EBS snapshots don’t load lazily in the background.
  3. After the test environment EBS volume is created from the production EBS snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to the test EBS volume before the attached test EC2 instance can start accessing the volume and all its data.
  4. After the test environment EBS volume is created from the production EBS snapshot, you have to wait for all of the data to transfer from Amazon S3 to the test EBS volume before the attached test EC2 instance can start accessing the volume and all its data.
110
Q

Which of the following are best practices for getting optimal performance from your EBS volumes? Choose 3.

  1. Use EBS-Optimized Instances
  2. Use Compute intensive Instance
  3. Be Aware of the Performance Penalty When Initializing Volumes from Snapshots
  4. Use RAID 0 to Maximize Utilization of Instance Resources
A
  1. Use EBS-Optimized Instances
  2. Use Compute intensive Instance
  3. Be Aware of the Performance Penalty When Initializing Volumes from Snapshots
  4. Use RAID 0 to Maximize Utilization of Instance Resources
111
Q

Which EBS RAID configuration will you use when fault tolerance is more important than I/O performance?

  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
A
  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
112
Q

Which EBS RAID configuration will you use when I/O performance is more important than fault tolerance; for example, as in a heavily used database (where data replication is already set up separately)?

  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
A
  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
113
Q

You are evaluating RAID 0 and RAID 1 options for EBS volumes with two 500 GiB Amazon EBS io1 volumes with 4,000 provisioned IOPS and 500 MiB/s of throughput each. Which of the following two statements are correct?

  1. You can create a 1000 GiB RAID 1 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput.
  2. You can create a 500 GiB RAID 0 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput
  3. You can create a 1000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput.
  4. You can create a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput.
A
  1. You can create a 1000 GiB RAID 1 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput.
  2. You can create a 500 GiB RAID 0 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput
  3. You can create a 1000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 1,000 MiB/s of throughput.
  4. You can create a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 500 MiB/s of throughput.
114
Q

Given the volume configuration, which two statements are correct about how I/O characteristics drive the performance behavior of your EBS volumes?

  1. General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)—deliver consistent performance whether an I/O operation is large or sequential.
  2. General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)—deliver consistent performance whether an I/O operation is random or sequential.
  3. HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are large and sequential.
  4. HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are random and sequential.
A
  1. General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)—deliver consistent performance whether an I/O operation is large or sequential.
  2. General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)—deliver consistent performance whether an I/O operation is random or sequential.
  3. HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are large and sequential.
  4. HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are random and sequential.
115
Q

You have recently launched a web application. The backend database is on a high-end compute EC2 instance with a 500 GB Provisioned IOPS SSD (io1) EBS volume. On analyzing the CloudWatch metrics for EBS, you notice that write operations need performance improvement. Which of the following is the best way to improve the write performance of the EBS-backed database? Choose 2.

  1. Have two 500 GiB Amazon EBS io1 volumes in RAID 1 configuration.
  2. Use EC2 instance with enhanced networking and put in placement group.
  3. Have two 500 GiB Amazon EBS io1 volumes in RAID 0 configuration.
  4. Use EBS optimized EC2 instance.
A
  1. Have two 500 GiB Amazon EBS io1 volumes in RAID 1 configuration.
  2. Use EC2 instance with enhanced networking and put in placement group.
  3. Have two 500 GiB Amazon EBS io1 volumes in RAID 0 configuration.
  4. Use EBS optimized EC2 instance.
116
Q

Which RAID configurations are recommended for Amazon EBS volumes? Choose 2.

  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
A
  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
117
Q

How can you create a new, empty EBS volume with encryption? Choose 2.

  1. By enabling encryption by default, the volume will be automatically encrypted.
  2. Amazon EBS doesn’t support encryption of new empty EBS volume other than default key encryption.
  3. Amazon EBS supports encryption of EBS snapshots only and not of new volume.
  4. By enabling encryption for the specific volume creation operation, you can specify the CMK to be used to encrypt the volume.
A
  1. By enabling encryption by default, the volume will be automatically encrypted.
  2. Amazon EBS doesn’t support encryption of new empty EBS volume other than default key encryption.
  3. Amazon EBS supports encryption of EBS snapshots only and not of new volume.
  4. By enabling encryption for the specific volume creation operation, you can specify the CMK to be used to encrypt the volume.
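A boto3 sketch covering both correct routes: per-volume encryption with a specific CMK (option 4) and account-level encryption by default (option 1). The Availability Zone, size, and key alias are assumptions:

  import boto3

  ec2 = boto3.client("ec2")

  # Option 4: request encryption for this volume and name the CMK to use
  ec2.create_volume(
      AvailabilityZone="us-west-2a",
      Size=100,                     # GiB
      VolumeType="gp2",
      Encrypted=True,
      KmsKeyId="alias/my-ebs-cmk",  # omit to use the default aws/ebs key
  )

  # Option 1: enable encryption by default for the account in this Region,
  # after which new volumes are encrypted automatically
  ec2.enable_ebs_encryption_by_default()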
118
Q

Which of the following statements are correct about Amazon EBS snapshots encryption? Choose 3.

  1. Snapshots of encrypted volumes are automatically encrypted.
  2. Volumes that you create from encrypted snapshots are automatically encrypted.
  3. Snapshots of encrypted volumes are not automatically encrypted.
  4. Volumes that you create from encrypted snapshots are not automatically encrypted.
  5. Volumes that you create from an unencrypted snapshot that you own or have access to can be encrypted on-the-fly.
  6. Volumes that you create from an unencrypted snapshot that you own or have access to cannot be encrypted on-the-fly.
A
  1. Snapshots of encrypted volumes are automatically encrypted.
  2. Volumes that you create from encrypted snapshots are automatically encrypted.
  3. Snapshots of encrypted volumes are not automatically encrypted.
  4. Volumes that you create from encrypted snapshots are not automatically encrypted.
  5. Volumes that you create from an unencrypted snapshot that you own or have access to can be encrypted on-the-fly.
  6. Volumes that you create from an unencrypted snapshot that you own or have access to cannot be encrypted on-the-fly.
119
Q

Which of the following statements are correct about Amazon EBS encryption? Choose 2.

  1. When you copy an unencrypted snapshot that you own, you cannot encrypt it during the copy process.
  2. When you copy an encrypted snapshot that you own or have access to, you cannot reencrypt it with a different key during the copy process.
  3. When you copy an unencrypted snapshot that you own, you can encrypt it during the copy process.
  4. When you copy an encrypted snapshot that you own or have access to, you can reencrypt it with a different key during the copy process.
A
  1. When you copy an unencrypted snapshot that you own, you cannot encrypt it during the copy process.
  2. When you copy an encrypted snapshot that you own or have access to, you cannot reencrypt it with a different key during the copy process.
  3. When you copy an unencrypted snapshot that you own, you can encrypt it during the copy process.
  4. When you copy an encrypted snapshot that you own or have access to, you can reencrypt it with a different key during the copy process.
120
Q

When you create an encrypted EBS volume and attach it to a supported instance type, which types of data are encrypted?

  1. Data at rest inside the volume
  2. All data moving between the volume and the instance
  3. All snapshots created from the volume
  4. All volumes created from those snapshots
  5. All of the above
A
  1. Data at rest inside the volume
  2. All data moving between the volume and the instance
  3. All snapshots created from the volume
  4. All volumes created from those snapshots
  5. All of the above
121
Q

Your company is using EBS for various workloads hosted in AWS. As per new company policies and regulatory requirements for audits and backups, you have been instructed to automate the creation, retention, and deletion of the Amazon Elastic Block Store (Amazon EBS) snapshots used for backing up Amazon EBS volumes wherever possible. How can you do that?

  1. Use Amazon Data Lifecycle Manager (DLM) to create lifecycle policies to automate EBS snapshot management.
  2. Use AWS Glue to create lifecycle policies to automate EBS snapshot management.
  3. Use AWS Data Pipeline to create lifecycle policies to automate EBS snapshot management.
  4. Create a scheduled job which runs twice a day across the workload systems.
A
  1. Use Amazon Data Lifecycle Manager (DLM) to create lifecycle policies to automate EBS snapshot management.
  2. Use AWS Glue to create lifecycle policies to automate EBS snapshot management.
  3. Use AWS Data Pipeline to create lifecycle policies to automate EBS snapshot management.
  4. Create a scheduled job which runs twice a day across the workload systems.
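A boto3 sketch of option 1: a DLM policy that snapshots every volume tagged Backup=true daily at 23:00 and retains seven snapshots (the account ID, role ARN, and tag are assumptions):

  import boto3

  dlm = boto3.client("dlm")

  dlm.create_lifecycle_policy(
      ExecutionRoleArn="arn:aws:iam::111111111111:role/AWSDataLifecycleManagerDefaultRole",
      Description="Daily EBS snapshots with 7-day retention",
      State="ENABLED",
      PolicyDetails={
          "ResourceTypes": ["VOLUME"],
          # Target every EBS volume carrying this tag
          "TargetTags": [{"Key": "Backup", "Value": "true"}],
          "Schedules": [{
              "Name": "DailySnapshots",
              "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                             "Times": ["23:00"]},
              "RetainRule": {"Count": 7},  # keep the seven most recent
          }],
      },
  )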
122
Q

With changes in security compliance in your organization, all data at rest has to be encrypted. To comply with this policy you have to encrypt all the unencrypted EBS volumes attached to EC2 instances running your corporate applications. How will you convert an unencrypted EBS volume to an encrypted one? Choose 2.

  1. An unencrypted EBS volume cannot be converted to an encrypted one in any way.
  2. Create an EBS snapshot of the volume you want to encrypt. Copy the EBS snapshot, encrypting the copy in the process.
  3. Create a new EBS volume from your new encrypted EBS snapshot. The new EBS volume will be encrypted. Detach the original EBS volume and attach your new encrypted EBS volume, making sure to match the device name.
  4. Create a new, empty EBS volume with encryption by default enabled so the volume is automatically encrypted. Detach the original EBS volume and attach your new encrypted EBS volume, making sure to match the device name.
A
  1. An unencrypted EBS volume cannot be converted to an encrypted one in any way.
  2. Create an EBS snapshot of the volume you want to encrypt. Copy the EBS snapshot, encrypting the copy in the process.
  3. Create a new EBS volume from your new encrypted EBS snapshot. The new EBS volume will be encrypted. Detach the original EBS volume and attach your new encrypted EBS volume, making sure to match the device name.
  4. Create a new, empty EBS volume with encryption by default enabled so the volume is automatically encrypted. Detach the original EBS volume and attach your new encrypted EBS volume, making sure to match the device name.
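A boto3 sketch of the snapshot-copy route in options 2 and 3 (the volume ID, Region, and Availability Zone are hypothetical):

  import boto3

  ec2 = boto3.client("ec2", region_name="us-west-2")

  # 1. Snapshot the unencrypted volume
  snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
  ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

  # 2. Copy the snapshot, encrypting the copy in the process
  copy = ec2.copy_snapshot(
      SourceRegion="us-west-2",
      SourceSnapshotId=snap["SnapshotId"],
      Encrypted=True,
  )
  ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

  # 3. Create an encrypted volume from the encrypted copy, then detach the
  #    original volume and attach this one under the same device name
  ec2.create_volume(
      SnapshotId=copy["SnapshotId"],
      AvailabilityZone="us-west-2a",
  )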
123
Q

You have used Amazon Data Lifecycle Manager to automate the creation of snapshots of an EBS volume every day at 11 PM, when the EC2 instance to which it is attached is least used. Which of the following statements are correct for the scenario when EBS snapshot creation is in progress? Choose 2.

  1. An in-progress snapshot is not affected by ongoing reads and writes to the volume.
  2. You cannot use the EBS volume while the snapshot is in progress.
  3. The EBS volume can be used in read-only mode while the snapshot is in progress.
  4. Snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued.
A
  1. An in-progress snapshot is not affected by ongoing reads and writes to the volume.
  2. You cannot use the EBS volume while the snapshot is in progress.
  3. The EBS volume can be used in read-only mode while the snapshot is in progress.
  4. Snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued.
124
Q

With changes in security compliance in your organization, all data at rest and data in transit has to be encrypted. You are releasing a web application to production which has a backend relational database on an EC2 instance with an EBS volume. How can you ensure that the data encryption compliance needs are met in the most efficient way? Choose 3.

  1. Modify the web application business layer program to encrypt/decrypt the data in transit and data at rest.
  2. Use AWS IAM to encrypt/decrypt the data in transit and data at rest.
  3. Configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create.
  4. When you create a new, empty EBS volume, you can encrypt it by enabling encryption for the specific volume creation operation.
  5. Use HTTPS/TLS protocol for communication between user and webserver to encrypt data in transit.
A
  1. Modify the web application business layer program to encrypt/decrypt the data in transit and data at rest.
  2. Use AWS IAM to encrypt/decrypt the data in transit and data at rest.
  3. Configure your AWS account to enforce the encryption of the new EBS volumes and snapshot copies that you create.
  4. When you create a new, empty EBS volume, you can encrypt it by enabling encryption for the specific volume creation operation.
  5. Use HTTPS/TLS protocol for communication between user and webserver to encrypt data in transit.
125
Q

You are migrating to AWS an on-premises legacy application which stores state in files on disk. The current storage size is 2 PB. For the application layer you will be using a fleet of EC2 instances in an auto scaling group which will access these files concurrently. You are evaluating different storage options in AWS based on the following criteria:

  • Should be able to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
  • Ability to enable lifecycle management on your file system; files not accessed according to the lifecycle policy should be automatically and transparently moved into a lower-cost storage class
  • Parallel shared access to multiple Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

Which storage options you should choose?

  1. S3
  2. EFS
  3. EBS
  4. RDS
A
  1. S3
  2. EFS
  3. EBS
  4. RDS
126
Q

What are the main characteristics of the two HDD-based EBS volume types? (Choose 2)

  1. Throughput Optimized HDD is a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
  2. Cold HDD is the lowest-cost HDD volume designed for less frequently accessed workloads
  3. Throughput Optimized is the lowest-cost HDD volume designed for less frequently accessed workloads
  4. Cold HDD is a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
A
  1. Throughput Optimized HDD is a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
  2. Cold HDD is the lowest-cost HDD volume designed for less frequently accessed workloads
  3. Throughput Optimized is the lowest-cost HDD volume designed for less frequently accessed workloads
  4. Cold HDD is a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads
127
Q

You have attached an EBS volume to an EC2 instance in the us-west-1a AZ. Is an EBS volume fault tolerant to a full AZ failure?

  1. Yes, an EBS volume is automatically replicated to three AZs in the region in which it is created.
  2. Yes, if Multi-AZ is enabled for the EBS volume.
  3. No, an EBS volume is automatically replicated only within the zone in which it is created.
  4. Yes, if the EC2 instance auto scales to multiple AZs.
A
  1. Yes, an EBS volume is automatically replicated to three AZs in the region in which it is created.
  2. Yes, if Multi-AZ is enabled for the EBS volume.
  3. No, an EBS volume is automatically replicated only within the zone in which it is created.
  4. Yes, if the EC2 instance auto scales to multiple AZs.
128
Q

Which Amazon EBS volume type will you use that balances price and performance for a wide variety of workloads?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
129
Q

You run an online photo editing website for two types of members: free members and fee-paying premium members. Each set of editing requests and photos is placed asynchronously in an SQS queue, which is then processed by worker EC2 instances in an auto scaling group. The architecture has two SQS queues: one for premium members’ editing tasks and one for free members’. You have on-demand EC2 instances in an auto scaling group to process the messages in the premium members’ queue, and spot instances for processing messages from the free members’ queue. You want to use S3 for storing the photo image files. Which of the following is the most cost-optimized way of leveraging S3 without compromising the best-performing service for premium members and free members?

  1. Use S3-Standard-IA for premium members and S3-One Zone-IA for free members.
  2. Use S3-Standard for free members and S3-One Zone-IA for premium members.
  3. Use S3-Standard for premium members and S3-Glacier for free members.
  4. Use S3-Standard for premium members and S3-One Zone-IA for free members.
A
  1. Use S3-Standard-IA for premium members and S3-One Zone-IA for free members.
  2. Use S3-Standard for free members and S3-One Zone-IA for premium members.
  3. Use S3-Standard for premium members and S3-Glacier for free members.
  4. Use S3-Standard for premium members and S3-One Zone-IA for free members.
130
Q

Which Amazon EBS volume type is a low-cost HDD volume designed for frequently accessed, throughput-intensive workloads?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
131
Q

Amazon S3 offers eventual consistency for overwrite PUTs and DELETEs in all Regions. Which of the following operations can result in stale data? Choose 3.

  1. GET after a DELETE
  2. LIST after a DELETE
  3. GET after PUT to an existing object
  4. GET after PUT of new object
A
  1. GET after a DELETE
  2. LIST after a DELETE
  3. GET after PUT to an existing object
  4. GET after PUT of new object
132
Q

Which Amazon EBS volume type is the lowest-cost HDD volume designed for less frequently accessed workloads?

  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
A
  1. General Purpose SSD (gp2)
  2. Provisioned IOPS SSD (io1)
  3. Throughput Optimized HDD (st1)
  4. Cold HDD (sc1)
133
Q

You are the solution architect for a law firm that uses S3 to store numerous documents related to cases handled by its lawyers. Recently one of the employees inadvertently deleted a few important documents stored in a bucket. Luckily another employee had a local copy on his computer and you were able to restore the documents to the bucket. You have been asked to make configuration changes in S3 so that such unintentional mistakes can be avoided, and so that even if they happen there is an easier way to recover. Choose 3.

  1. Set S3 object lock
  2. Enable S3 cross region replication
  3. Enable bucket versioning
  4. Enable MFA delete on a bucket
A
  1. Set S3 object lock
  2. Enable S3 cross region replication
  3. Enable bucket versioning
  4. Enable MFA delete on a bucket
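
A minimal boto3 sketch of the versioning-plus-MFA-Delete setup from this card (the bucket name and MFA device ARN are hypothetical). Note that MFA Delete can only be enabled by the bucket owner using root credentials with an MFA device:

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning so deleted or overwritten objects can be recovered,
# and require MFA for permanent deletion of object versions.
s3.put_bucket_versioning(
    Bucket="law-firm-case-docs",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # MFA is "<device-arn> <current-code>"; required when changing MFA Delete.
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```
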
134
Q

You want to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes. How can you achieve this?

  1. Write a java or .net program which will run at a scheduled time.
  2. Use CloudTrail, CloudWatch, and CloudFormation
  3. Use EBS Automate
  4. Use Amazon Data Lifecycle Manager (Amazon DLM)
A
  1. Write a java or .net program which will run at a scheduled time.
  2. Use CloudTrail, CloudWatch, and CloudFormation
  3. Use EBS Automate
  4. Use Amazon Data Lifecycle Manager (Amazon DLM)
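
As a sketch of what such automation looks like, the following uses the Amazon DLM API via boto3 to snapshot all volumes tagged Backup=true every 24 hours and retain the last 7 snapshots (the role ARN and tag are hypothetical):

```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        # Snapshot every volume carrying this tag.
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},  # keep the 7 most recent snapshots
        }],
    },
)
```
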
135
Q

Which S3 feature evaluates your bucket access policies and enables you to discover and swiftly remediate buckets with potentially unintended access?

  1. Access Analyzer for S3
  2. Policy Analyzer for S3
  3. Bucket Analyzer for S3
  4. Amazon Inspector
A
  1. Access Analyzer for S3
  2. Policy Analyzer for S3
  3. Bucket Analyzer for S3
  4. Amazon Inspector
136
Q

What are available retrieval options when restoring an archived object from S3? Choose 3.

  1. Expedited
  2. Standard
  3. Urgent
  4. Bulk
  5. Immediate
A
  1. Expedited
  2. Standard
  3. Urgent
  4. Bulk
  5. Immediate
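
A short boto3 sketch of initiating a restore with one of these retrieval tiers (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Restore an archived object for 10 days using the Expedited tier;
# "Standard" and "Bulk" are the other valid retrieval tiers.
s3.restore_object(
    Bucket="trading-archive",
    Key="records/2017/trades.csv",
    RestoreRequest={"Days": 10, "GlacierJobParameters": {"Tier": "Expedited"}},
)
```
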
137
Q

You want to host your static website on Amazon S3. You registered the domain example.com in Route 53, and you want requests for http://www.example.com and http://example.com to be served from your Amazon S3 content. Which of the following steps will you take to achieve this? Choose 3.

  1. Create only one bucket example.com to host your content. Upload your index document and optional website content to your bucket.
  2. Create two buckets. You will host your content out of the root domain bucket (example.com), and you will create a redirect request for the subdomain bucket (www.example.com). Upload your index document and optional website content to your root domain bucket.
  3. Make the example.com bucket publicly readable. Disable block public access for the bucket and write a bucket policy that allows public read access.
  4. Create alias records in the hosted zone for your domain that map example.com and www.example.com. The alias records use the Amazon S3 website endpoints.
A
  1. Create only one bucket example.com to host your content. Upload your index document and optional website content to your bucket.
  2. Create two buckets. You will host your content out of the root domain bucket (example.com), and you will create a redirect request for the subdomain bucket (www.example.com). Upload your index document and optional website content to your root domain bucket.
  3. Make the example.com bucket publicly readable. Disable block public access for the bucket and write a bucket policy that allows public read access.
  4. Create alias records in the hosted zone for your domain that map example.com and www.example.com. The alias records use the Amazon S3 website endpoints.
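
A minimal boto3 sketch of the two website configurations (content bucket plus redirect bucket), assuming the buckets already exist:

```python
import boto3

s3 = boto3.client("s3")

# Root-domain bucket serves the website content.
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Subdomain bucket redirects every request to the root domain.
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": "example.com", "Protocol": "http"},
    },
)
```
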
138
Q

You are the solution architect for a financial services company that needs to store a set of trading records for 7 years to meet regulatory compliance requirements. The records should be immutable during this period. You also never want any user, including the root user in your AWS account, to be able to delete the objects during the retention period. Which S3 feature should you use to achieve these requirements?

  1. Use Object Locking with retention period of 7 years in compliance mode.
  2. Use Object Locking with retention period of 7 years in governance mode.
  3. Use Object Locking with legal period of 7 years in compliance mode.
  4. Use Object Locking with legal period of 7 years in governance mode.
A
  1. Use Object Locking with retention period of 7 years in compliance mode.
  2. Use Object Locking with retention period of 7 years in governance mode.
  3. Use Object Locking with legal period of 7 years in compliance mode.
  4. Use Object Locking with legal period of 7 years in governance mode.
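
A boto3 sketch of the compliance-mode default retention described above (the bucket name is hypothetical; Object Lock must have been enabled when the bucket was created):

```python
import boto3

s3 = boto3.client("s3")

# Default retention of 7 years in COMPLIANCE mode: no user, including
# the root user, can delete or overwrite locked versions during retention.
s3.put_object_lock_configuration(
    Bucket="trading-records",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```
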
139
Q

For which events can Amazon S3 publish notifications? Choose 5.

  1. New object created events
  2. Object removal events
  3. New bucket created events
  4. Bucket removal events
  5. Restore object events
  6. Reduced Redundancy Storage (RRS) object lost events
  7. Replication events
A
  1. New object created events
  2. Object removal events
  3. New bucket created events
  4. Bucket removal events
  5. Restore object events
  6. Reduced Redundancy Storage (RRS) object lost events
  7. Replication events
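
A minimal boto3 sketch wiring two of these event types to an SNS topic (the bucket name and topic ARN are hypothetical; the topic policy must allow S3 to publish):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [{
            "TopicArn": "arn:aws:sns:us-west-1:123456789012:s3-events",
            # Publish on any object creation or removal.
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
        }],
    },
)
```
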
140
Q

What are the different types of object replication available in S3? Choose 2.

  1. Cross-Region replication (CRR)
  2. Subnet-to-Subnet replication (SSR)
  3. Account-to-Account replication (AAR)
  4. Same-Region replication (SRR)
A
  1. Cross-Region replication (CRR)
  2. Subnet-to-Subnet replication (SSR)
  3. Account-to-Account replication (AAR)
  4. Same-Region replication (SRR)
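
A boto3 sketch of a replication rule (the bucket names and role ARN are hypothetical; versioning must already be enabled on both buckets). The same call covers CRR and SRR — the difference is simply which Region the destination bucket lives in:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "ReplicateAll",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket",
                "StorageClass": "STANDARD_IA",  # replicate into a cheaper class
            },
        }],
    },
)
```
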
141
Q

Why should you use S3 Replication? Choose 4.

  1. Replicate objects while retaining metadata
  2. Replicate objects into different storage classes
  3. Replicate objects across VPCs
  4. Maintain object copies under different ownership
  5. Replicate objects within 15 minutes
A
  1. Replicate objects while retaining metadata
  2. Replicate objects into different storage classes
  3. Replicate objects across VPCs
  4. Maintain object copies under different ownership
  5. Replicate objects within 15 minutes
142
Q

Which of the following is not a use case for S3 Same Region Replication (SRR)?

  1. Aggregate logs into a single bucket
  2. Production and test accounts that use the same data
  3. Store multiple copies of your data in separate AWS accounts within a certain Region to abide by data sovereignty rules.
  4. Reduce latency of global users.
A
  1. Aggregate logs into a single bucket
  2. Production and test accounts that use the same data
  3. Store multiple copies of your data in separate AWS accounts within a certain Region to abide by data sovereignty rules.
  4. Reduce latency of global users.
143
Q

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. What is replicated by default in S3? Choose 3.

  1. Objects that existed before you added the replication configuration to the bucket.
  2. Objects created after you add a replication configuration.
  3. Unencrypted objects and Objects encrypted at rest under Amazon S3 managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS).
  4. Only objects in the source bucket for which the bucket owner has permissions to read objects and access control lists (ACLs).
  5. Objects created with server-side encryption using customer-provided (SSE-C) encryption keys.
A
  1. Objects that existed before you added the replication configuration to the bucket.
  2. Objects created after you add a replication configuration.
  3. Unencrypted objects and Objects encrypted at rest under Amazon S3 managed keys (SSE-S3) or customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS).
  4. Only objects in the source bucket for which the bucket owner has permissions to read objects and access control lists (ACLs).
  5. Objects created with server-side encryption using customer-provided (SSE-C) encryption keys.
144
Q

Suppose that you configure replication where bucket A is the source and bucket B is the destination. Now suppose that you add another replication configuration where bucket B is the source and bucket C is the destination. Which of the following is correct?

  1. After you configure B as the source bucket, object replication from A to B will stop.
  2. You cannot configure an existing ‘destination’ bucket as ‘source’ bucket.
  3. Objects in bucket B that are replicas of objects in bucket A are replicated to bucket C.
  4. Objects in bucket B that are replicas of objects in bucket A are not replicated to bucket C.
A
  1. After you configure B as the source bucket, object replication from A to B will stop.
  2. You cannot configure an existing ‘destination’ bucket as ‘source’ bucket.
  3. Objects in bucket B that are replicas of objects in bucket A are replicated to bucket C.
  4. Objects in bucket B that are replicas of objects in bucket A are not replicated to bucket C.
145
Q

Suppose that you configure replication where bucket A is the source and bucket B is the destination. Which of the following statements is correct when you specify an object version ID in a DELETE request in bucket A?

  1. Amazon S3 adds a delete marker in bucket A. It does replicate the delete marker in the destination bucket B.
  2. Amazon S3 deletes that object version in the bucket A. But it doesn’t replicate the deletion in the destination bucket B.
  3. Amazon S3 deletes that object version in the bucket A and replicates the deletion in the destination bucket B.
  4. Amazon S3 adds a delete marker in bucket A. But it doesn’t replicate the delete marker in the destination bucket B.
A
  1. Amazon S3 adds a delete marker in bucket A. It does replicate the delete marker in the destination bucket B.
  2. Amazon S3 deletes that object version in the bucket A. But it doesn’t replicate the deletion in the destination bucket B.
  3. Amazon S3 deletes that object version in the bucket A and replicates the deletion in the destination bucket B.
  4. Amazon S3 adds a delete marker in bucket A. But it doesn’t replicate the delete marker in the destination bucket B.
146
Q

Which Amazon service has the following features:

  • Fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources
  • Built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
  • Designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
  1. Amazon S3
  2. Amazon RDS
  3. Amazon EFS
  4. Amazon EBS
A
  1. Amazon S3
  2. Amazon RDS
  3. Amazon EFS
  4. Amazon EBS
147
Q

Your company is planning to use WordPress hosted on AWS for its corporate website. You plan to run your WordPress site using an Auto Scaling group of Amazon EC2 instances, with the database layer on Amazon RDS Aurora. Which Amazon service should you use to store shared, unstructured WordPress data such as PHP files, configuration files, themes, and plugins? This storage service should be accessible by multiple WordPress EC2 instances.

  1. Amazon S3
  2. Amazon RDS
  3. Amazon EFS
  4. Amazon EBS
A
  1. Amazon S3
  2. Amazon RDS
  3. Amazon EFS
  4. Amazon EBS
148
Q

You can mount your Amazon EFS file systems on your on-premises servers, and move file data to and from Amazon EFS using standard Linux tools and scripts or AWS DataSync. Which use cases can you support by enabling access to EFS file systems from on-premises servers with the ability to move file data to and from Amazon EFS?

  1. You can migrate data from on-premises datacenters to permanently reside in Amazon EFS file systems.
  2. You can support cloud bursting workloads to offload your application processing to the cloud. You can move data from your on-premises servers into your EFS file systems, analyze it on a cluster of EC2 instances in your Amazon VPC, and store the results permanently in your EFS file systems or move the results back to your on-premises servers.
  3. You can periodically copy your on-premises file data to EFS to support backup and disaster recovery scenarios.
  4. All of the above
A
  1. You can migrate data from on-premises datacenters to permanently reside in Amazon EFS file systems.
  2. You can support cloud bursting workloads to offload your application processing to the cloud. You can move data from your on-premises servers into your EFS file systems, analyze it on a cluster of EC2 instances in your Amazon VPC, and store the results permanently in your EFS file systems or move the results back to your on-premises servers.
  3. You can periodically copy your on-premises file data to EFS to support backup and disaster recovery scenarios.
  4. All of the above
149
Q

You are the solution architect for a media company that is planning to migrate on-premises applications to AWS. You are analyzing workflows such as video editing, studio production, broadcast processing, sound design, and rendering, which use existing shared storage to process large files. Which Amazon service will you use that provides:

  • strong data consistency model with high throughput
  • scale on demand to petabytes without disrupting applications
  • growing and shrinking automatically as you add and remove files
  • shared file access which can cut the time it takes to perform these jobs
  • ability to consolidate multiple local file repositories into a single location accessible by application deployed on multiple EC2 instances
  1. Amazon EFS
  2. Amazon EBS
  3. Amazon S3
  4. Amazon RDS
A
  1. Amazon EFS
  2. Amazon EBS
  3. Amazon S3
  4. Amazon RDS
150
Q

Which AWS service provides detailed records for the requests that are made to a bucket, in the form of requester, bucket name, request time, request action, response status, and an error code, if relevant?

  1. CloudTrail
  2. VPC Flow Logs
  3. CloudWatch
  4. Server Access Logging
A
  1. CloudTrail
  2. VPC Flow Logs
  3. CloudWatch
  4. Server Access Logging
151
Q

You have created a bucket to store photos you took during your wildlife safari in Africa. By default, the Block Public Access setting is set to True on this bucket. Some of the photos capture natural landscape beauty, and you want to make them publicly readable. Other photos are personal, and you don’t want them to be publicly readable. You have tagged the photos that you want to make publicly readable. How can you change the permissions to do that? Choose 3.

  1. Create an IAM role and attach it to the bucket.
  2. Remove Block Public Access settings on the bucket.
  3. Use a bucket policy that grants public read access to a specific object tag.
  4. Update the object’s access control list (ACL) for photo objects which you want to make accessible.
A
  1. Create an IAM role and attach it to the bucket.
  2. Remove Block Public Access settings on the bucket.
  3. Use a bucket policy that grants public read access to a specific object tag.
  4. Update the object’s access control list (ACL) for photo objects which you want to make accessible.
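
A sketch of the tag-based bucket policy (the bucket name and tag are hypothetical; Block Public Access must be relaxed for the policy to take effect, as the card notes):

```python
import json

import boto3

s3 = boto3.client("s3")

# Allow anonymous GetObject only on objects tagged public=yes.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadTaggedPhotos",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::safari-photos/*",
        "Condition": {"StringEquals": {"s3:ExistingObjectTag/public": "yes"}},
    }],
}
s3.put_bucket_policy(Bucket="safari-photos", Policy=json.dumps(policy))
```
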
152
Q

Your company is planning to migrate its on-premises archive storage of 5 TB to AWS. You have a 1,000 Mbps connection that you can dedicate solely to transferring your data to S3. What will be the most economical choice to transfer the data to S3?

  1. Snowball
  2. Snowball Edge
  3. Internet
  4. None of the above
A
  1. Snowball
  2. Snowball Edge
  3. Internet
  4. None of the above
153
Q

What are the different storage models available for Snowball and Snowball Edge? Choose 2.

  1. Snowball: 80 TB and 50 TB models
  2. Snowball Edge: 80 TB and 50 TB models
  3. Snowball: 100 TB
  4. Snowball Edge: 100 TB
A
  1. Snowball: 80 TB and 50 TB models
  2. Snowball Edge: 80 TB and 50 TB models
  3. Snowball: 100 TB
  4. Snowball Edge: 100 TB
154
Q

You have 80 TB of on-premises data that you want to upload to S3. You have a 100 Mbps connection that you can dedicate solely to transferring your data. Which of the following is the most cost-optimized way to import the data to S3 as soon as possible?

  1. Use one Snowball 80 TB device
  2. Use two Snowball 50 TB device
  3. Use one Snowball Edge 100 TB storage optimized device
  4. Through the 100 Mbps internet connection
A
  1. Use one Snowball 80 TB device
  2. Use two Snowball 50 TB device
  3. Use one Snowball Edge 100 TB storage optimized device
  4. Through the 100 Mbps internet connection
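
As a back-of-the-envelope check, assuming ideal full utilization of the line: 80 TB ≈ 6.4 × 10^14 bits, and 6.4 × 10^14 bits ÷ (100 × 10^6 bits/s) = 6.4 × 10^6 seconds ≈ 74 days over the wire. A Snowball device, with a turnaround on the order of a week, is therefore the faster option at this data size and bandwidth.
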
155
Q

You are using an AWS Storage Gateway file gateway with your intranet client application. Files are stored as objects in your S3 buckets, there is a one-to-one relationship between files and objects, and you can configure the initial storage class for objects that the file gateway creates. You have created a file gateway with hostname file.amazon.com and have mapped it to the S3 bucket my-bucket. The mount point exposed by the file gateway is file.amazon.com:/export/my-bucket. You have mounted this locally on /mnt/my-bucket and created a file named file.html in the directory /mnt/my-bucket/dir. How will this file be stored in the S3 bucket?

  1. This file will be stored as a file in the S3 bucket my-bucket with a key of dir/file.html.
  2. This file will be stored as an object in the S3 bucket my-bucket with a key of dir/file.html.
  3. This file will be stored as a file in the S3 bucket my-bucket with a key of file.html.
  4. This file will be stored as an object in the S3 bucket my-bucket with a key of file.html.
A
  1. This file will be stored as a file in the S3 bucket my-bucket with a key of dir/file.html.
  2. This file will be stored as an object in the S3 bucket my-bucket with a key of dir/file.html.
  3. This file will be stored as a file in the S3 bucket my-bucket with a key of file.html.
  4. This file will be stored as an object in the S3 bucket my-bucket with a key of file.html.
156
Q

What must you do to enable S3 server access logging? Choose 2.

  1. Grant the Amazon S3 Log Delivery group write permission on the source bucket for which you want Amazon S3 to deliver access logs.
  2. Turn on the log delivery by adding logging configuration on the source bucket for which you want Amazon S3 to deliver access logs.
  3. Grant the Amazon S3 Log Delivery group write permission on the target bucket where you want the access logs saved.
  4. Turn on the log delivery by adding logging configuration on the target bucket where you want the access logs saved.
A
  1. Grant the Amazon S3 Log Delivery group write permission on the source bucket for which you want Amazon S3 to deliver access logs.
  2. Turn on the log delivery by adding logging configuration on the source bucket for which you want Amazon S3 to deliver access logs.
  3. Grant the Amazon S3 Log Delivery group write permission on the target bucket where you want the access logs saved.
  4. Turn on the log delivery by adding logging configuration on the target bucket where you want the access logs saved.
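
A boto3 sketch of the log-delivery step, turned on against the source bucket (bucket names are hypothetical; the target bucket must already grant the Log Delivery group write permission):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "log-bucket",
            "TargetPrefix": "access-logs/source-bucket/",  # keeps logs organized
        },
    },
)
```
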
157
Q

How are Amazon S3 and Amazon S3 Glacier designed to achieve 99.999999999% durability? Choose 2.

  1. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of three Availability Zones (AZs)
  2. S3 One Zone-IA storage class stores data redundantly across multiple devices within a single AZ.
  3. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of two Availability Zones (AZs)
  4. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of six Availability Zones (AZs)
A
  1. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of three Availability Zones (AZs)
  2. S3 One Zone-IA storage class stores data redundantly across multiple devices within a single AZ.
  3. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of two Availability Zones (AZs)
  4. Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of six Availability Zones (AZs)
158
Q

You are the solution architect for a company that provides an online stock trading website. To comply with financial regulations you have to store the trading data for five years. You are evaluating different S3 storage options for archive storage. Your evaluation criteria are:

  • Optimized cost
  • Retrieval time of 1-5 minutes
  • Reliable and predictable access to a subset of your data in minutes

Which of the following statements are correct? Choose 2.

  1. Store the data in Deep Archive and use Expedited retrieval option
  2. Purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes
  3. Store the data in Glacier and use Standard retrieval option
  4. Store the data in Glacier and use Expedited retrieval option
A
  1. Store the data in Deep Archive and use Expedited retrieval option
  2. Purchase provisioned retrieval capacity if your workload requires highly reliable and predictable access to a subset of your data in minutes
  3. Store the data in Glacier and use Standard retrieval option
  4. Store the data in Glacier and use Expedited retrieval option
159
Q

IOPS are a unit of measure representing input/output operations per second; the operations themselves are measured in KiB. What is the maximum amount of data that an Amazon EBS volume type counts as a single I/O? Choose 2.

  1. I/O size is capped at 256 KiB for HDD volumes
  2. I/O size is capped at 1,024 KiB for HDD volumes
  3. I/O size is capped at 256 KiB for SSD volumes
  4. I/O size is capped at 1,024 KiB for SSD volumes
A
  1. I/O size is capped at 256 KiB for HDD volumes
  2. I/O size is capped at 1,024 KiB for HDD volumes
  3. I/O size is capped at 256 KiB for SSD volumes
  4. I/O size is capped at 1,024 KiB for SSD volumes
160
Q

For Amazon EBS SSD volumes, a single 1,024 KiB I/O operation is counted as how many operations?

  1. 1 operation
  2. 16 operations
  3. 32 operations
  4. 4 operations
A
  1. 1 operation
  2. 16 operations
  3. 32 operations
  4. 4 operations
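
As a worked check using the 256 KiB cap from the previous card: SSD volumes cap a single I/O at 256 KiB, so a 1,024 KiB operation is counted as 1,024 ÷ 256 = 4 operations.
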
161
Q

You are using a General Purpose SSD (gp2) EBS volume of size 1,000 GiB. You know that with this volume size you will have burst credits available up to an IOPS limit of 3,000 and a volume throughput limit of 250 MiB/s. Which of the following will be the appropriate I/O size?

  1. 256 KiB
  2. 1024 KiB
  3. 16 KiB
  4. 128 KiB
A
  1. 256 KiB
  2. 1024 KiB
  3. 16 KiB
  4. 128 KiB
162
Q

You are using a Provisioned IOPS SSD (io1) EBS Volume of size 100 GiB. What is the maximum IOPS you can provision for this volume?

  1. 100
  2. 300
  3. 3000
  4. 5000
A
  1. 100
  2. 300
  3. 3000
  4. 5000
163
Q

Which of the following statements are correct about the credits and burst performance feature of Throughput Optimized HDD (st1) EBS volumes? Choose 2.

  1. St1 provides Throughput Credits and Burst Performance.
  2. For a 1-TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with throughput credits at 40 MiB/s, and it can hold up to 1 TiB-worth of credits.
  3. St1 provides I/O Credits and Burst Performance.
  4. For a 1-TiB st1 volume, burst I/O is limited to 3000, the credit bucket is not applicable.
A
  1. St1 provides Throughput Credits and Burst Performance.
  2. For a 1-TiB st1 volume, burst throughput is limited to 250 MiB/s, the bucket fills with throughput credits at 40 MiB/s, and it can hold up to 1 TiB-worth of credits.
  3. St1 provides I/O Credits and Burst Performance.
  4. For a 1-TiB st1 volume, burst I/O is limited to 3000, the credit bucket is not applicable.
164
Q

For greater I/O performance you are using RAID 0 configuration for EBS volumes attached to your EC2 instances with two 500 GiB Amazon EBS io1 volumes with 4,000 provisioned IOPS each. Your company’s backup and disaster recovery strategy mandates taking regular snapshots of EBS volumes. Which of the following statements is correct for creating snapshots of Amazon Elastic Block Store (Amazon EBS) volumes that are configured in a RAID array?

  1. You cannot create snapshots for Amazon EBS volumes that are configured in a RAID array.
  2. To create snapshots for Amazon EBS volumes that are configured in a RAID array, use the multi-volume snapshot feature of your instance.
  3. Follow a multi-step process: pause I/O or stop the instance to temporarily disable write access, create snapshots for each of your volumes, and then resume I/O.
  4. You can create snapshots for each of your volumes individually, AWS will ensure that they are in sync with relative to each other.
A
  1. You cannot create snapshots for Amazon EBS volumes that are configured in a RAID array.
  2. To create snapshots for Amazon EBS volumes that are configured in a RAID array, use the multi-volume snapshot feature of your instance.
  3. Follow a multi-step process: pause I/O or stop the instance to temporarily disable write access, create snapshots for each of your volumes, and then resume I/O.
  4. You can create snapshots for each of your volumes individually, AWS will ensure that they are in sync with relative to each other.
165
Q

You are a solution architect for a global steel manufacturing company with plants across the globe. Recently an analytics and reporting application was launched in the us-west-1 Region, which involves each manufacturing plant uploading its weekly production data to an S3 bucket in us-west-1. The size of a weekly production data file ranges from gigabytes to petabytes. After the first week of release, feedback came from plants in countries other than the US that they are experiencing slow upload times. How can you make the process of uploading the files to S3 faster?

  1. Use S3 multipart upload.
  2. Change your design to first upload the data to the Region closest to the plant, then replicate it to the us-west-1 central bucket using cross-region replication.
  3. Use S3 Transfer Acceleration.
  4. Use Amazon CloudFront.
A
  1. Use S3 multipart upload.
  2. Change your design to first upload the data to the Region closest to the plant, then replicate it to the us-west-1 central bucket using cross-region replication.
  3. Use S3 Transfer Acceleration.
  4. Use Amazon CloudFront.
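
A boto3 sketch of enabling Transfer Acceleration and uploading through the accelerate endpoint (the bucket and file names are hypothetical; accelerated bucket names must not contain dots):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable acceleration on the central bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="production-data",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Plants then upload through the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("weekly.dat", "production-data", "plant-42/weekly.dat")
```
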
166
Q

You are evaluating which type of EBS volume to use for large, sequential cold-data workloads. This volume type should be optimized for workloads involving large, sequential I/O. Which is the appropriate choice if you require infrequent access to your data and are looking to save costs?

  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
A
  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
167
Q

You are evaluating which type of EBS volume to use for large, sequential workloads. This volume type should be optimized for workloads involving large, sequential I/O. Which is the appropriate choice if you require frequent access to your data and are looking to save costs?

  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
A
  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
168
Q

You are designing a web application on AWS and contemplating which type of EBS volume to use for your OLTP database. The volume should be optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. Your analysis shows that the volume should support a maximum of nearly 50,000 IOPS. Which EBS volume will meet the design criteria?

  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
A
  1. Cold HDD (sc1) Volumes
  2. Throughput Optimized HDD (st1) Volumes
  3. Provisioned IOPS SSD (io1)
  4. General Purpose SSD (gp2)
169
Q

You are designing an internal application in which you are using S3 to store documents uploaded by employees. You don’t want to create a separate IAM user for each employee to manage access. How can you ensure that employees have only ‘upload’ access to the bucket?

  1. You will have to create an IAM user for each employee and attach only ‘upload’ permission for the user to that bucket.
  2. Create a presigned URL to upload an object to the bucket and share it with employees.
  3. Make the bucket public and, as soon as employees have uploaded the documents, change it back to private.
  4. You will have to create an IAM role for each employee and attach only ‘upload’ permission for the user to that bucket.
A
  1. You will have to create an IAM user for each employee and attach only ‘upload’ permission for the user to that bucket.
  2. Create a presigned URL to upload an object to the bucket and share it with employees.
  3. Make the bucket public and, as soon as employees have uploaded the documents, change it back to private.
  4. You will have to create an IAM role for each employee and attach only ‘upload’ permission for the user to that bucket.
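
A minimal sketch of the presigned-URL flow (bucket, key, and file names are hypothetical). The URL grants a single operation on a single key and expires after the given time, so no per-employee IAM user is needed:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Presigned PUT URL, valid for one hour.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "employee-docs", "Key": "uploads/report.pdf"},
    ExpiresIn=3600,
)

# The employee uploads with a plain HTTP PUT -- no AWS credentials required.
with open("report.pdf", "rb") as f:
    requests.put(url, data=f)
```
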
170
Q

Which of the following statements are not correct about S3 storage classes? Choose 2.

  1. Standard – Designed for frequently accessed data.
  2. Standard-IA – Designed for long-lived, infrequently accessed data.
  3. One Zone-IA – Designed for long-lived, infrequently accessed, non-critical data.
  4. Glacier – Designed for long-lived, infrequent accessed, archived critical data.
  5. Standard – Designed for long-lived, infrequently accessed data.
  6. Glacier Deep Archive - Lowest cost storage class designed for long-term retention of data that will be retained for 7-10 years and Retrieval time within 12 hours
  7. One Zone-IA – Designed for long-lived, infrequently accessed, critical data.
A
  1. Standard – Designed for frequently accessed data.
  2. Standard-IA – Designed for long-lived, infrequently accessed data.
  3. One Zone-IA – Designed for long-lived, infrequently accessed, non-critical data.
  4. Glacier – Designed for long-lived, infrequent accessed, archived critical data.
  5. Standard – Designed for long-lived, infrequently accessed data.
  6. Glacier Deep Archive - Lowest cost storage class designed for long-term retention of data that will be retained for 7-10 years and Retrieval time within 12 hours
  7. One Zone-IA – Designed for long-lived, infrequently accessed, critical data.
171
Q

Your company is planning to use S3 for storing daily transaction records. You know that the transaction records will be accessed very infrequently after one year. You want to move a transaction record file to lower-cost infrequent access storage if it has not been accessed for 30 days. But you cannot configure a lifecycle policy covering the period from creation to one year, because there is no defined access pattern and access is unpredictable. What should you do to optimize your cost?

  1. Use Intelligent-Tiering storage class for your transaction record objects.
  2. Use Standard storage class for your transaction record objects and create a lifecycle policy to move them to Standard-IA after 30 days.
  3. Use Standard storage class for your transaction record objects and write a custom program that moves objects to Standard-IA if not accessed in the last 30 days.
  4. Use Standard storage class for your transaction record objects and create a lifecycle policy to move them to One Zone-IA after 30 days.
A
  1. Use Intelligent-Tiering storage class for your transaction record objects.
  2. Use Standard storage class for your transaction record objects and create a lifecycle policy to move them to Standard-IA after 30 days.
  3. Use Standard storage class for your transaction record objects and write a custom program that moves objects to Standard-IA if not accessed in the last 30 days.
  4. Use Standard storage class for your transaction record objects and create a lifecycle policy to move them to One Zone-IA after 30 days.
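
A one-line boto3 sketch of storing an object directly in Intelligent-Tiering (the bucket and key are hypothetical); S3 then moves it between access tiers automatically based on observed access patterns:

```python
import boto3

s3 = boto3.client("s3")

with open("records.csv", "rb") as body:
    s3.put_object(
        Bucket="transaction-records",
        Key="2020/01/15/records.csv",
        Body=body,
        StorageClass="INTELLIGENT_TIERING",  # automatic tiering by access pattern
    )
```
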
172
Q

Which of the following is not a feature of S3 Glacier storage class?

  1. Data is stored in Amazon S3 Glacier in “archives.”
  2. A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon S3 Glacier.
  3. Data stored in Amazon S3 Glacier is mutable, meaning that after an archive is created it can be updated.
  4. Amazon S3 Glacier uses “vaults” as containers to store archives. You can also set access policies for each vault to grant or deny specific activities to users.
  5. You can specify controls such as “Write Once Read Many” (WORM) in a Vault Lock policy and lock the policy from future edits.
  6. Data stored in Amazon S3 Glacier is immutable, meaning that after an archive is created it cannot be updated.
A
  1. Data is stored in Amazon S3 Glacier in “archives.”
  2. A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon S3 Glacier.
  3. Data stored in Amazon S3 Glacier is mutable, meaning that after an archive is created it can be updated.
  4. Amazon S3 Glacier uses “vaults” as containers to store archives. You can also set access policies for each vault to grant or deny specific activities to users.
  5. You can specify controls such as “Write Once Read Many” (WORM) in a Vault Lock policy and lock the policy from future edits.
  6. Data stored in Amazon S3 Glacier is immutable, meaning that after an archive is created it cannot be updated.
173
Q

Your company’s employees use Linux-based desktops. The company has a local network folder with more than 10 TB of Word and Excel files. Any newly created file is infrequently accessed after 30 days. The company has adopted AWS as part of its IT strategy. Which AWS service should they use so that files are accessible from on premises while also leveraging low-cost storage for infrequently accessed data?

  1. Use File Gateway after migrating files to Amazon S3, File gateway supports Amazon S3 Standard, S3 Standard - Infrequent Access (S3 Standard - IA) and S3 One Zone – IA, access files using Network File System (NFS)
  2. Use EFS which supports the Network File System. Migrate documents to EFS. Amazon EFS offers two storage classes, Standard and Infrequent Access. Create and Mount a File System On-Premises with AWS Direct Connect and VPN
  3. Migrate documents to S3 and make use of Standard and Infrequent Access storage class.
  4. Use Volume Gateway after migrating files to Amazon S3, Volume gateway supports Amazon S3 Standard, S3 Standard - Infrequent Access (S3 Standard - IA) and S3 One Zone – IA, access files using Network File System (NFS)
A
  1. Use File Gateway after migrating files to Amazon S3, File gateway supports Amazon S3 Standard, S3 Standard - Infrequent Access (S3 Standard - IA) and S3 One Zone – IA, access files using Network File System (NFS)
  2. Use EFS which supports the Network File System. Migrate documents to EFS. Amazon EFS offers two storage classes, Standard and Infrequent Access. Create and Mount a File System On-Premises with AWS Direct Connect and VPN
  3. Migrate documents to S3 and make use of Standard and Infrequent Access storage class.
  4. Use Volume Gateway after migrating files to Amazon S3, Volume gateway supports Amazon S3 Standard, S3 Standard - Infrequent Access (S3 Standard - IA) and S3 One Zone – IA, access files using Network File System (NFS)
174
Q

What is the smallest file size that can be stored in S3?

  1. 0 bytes
  2. 1 byte
  3. 1 KB
  4. 100 KB
A
  1. 0 bytes
  2. 1 byte
  3. 1 KB
  4. 100 KB
175
Q

What HTTP response code will you receive after a successful upload of an object to an S3 bucket?

  1. HTTP 100
  2. HTTP 200
  3. HTTP 300
  4. HTTP 400
A
  1. HTTP 100
  2. HTTP 200
  3. HTTP 300
  4. HTTP 400
176
Q

For which operations does S3 offer an eventual consistency model? Choose 2.

  1. Overwrite PUTs of existing object
  2. PUTs of new object
  3. GET
  4. DELETEs
A
  1. Overwrite PUTs of existing object
  2. PUTs of new object
  3. GET
  4. DELETEs
177
Q

For which operation does S3 offer a read-after-write consistency model?

  1. Overwrite PUTs of existing object
  2. PUTs of new object
  3. GET
  4. DELETEs
A
  1. Overwrite PUTs of existing object
  2. PUTs of new object
  3. GET
  4. DELETEs
178
Q

To add another layer of security you have enabled MFA (multi-factor authentication) Delete for a bucket. Which operation will require additional authentication? Choose 2.

  1. To Create a new object
  2. To Update object ACL
  3. To Change the versioning state of your bucket
  4. To Permanently delete an object version
A
  1. To Create a new object
  2. To Update object ACL
  3. To Change the versioning state of your bucket
  4. To Permanently delete an object version
179
Q

Which of the following can be used to store files? Choose 3.

  1. S3
  2. EBS
  3. EMR
  4. EFS
  5. RDS MySQL
A
  1. S3
  2. EBS
  3. EMR
  4. EFS
  5. RDS MySQL
180
Q

How can you securely upload/download your data to Amazon S3?

  1. SSL endpoints using the HTTP protocol
  2. SSL endpoints using the HTTPS protocol
  3. VPC endpoints using the HTTP protocol
  4. VPC endpoints using the HTTPS protocol
A
  1. SSL endpoints using the HTTP protocol
  2. SSL endpoints using the HTTPS protocol
  3. VPC endpoints using the HTTP protocol
  4. VPC endpoints using the HTTPS protocol
181
Q

How can you troubleshoot slow downloads from or uploads to Amazon Simple Storage Service (Amazon S3)? When you download from or upload to Amazon S3 from a specific network or machine, your requests might experience higher latency. How can you diagnose the high latency? Choose 3.

  1. Test the impact of geographical distance between the client and the S3 bucket.
  2. There might be latency introduced in your application or how your host that’s making the requests is handling the requests sent and responses received.
  3. Check the storage class for the object.
  4. Supported request rate per prefix may be exceeded.
A
  1. Test the impact of geographical distance between the client and the S3 bucket.
  2. There might be latency introduced in your application or how your host that’s making the requests is handling the requests sent and responses received.
  3. Check the storage class for the object.
  4. Supported request rate per prefix may be exceeded.
182
Q

Your company is adopting the AWS cloud platform for all new application development as well as for migrating current on-premises applications. One of the strategies is to leverage S3 for storage. Which performance design patterns should you follow? Choose 4.

  1. Using Caching for Frequently Accessed Content with Amazon CloudFront, Amazon ElastiCache, or Amazon S3 Transfer Acceleration
  2. Using Caching for Frequently Accessed Content with Amazon CloudFront, Amazon ElastiCache, or AWS Elemental MediaStore
  3. Timeouts and Retries for Latency-Sensitive Applications
  4. Horizontal Scaling and Request Parallelization for High Throughput
  5. Using Amazon S3 Transfer Acceleration to Accelerate Geographically Disparate Data Transfers
  6. Using Amazon CloudFront to Accelerate Geographically Disparate Data Transfers
A
  1. Using Caching for Frequently Accessed Content with Amazon CloudFront, Amazon ElastiCache, or Amazon S3 Transfer Acceleration
  2. Using Caching for Frequently Accessed Content with Amazon CloudFront, Amazon ElastiCache, or AWS Elemental MediaStore
  3. Timeouts and Retries for Latency-Sensitive Applications
  4. Horizontal Scaling and Request Parallelization for High Throughput
  5. Using Amazon S3 Transfer Acceleration to Accelerate Geographically Disparate Data Transfers
  6. Using Amazon CloudFront to Accelerate Geographically Disparate Data Transfers
183
Q

Which of the following is not a performance guideline for S3?

  1. Combine Amazon S3 (Storage) and Amazon EC2 (Compute) in the Same AWS Region
  2. Use Amazon S3 Transfer Acceleration to Minimize Latency Caused by Distance
  3. Randomizing prefix naming with hashed characters to optimize performance for frequent data retrievals
  4. Using the Range HTTP header in a GET Object request, fetch a byte-range from an object, transferring only the specified portion.
A
  1. Combine Amazon S3 (Storage) and Amazon EC2 (Compute) in the Same AWS Region
  2. Use Amazon S3 Transfer Acceleration to Minimize Latency Caused by Distance
  3. Randomizing prefix naming with hashed characters to optimize performance for frequent data retrievals
  4. Using the Range HTTP header in a GET Object request, fetch a byte-range from an object, transferring only the specified portion.
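
A quick boto3 sketch of the byte-range GET mentioned in option 4 (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 1 MiB of the object instead of the whole thing.
resp = s3.get_object(
    Bucket="my-bucket",
    Key="large-file.bin",
    Range="bytes=0-1048575",
)
chunk = resp["Body"].read()
```
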
184
Q

Which of the following are not ways to improve the transfer speed for copying data between an S3 bucket and an EC2 instance? Choose 2.

  1. Use enhanced networking on the EC2 instance.
  2. Use parallel workloads for the data transfer.
  3. The EC2 instance and the S3 bucket should be in different Regions.
  4. Use an Amazon Virtual Private Cloud (Amazon VPC) endpoint for Amazon S3.
  5. Use S3 Transfer Acceleration between geographically distant AWS Regions.
  6. Use EC2 in cluster placement group.
  7. Upgrade your EC2 instance type.
  8. Use chunked transfers.
A
  1. Use enhanced networking on the EC2 instance.
  2. Use parallel workloads for the data transfer.
  3. The EC2 instance and the S3 bucket should be in different Regions.
  4. Use an Amazon Virtual Private Cloud (Amazon VPC) endpoint for Amazon S3.
  5. Use S3 Transfer Acceleration between geographically distant AWS Regions.
  6. Use EC2 in cluster placement group.
  7. Upgrade your EC2 instance type.
  8. Use chunked transfers.
185
Q

What are the features of S3 Batch operations? Choose 3.

  1. To perform large-scale batch operations on Amazon S3 objects and can execute a single operation on lists of Amazon S3 objects that you specify.
  2. Can be used to copy objects and set object tags or access control lists (ACLs).
  3. You can also initiate object restores from Amazon S3 Glacier or invoke an AWS Lambda function to perform custom actions using your objects.
  4. To perform ETL jobs on data stored in S3.
A
  1. To perform large-scale batch operations on Amazon S3 objects and can execute a single operation on lists of Amazon S3 objects that you specify.
  2. Can be used to copy objects and set object tags or access control lists (ACLs).
  3. You can also initiate object restores from Amazon S3 Glacier or invoke an AWS Lambda function to perform custom actions using your objects.
  4. To perform ETL jobs on data stored in S3.
186
Q

You can’t access a certain prefix or object that’s in your Amazon Simple Storage Service (Amazon S3) bucket but can access the rest of the data in the bucket. What should you verify as the reason? Choose 4.

  1. Security Group and NACL setting
  2. Ownership of the prefix or object
  3. Restrictions in the bucket policy
  4. Restrictions in your AWS Identity and Access Management (IAM) user policy
  5. Permissions to object encrypted by AWS Key Management Service (AWS KMS)
A
  1. Security Group and NACL setting
  2. Ownership of the prefix or object
  3. Restrictions in the bucket policy
  4. Restrictions in your AWS Identity and Access Management (IAM) user policy
  5. Permissions to object encrypted by AWS Key Management Service (AWS KMS)
187
Q

You want to enable default encryption using AWS Key Management Service (AWS KMS) on your Amazon Simple Storage Service (Amazon S3) bucket. You already have objects stored in the bucket. If you enable default encryption, what happens to the encryption of existing objects? Choose 2.

  1. Enabling default encryption doesn’t change the encryption of objects that are already in the bucket, the encryption that you set applies only to future uploads.
  2. Enabling default encryption will change the encryption of existing objects as well as future uploads.
  3. Any objects already encrypted using Amazon S3-managed keys (SSE-S3) they are re-encrypted with KMS.
  4. Any unencrypted objects already in the bucket remain unencrypted or any objects already encrypted using Amazon S3-managed keys (SSE-S3) remain encrypted with SSE-S3.
A
  1. Enabling default encryption doesn’t change the encryption of objects that are already in the bucket, the encryption that you set applies only to future uploads.
  2. Enabling default encryption will change the encryption of existing objects as well as future uploads.
  3. Any objects already encrypted using Amazon S3-managed keys (SSE-S3) they are re-encrypted with KMS.
  4. Any unencrypted objects already in the bucket remain unencrypted or any objects already encrypted using Amazon S3-managed keys (SSE-S3) remain encrypted with SSE-S3.
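
A boto3 sketch of enabling default SSE-KMS on a bucket (the bucket name and KMS key ARN are hypothetical); as the card states, only future uploads are affected:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-west-1:123456789012:key/EXAMPLE-KEY-ID",
            },
        }],
    },
)
```
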
188
Q

What are the key differences between the Amazon REST API endpoint and the website endpoint? Choose 3.

  1. REST API Endpoint Supports both public and private content, Website Endpoint Supports only publicly readable content.
  2. REST API Endpoint Supports only publicly readable content, Website Endpoint Supports both public and private content.
  3. REST API Endpoint Supports SSL connections. Website Endpoint Does not support SSL connections.
  4. REST API Endpoint does not Support SSL connections. Website Endpoint support SSL connections
  5. REST API Endpoint Supports all bucket and object operations, Website Endpoint Supports only GET and HEAD requests on objects.
A
  1. REST API Endpoint Supports both public and private content, Website Endpoint Supports only publicly readable content.
  2. REST API Endpoint Supports only publicly readable content, Website Endpoint Supports both public and private content.
  3. REST API Endpoint Supports SSL connections. Website Endpoint Does not support SSL connections.
  4. REST API Endpoint does not Support SSL connections. Website Endpoint support SSL connections
  5. REST API Endpoint Supports all bucket and object operations, Website Endpoint Supports only GET and HEAD requests on objects.
189
Q

What are the differences between EFS and EBS? Choose 3.

  1. EFS: Data is stored redundantly across multiple AZs. EBS: Data is stored redundantly in a single AZ.
  2. EBS: Data is stored redundantly across multiple AZs. EFS: Data is stored redundantly in a single AZ.
  3. EBS: Up to thousands of Amazon EC2 instances, from multiple AZs, can connect concurrently to an EBS volume. EFS: A single Amazon EC2 instance in a single AZ can connect to a file system.
  4. EFS Use cases: Big data and analytics, media processing workflows, content management, web serving, and home directories. EBS use cases: Boot volumes, transactional and NoSQL databases, data warehousing, and ETL.
  5. EFS: Up to thousands of Amazon EC2 instances, from multiple AZs, can connect concurrently to a file system. EBS: A single Amazon EC2 instance in a single AZ can connect to an EBS volume.
A
  1. EFS: Data is stored redundantly across multiple AZs. EBS: Data is stored redundantly in a single AZ.
  2. EBS: Data is stored redundantly across multiple AZs. EFS: Data is stored redundantly in a single AZ.
  3. EBS: Up to thousands of Amazon EC2 instances, from multiple AZs, can connect concurrently to an EBS volume. EFS: A single Amazon EC2 instance in a single AZ can connect to a file system.
  4. EFS Use cases: Big data and analytics, media processing workflows, content management, web serving, and home directories. EBS use cases: Boot volumes, transactional and NoSQL databases, data warehousing, and ETL.
  5. EFS: Up to thousands of Amazon EC2 instances, from multiple AZs, can connect concurrently to a file system. EBS: A single Amazon EC2 instance in a single AZ can connect to an EBS volume.
190
Q

Which EFS performance mode will you choose for latency-sensitive use cases, like web serving environments, content management systems, home directories, and general file serving?

  1. Default Performance Mode
  2. Max IOPS performance Mode
  3. General Purpose Performance Mode
  4. Max I/O Performance Mode
A
  1. Default Performance Mode
  2. Max IOPS performance Mode
  3. General Purpose Performance Mode
  4. Max I/O Performance Mode
191
Q

Which EFS performance mode will you choose for highly parallelized applications and workloads, such as big data analysis, media processing, and genomics analysis?

  1. Default Performance Mode
  2. Max IOPS performance Mode
  3. General Purpose Performance Mode
  4. Max I/O Performance Mode
A
  1. Default Performance Mode
  2. Max IOPS performance Mode
  3. General Purpose Performance Mode
  4. Max I/O Performance Mode
192
Q

What are the two throughput modes you can choose for your EFS file system?

  1. IOPS Mode
  2. Bursting Mode
  3. Provisioned Mode
  4. I/O Mode
A
  1. IOPS Mode
  2. Bursting Mode
  3. Provisioned Mode
  4. I/O Mode
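
A boto3 sketch showing where the performance and throughput modes are set when creating a file system (the token and numbers are hypothetical):

```python
import boto3

efs = boto3.client("efs")

efs.create_file_system(
    CreationToken="app-shared-fs",
    PerformanceMode="generalPurpose",    # or "maxIO" for highly parallel workloads
    ThroughputMode="provisioned",        # or "bursting", which scales with stored data
    ProvisionedThroughputInMibps=128.0,  # only used with provisioned mode
)
```
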
193
Q

Which of the following statements are true? Choose 2.

  1. You can’t use Amazon EFS with Microsoft Windows–based Amazon EC2 instances.
  2. You can’t use Amazon EFS with Linux–based Amazon EC2 instances.
  3. You can use Amazon EFS with Microsoft Windows–based Amazon EC2 instances.
  4. You can use Amazon EFS with Linux–based Amazon EC2 instances.
A
  1. You can’t use Amazon EFS with Microsoft Windows–based Amazon EC2 instances.
  2. You can’t use Amazon EFS with Linux–based Amazon EC2 instances.
  3. You can use Amazon EFS with Microsoft Windows–based Amazon EC2 instances.
  4. You can use Amazon EFS with Linux–based Amazon EC2 instances.
194
Q

You are migrating your on-premises Windows-based custom-built .NET applications to the AWS cloud platform using a lift-and-shift strategy. These applications require shared file storage provided by Windows-based file systems (NTFS) that uses the SMB protocol. Which AWS services will you use? Choose 2.

  1. Lambda
  2. EFS
  3. EBS
  4. EC2
  5. FSx for Windows File Server
A
  1. Lambda
  2. EFS
  3. EBS
  4. EC2
  5. FSx for Windows File Server
195
Q

Which of the following are correct statements about when you should use Amazon FSx for Windows File Server vs. Amazon EFS vs. Amazon FSx for Lustre? Choose 3.

  1. For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for “lift-and-shift” business-critical application workloads including home directories (user shares), media workflows, and ERP applications via SMB protocol.
  2. If you have Linux-based applications, Amazon EFS is a cloud-native fully managed file system that provides simple, scalable, elastic file storage accessible from Linux instances via the NFS protocol.
  3. For compute-intensive and fast processing workloads, like high performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that’s optimized for performance, with input and output stored on Amazon S3.
  4. If you have Windows-based applications, Amazon EFS is a cloud-native fully managed file system that provides simple, scalable, elastic file storage accessible from EC2 windows instances via the NFS protocol
A
  1. For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for “lift-and-shift” business-critical application workloads including home directories (user shares), media workflows, and ERP applications via SMB protocol.
  2. If you have Linux-based applications, Amazon EFS is a cloud-native fully managed file system that provides simple, scalable, elastic file storage accessible from Linux instances via the NFS protocol.
  3. For compute-intensive and fast processing workloads, like high performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that’s optimized for performance, with input and output stored on Amazon S3.
  4. If you have Windows-based applications, Amazon EFS is a cloud-native fully managed file system that provides simple, scalable, elastic file storage accessible from EC2 windows instances via the NFS protocol
196
Q

Which instance types and OS versions can connect to Amazon FSx for Windows File Server? Choose 3.

  1. IBM I, MacOS, OS 400
  2. Amazon EC2, VMware Cloud on AWS,
  3. Amazon WorkSpaces, and Amazon AppStream 2.0 instances
  4. Windows Server 2008 and Windows 7, and current versions of Linux
A
  1. IBM I, MacOS, OS 400
  2. Amazon EC2, VMware Cloud on AWS,
  3. Amazon WorkSpaces, and Amazon AppStream 2.0 instances
  4. Windows Server 2008 and Windows 7, and current versions of Linux
197
Q

You are designing a business-critical custom .NET application running on Windows Server EC2 instances, which accesses a file system created by Amazon FSx for Windows File Server. You are provisioning EC2 instances in an Auto Scaling group across multiple AZs for availability and fault tolerance. How can you ensure high availability and durability of FSx file systems? Choose 3.

  1. Amazon FSx automatically replicates your data within an Availability Zone (AZ) to protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components in the event of a failure.
  2. Create a Read Replica file system, which provides performance and redundancy across multiple AZs.
  3. Create a Multi-AZ file system, which provides redundancy across multiple AZs.
  4. Amazon FSx also takes highly durable backups (stored in S3) of your file system daily using Windows’s Volume Shadow Copy Service, and allows you to take additional backups at any point.
A
  1. Amazon FSx automatically replicates your data within an Availability Zone (AZ) to protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components in the event of a failure.
  2. Create a Read Replica file system, which provides performance and redundancy across multiple AZs.
  3. Create a Multi-AZ file system, which provides redundancy across multiple AZs.
  4. Amazon FSx also takes highly durable backups (stored in S3) of your file system daily using Windows’s Volume Shadow Copy Service, and allows you to take additional backups at any point.
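
A boto3 sketch of creating a Multi-AZ FSx for Windows File Server file system (the subnet IDs, directory ID, and sizing are hypothetical):

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",          # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaaa1111",  # where the preferred file server runs
        "ThroughputCapacity": 32,                # MB/s
        "ActiveDirectoryId": "d-1234567890",     # AWS Managed Microsoft AD
        "AutomaticBackupRetentionDays": 7,       # daily backups stored durably
    },
)
```
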
198
Q

Which of the following is not a feature of FSx for Windows File Server Multi-AZ file systems?

  1. Automatically replicates your data within an Availability Zone (AZ) to protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components in the event of a failure.
  2. Automatically provisions and maintains a standby file server in a different Availability Zone.
  3. Any changes written to disk in your file system are synchronously replicated across AZs to the standby.
  4. In the event of planned file system maintenance or unplanned service disruption, Amazon FSx automatically fails over to the secondary file server
  5. Any changes written to disk in your file system are asynchronously replicated across AZs to the standby.
A
  1. Automatically replicates your data within an Availability Zone (AZ) to protect it from component failure, continuously monitors for hardware failures, and automatically replaces infrastructure components in the event of a failure.
  2. Automatically provisions and maintains a standby file server in a different Availability Zone.
  3. Any changes written to disk in your file system are synchronously replicated across AZs to the standby.
  4. In the event of planned file system maintenance or unplanned service disruption, Amazon FSx automatically fails over to the secondary file server
  5. Any changes written to disk in your file system are asynchronously replicated across AZs to the standby.
199
Q

What events would cause a Multi-AZ Amazon FSx file system to initiate a failover to the standby file server? Choose 3.

  1. An EC2 instance client of the file server goes down.
  2. An Availability Zone outage occurs.
  3. The preferred file server becomes unavailable.
  4. The preferred file server undergoes planned maintenance.
A
  1. An EC2 instance client of the file server goes down.
  2. An Availability Zone outage occurs.
  3. The preferred file server becomes unavailable.
  4. The preferred file server undergoes planned maintenance.
200
Q

You have connected a Windows client and a Linux client to a Multi-AZ Amazon FSx file system. What will happen if the preferred file server becomes unavailable? Choose 3.

  1. For the Windows client, there will be automatic failover without manual intervention from the preferred file server to the standby file server.
  2. Linux clients do not support automatic DNS-based failover. Therefore, they don’t automatically connect to the standby file server during a failover.
  3. For the Linux client, there will be automatic failover without manual intervention from the preferred file server to the standby file server.
  4. For the Windows client, after the resources in the preferred subnet are available, Amazon FSx automatically fails back to the preferred file server in the preferred subnet.
A
  1. For the Windows client, there will be automatic failover without manual intervention from the preferred file server to the standby file server.
  2. Linux clients do not support automatic DNS-based failover. Therefore, they don’t automatically connect to the standby file server during a failover.
  3. For the Linux client, there will be automatic failover without manual intervention from the preferred file server to the standby file server.
  4. For the Windows client, after the resources in the preferred subnet are available, Amazon FSx automatically fails back to the preferred file server in the preferred subnet.
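
Because Linux clients don’t follow the DNS-based failover (option 2), a common workaround is to mount using the preferred file server’s IP address and remount manually after a failover. A sketch that looks that IP up with boto3; the file system ID is a placeholder:

```python
import boto3

fsx = boto3.client("fsx")

# Windows clients fail over automatically via DNS; Linux SMB clients do not,
# so they typically mount the preferred file server's IP directly.
resp = fsx.describe_file_systems(FileSystemIds=["fs-0123456789abcdef0"])  # placeholder ID
ip = resp["FileSystems"][0]["WindowsConfiguration"]["PreferredFileServerIp"]

# Mount command to run on the Linux client (shown here only for illustration).
print(f"sudo mount -t cifs //{ip}/share /mnt/fsx -o credentials=/etc/fsx.creds")
```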
201
Q

What are the two options for using your Amazon FSx for Windows File Server file system with Active Directory?

  1. Using Amazon FSx with AWS AD Connector
  2. Using Amazon FSx with AWS Directory Service for Microsoft Active Directory
  3. Using Amazon FSx with Your Self-Managed Microsoft Active Directory.
  4. Using Amazon FSx with AWS Simple AD
A
  1. Using Amazon FSx with AWS AD Connector
  2. Using Amazon FSx with AWS Directory Service for Microsoft Active Directory
  3. Using Amazon FSx with Your Self-Managed Microsoft Active Directory.
  4. Using Amazon FSx with AWS Simple AD
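
For the self-managed Active Directory option (option 3), the join parameters move into SelfManagedActiveDirectoryConfiguration instead of ActiveDirectoryId. A sketch with a hypothetical domain, DNS IPs, and service account; in practice the password should come from a secret store, not source code:

```python
import boto3

fsx = boto3.client("fsx")

# Joining an FSx file system to a self-managed Microsoft AD (on-premises or
# self-hosted on EC2). Every value below is a hypothetical example.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,
    SubnetIds=["subnet-aaaa1111"],
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 16,
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",
            "DnsIps": ["10.0.0.10", "10.0.1.10"],
            "UserName": "FsxServiceAccount",  # needs delegated permissions to join computers
            "Password": "REPLACE_ME",         # fetch from AWS Secrets Manager in real use
            "OrganizationalUnitDistinguishedName": "OU=FSx,DC=corp,DC=example,DC=com",
        },
    },
)
```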
202
Q

How can you optimize the total cost of ownership of Amazon FSx for Windows File Server?

  1. Turning on data deduplication
  2. Turning on data compression
  3. Turning on intelligent tiering
  4. Turning on FSx Analysis
A
  1. Turning on data deduplication
  2. Turning on data compression
  3. Turning on intelligent tiering
  4. Turning on FSx Analysis
203
Q

How much data can you store in one file system of Amazon FSx for Windows File Server?

  1. 10 TB
  2. 16 TB
  3. 32 TB
  4. 64 TB
A
  1. 10 TB
  2. 16 TB
  3. 32 TB
  4. 64 TB
204
Q

In your company, multiple departments have created their own file shares using Amazon FSx for Windows File Server. There are separate file shares for marketing, finance, sales, supply chain, and HR. How can you unify access to your file shares across multiple file systems and also improve performance?

  1. Delete and merge all separate file systems into a single file system.
  2. Use DFS Namespaces to group file shares on multiple file systems into one common folder structure (a namespace)
  3. Use Multi-AZ deployment in all the file systems.
  4. Use Read Replicas in all the file systems.
A
  1. Delete and merge all separate file systems into a single file system.
  2. Use DFS Namespaces to group file shares on multiple file systems into one common folder structure (a namespace)
  3. Use Multi-AZ deployment in all the file systems.
  4. Use Read Replicas in all the file systems.
205
Q

What are the benefits of using AWS DataSync? Choose 2.

  1. Your data can be used on-premises and stored durably in AWS Cloud storage services, including Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, and Amazon EBS.
  2. Easy for you to move data over the network between on-premises storage and AWS.
  3. Transfer data rapidly over the network into AWS, up to 10 times faster than is common with open-source tooling.
  4. A single device can transport multiple terabytes of data and multiple devices can be used in parallel to transfer petabytes of data into or out of an Amazon S3 bucket
A
  1. Your data can be used on-premises and stored durably in AWS Cloud storage services, including Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, and Amazon EBS.
  2. Easy for you to move data over the network between on-premises storage and AWS.
  3. Transfer data rapidly over the network into AWS, up to 10 times faster than is common with open-source tooling.
  4. A single device can transport multiple terabytes of data and multiple devices can be used in parallel to transfer petabytes of data into or out of an Amazon S3 bucket
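
A sketch of the basic DataSync workflow in boto3: a source location for an on-premises NFS share (assuming an agent is already activated), an S3 destination, and a task that moves data with integrity verification. All ARNs, hostnames, and bucket names are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# Source: an on-premises NFS share, reached through an already-activated agent.
src = datasync.create_location_nfs(
    ServerHostname="nas.example.internal",  # placeholder NAS hostname
    Subdirectory="/export/projects",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)

# Destination: an S3 bucket, written through an IAM role that DataSync assumes.
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-migration-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# The task provides the built-in retry, resiliency, and data integrity checks.
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Options={"VerifyMode": "ONLY_FILES_TRANSFERRED"},
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```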
206
Q

What are the main use cases for AWS DataSync? Choose 3.

  1. Data migration – Move active datasets rapidly over the network into Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.
  2. Data movement for timely in-cloud processing – Move data into or out of AWS for processing when working with systems that generate data on-premises.
  3. Data archiving – Move cold data from expensive on-premises storage systems directly to durable and secure long-term storage such as Amazon S3 Glacier or S3 Glacier Deep Archive.
  4. Hybrid cloud workloads – Manage hybrid file and object workloads that run across both your organization and the AWS Cloud.
A
  1. Data migration – Move active datasets rapidly over the network into Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.
  2. Data movement for timely in-cloud processing – Move data into or out of AWS for processing when working with systems that generate data on-premises.
  3. Data archiving – Move cold data from expensive on-premises storage systems directly to durable and secure long-term storage such as Amazon S3 Glacier or S3 Glacier Deep Archive.
  4. Hybrid cloud workloads – Manage hybrid file and object workloads that run across both your organization and the AWS Cloud.
207
Q

You are planning a strategy to migrate over 600 terabytes (TB) of data from an on-premises storage system to Amazon S3 and Amazon EFS. You don’t want to use other AWS offline data transfer services. You need to move the data from your on-premises storage to AWS via Direct Connect or VPN, without traversing the public internet, to further increase the security of the copied data. Which AWS service will you use?

  1. AWS Snowball
  2. AWS Snowball Edge
  3. AWS Snowmobile
  4. AWS DataSync
  5. AWS AppSync
A
  1. AWS Snowball
  2. AWS Snowball Edge
  3. AWS Snowmobile
  4. AWS DataSync
  5. AWS AppSync
208
Q

You are planning to use AWS DataSync to migrate on-premises data to S3 storage in the cloud. How can you ensure that data transferred by your AWS DataSync agent, deployed on-premises or in-cloud, doesn’t traverse the public internet? Choose 2.

  1. Utilize public service endpoints in their respective AWS Regions (such as datasync.us-east-1.amazonaws.com)
  2. Use VPC Internet Gateway
  3. Use VPC Endpoints
  4. From on-premises, use Direct Connect or VPN to your VPC
A
  1. Utilize public service endpoints in their respective AWS Regions (such as datasync.us-east-1.amazonaws.com)
  2. Use VPC Internet Gateway
  3. Use VPC Endpoints
  4. From on-premises, use Direct Connect or VPN to your VPC
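
One way to combine options 3 and 4: create an interface VPC endpoint for DataSync and activate the agent against it, so traffic flows over Direct Connect or VPN into the VPC rather than the public internet. A sketch; all IDs, ARNs, and the activation key are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
datasync = boto3.client("datasync", region_name="us-east-1")

# Interface VPC endpoint for the DataSync service, keeping traffic inside the VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # placeholder
    ServiceName="com.amazonaws.us-east-1.datasync",
    SubnetIds=["subnet-aaaa1111"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

# Activate the on-premises agent against the private endpoint instead of the
# public service endpoint. The activation key is obtained from the agent VM.
datasync.create_agent(
    ActivationKey="AAAAA-BBBBB-CCCCC-DDDDD-EEEEE",  # placeholder
    VpcEndpointId=endpoint["VpcEndpoint"]["VpcEndpointId"],
    SubnetArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-aaaa1111"],
    SecurityGroupArns=["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"],
)
```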
209
Q

When do you use AWS DataSync and when do you use AWS Snowball Edge? Choose 2.

  1. AWS DataSync is ideal for online data transfers.
  2. AWS Snowball Edge is suitable for offline data transfers
  3. AWS Snowball Edge is ideal for online data transfers.
  4. AWS DataSync is suitable for offline data transfers
A
  1. AWS DataSync is ideal for online data transfers.
  2. AWS Snowball Edge is suitable for offline data transfers
  3. AWS Snowball Edge is ideal for online data transfers.
  4. AWS DataSync is suitable for offline data transfers
210
Q

In which of the following scenarios should you not use DataSync?

  1. If you want to transfer data from existing storage systems (e.g., Network Attached Storage) or from instruments that cannot be changed (e.g., DNA sequencers, video cameras)
  2. If your applications are already integrated with the Amazon S3 API, and you want higher throughput for transferring large files to S3.
  3. If you want to automate moving data to multiple destinations with built-in retry and network resiliency mechanisms, and data integrity verification
A
  1. If you want to transfer data from existing storage systems (e.g., Network Attached Storage) or from instruments that cannot be changed (e.g., DNA sequencers, video cameras)
  2. If your applications are already integrated with the Amazon S3 API, and you want higher throughput for transferring large files to S3.
  3. If you want to automate moving data to multiple destinations with built-in retry and network resiliency mechanisms, and data integrity verification
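
For option 2, the S3-native alternative to DataSync is usually parallel multipart upload, which boto3's transfer manager handles automatically. A sketch with a hypothetical bucket and file; the thresholds are illustrative:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Large files are split into parts and uploaded concurrently, which is how
# S3-API-integrated tooling achieves high single-object throughput.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MiB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=16,
)
s3.upload_file("big-dataset.tar", "example-bucket", "datasets/big-dataset.tar", Config=config)
```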
211
Q

For Amazon EBS HDD volumes, a single 1,024 KiB I/O operation will be counted as how many operations?

  1. 1 operation
  2. 16 operations
  3. 32 operations
  4. 4 operations
A
  1. 1 operation
  2. 16 operations
  3. 32 operations
  4. 4 operations
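
The rule behind this card: HDD volumes count sequential I/O in units of up to 1,024 KiB per operation, while SSD volumes count in 256 KiB units, so the same 1,024 KiB I/O is 1 operation on HDD but 4 on SSD. A quick arithmetic check:

```python
import math

def hdd_ops(io_kib: int) -> int:
    # HDD volumes: up to 1,024 KiB of sequential I/O counts as one operation.
    return math.ceil(io_kib / 1024)

def ssd_ops(io_kib: int) -> int:
    # SSD volumes: I/O is counted in 256 KiB units.
    return math.ceil(io_kib / 256)

print(hdd_ops(1024))  # 1 -> a single 1,024 KiB I/O is one operation on HDD
print(ssd_ops(1024))  # 4 -> the same I/O counts as four operations on SSD
```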
212
Q

What are the features of Amazon FSx for Lustre? Choose 3

  1. Integrates with Amazon RDS, making it easy to process data sets with the Lustre file system.
  2. As a fully managed service, Amazon FSx for Lustre enables you to launch and run the world’s most popular high-performance file system, Lustre, for any workload where storage speed matters.
  3. Integrates with Amazon S3, making it easy to process data sets with the Lustre file system.
  4. It is POSIX-compliant, so you can use your current Linux-based applications without having to make any changes.
  5. It is POSIX-compliant, so you can use your current Windows-based applications without having to make any changes.
A
  1. Integrates with Amazon RDS, making it easy to process data sets with the Lustre file system.
  2. As a fully managed service, Amazon FSx for Lustre enables you to launch and run the world’s most popular high-performance file system, Lustre, for any workload where storage speed matters.
  3. Integrates with Amazon S3, making it easy to process data sets with the Lustre file system.
  4. It is POSIX-compliant, so you can use your current Linux-based applications without having to make any changes.
  5. It is POSIX-compliant, so you can use your current Windows-based applications without having to make any changes.
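
A sketch of the S3 integration (option 3): a Lustre file system linked to a bucket, where objects appear as files, contents load lazily from S3 on first access, and results can be exported back. Bucket and subnet names are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Lustre file system backed by an S3 bucket: ImportPath exposes the bucket's
# objects as files; ExportPath is where changed files can be written back.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,           # GiB; the smallest Lustre capacity step
    SubnetIds=["subnet-aaaa1111"],  # placeholder
    LustreConfiguration={
        "ImportPath": "s3://example-training-data",
        "ExportPath": "s3://example-training-data/results",
    },
)
```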
213
Q

What are suitable use cases for Amazon FSx for Lustre? Choose 5.

  1. Media processing and transcoding
  2. Machine learning
  3. Click stream analysis
  4. ETL Jobs
  5. High performance computing
  6. Autonomous Vehicles
  7. Electronic Design Automation (EDA)
A
  1. Media processing and transcoding
  2. Machine learning
  3. Click stream analysis
  4. ETL Jobs
  5. High performance computing
  6. Autonomous Vehicles
  7. Electronic Design Automation (EDA)
214
Q

When should you use EFS vs FSx for Windows vs FSx for Lustre? Choose 3.

  1. Use EFS for Windows applications and Windows instances when you need a simple, scalable, fully managed elastic NFS file system.
  2. Use FSx for Windows File Server for Linux-based applications when you need centralized storage with native support for POSIX file system features and support for network access through the industry-standard Server Message Block (SMB) protocol.
  3. Use EFS for Linux applications and Linux instances when you need a simple, scalable, fully managed elastic NFS file system.
  4. Use FSx for Windows File Server for Windows-based applications when you need centralized storage with native support for Windows file system features and support for network access through the industry-standard Server Message Block (SMB) protocol.
  5. Use FSx for Lustre when you need to launch and run the popular, high-performance Lustre file system for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.
A
  1. Use EFS for Windows applications and Windows instances when you need a simple, scalable, fully managed elastic NFS file system.
  2. Use FSx for Windows File Server for Linux-based applications when you need centralized storage with native support for POSIX file system features and support for network access through the industry-standard Server Message Block (SMB) protocol.
  3. Use EFS for Linux applications and Linux instances when you need a simple, scalable, fully managed elastic NFS file system.
  4. Use FSx for Windows File Server for Windows-based applications when you need centralized storage with native support for Windows file system features and support for network access through the industry-standard Server Message Block (SMB) protocol.
  5. Use FSx for Lustre when you need to launch and run the popular, high-performance Lustre file system for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.