Section 14: Amazon S3 Security Flashcards
About how to encrypt objects in S3 buckets: which of the following is fake?
You can encrypt objects in S3 buckets using:
* A) Server Side Encryption (SSE). SSE with S3 Managed Keys (SSE-S3) is enabled by default.
* B) SSE with KMS keys stored in AWS KMS (SSE-KMS). Leverage AWS Key Management Service to manage encryption keys.
* C) SSE with Customer Provided Keys (SSE-C). when you want to manage your own encryption keys
* D) SSE with a Certificate Authority (CA) cert, when you want to use a CA to manage encryption keys.
* E) Client side encryption: encrypt everything client side and then upload to S3
D is fake. There is no true version of that one.
About SSE-S3. Which, if any, is false and what is true version?
- A) Server Side Encryption (SSE). SSE with S3 Managed Keys (SSE-S3) is enabled by default for new buckets and new objects.
- B) encryption type AES512
- C) must set header to "x-amz-server-side-encryption":"AES512"
- D) user -> uploads file -> object arrives in S3 -> S3 pairs it with an S3-owned key -> encryption performed with the object + the S3-owned key -> encrypted object stored in the S3 bucket
possibly important for the exam
B and C are false. True versions are (see the CLI sketch below):
* B) encryption type AES256
* C) must set header to "x-amz-server-side-encryption":"AES256"
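For reference, a minimal AWS CLI sketch of an upload with SSE-S3 requested explicitly (bucket, key, and file names here are made up; the CLI sets the x-amz-server-side-encryption header for you):
aws s3api put-object --bucket my-example-bucket --key report.txt --body report.txt --server-side-encryption AES256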
About SSE-KMS. Which, if any, is false (or missing critical information) and what is true version?
- A) SSE with KMS keys stored in AWS KMS (SSE-KMS). Leverage AWS Key Management Service to manage encryption keys.
- B) user control over the keys + audit key usage using CloudTrail
- C) Must set header "x-amz-server-side-encryption":"aws:kms"
- D) user -> uploads file -> in S3, the object is encrypted using the KMS key (S3 calls KMS) -> encrypted object stored in the bucket
- E) to see the object, you must have access to the object in the S3 bucket.
possibly important for the exam
E is missing info. Correct version: to read an SSE-KMS encrypted object, you must have access to the object in the S3 bucket AND access to the underlying KMS key (so permissions in both S3 and KMS). A minimal upload sketch follows below.
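A hedged AWS CLI sketch of an SSE-KMS upload (the bucket name is made up and <your-kms-key-id> is a placeholder; if you omit --ssekms-key-id, S3 falls back to the AWS-managed aws/s3 key):
aws s3api put-object --bucket my-example-bucket --key report.txt --body report.txt --server-side-encryption aws:kms --ssekms-key-id <your-kms-key-id>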
About SSE-KMS Limitations. Which, if any, is false and what is the true version?
- A) may be impacted by KMS limits
- B) when you upload, it calls the GenerateDataKey KMS API
- C) when you download, it calls the Decrypt KMS API
- D) Counts towards the KMS quota per second (5,500 / 10,000 / 30,000 req/s depending on the region)
- E) you can request a quota increase using the Service Quotas Console
- F) if you have a very high-throughput S3 bucket and everything is encrypted using KMS keys, you may run into KMS throttling
possibly important for the exam
All true. About F: I think he said this (KMS throttling) is something the exam may test you on. A hedged quota-increase sketch follows below.
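If you do hit KMS throttling, a hedged sketch of requesting a quota increase from the CLI instead of the Service Quotas console (the quota code and desired value are placeholders you would look up for your region):
aws service-quotas request-service-quota-increase --service-code kms --quota-code <quota-code-for-cryptographic-requests> --desired-value 10000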
About SSE-C. Which, if any, is false (or missing critical information) and what is true version?
- A) SSE with Customer-Provided Keys (SSE-C): when you want to manage your own encryption keys (see the upload sketch below)
possibly important for the exam
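A hedged AWS CLI sketch of an SSE-C upload (bucket and file names are made up, sse-c.key is a hypothetical local 256-bit key file, and the exact key-encoding expectations can vary by CLI version; remember SSE-C requires HTTPS):
aws s3 cp secret.txt s3://my-example-bucket/secret.txt --sse-c AES256 --sse-c-key fileb://sse-c.key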
About client-side encryption. Which, if any, is false (or missing critical information) and what is the true version?
- A) Client side encryption: encrypt everything client side and then upload to S3
- B) use client libraries such as the Amazon S3 Client-Side Encryption Library
- C) clients must encrypt data before sending to S3
- D) AWS will send decrypted info when sending info to client
- E) customer fully manages keys and encryption code
possibly important for the exam
D is false. Clients must decrypt the data themselves when retrieving it from S3 (AWS returns the ciphertext as-is). A rough do-it-yourself sketch follows below.
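A rough do-it-yourself sketch of client-side encryption with openssl before upload (file and key names are made up; the real Amazon S3 client-side encryption libraries handle key wrapping and metadata for you, this is just the idea):
openssl enc -aes-256-cbc -salt -pbkdf2 -in secret.txt -out secret.txt.enc -pass file:./local.key
aws s3 cp secret.txt.enc s3://my-example-bucket/secret.txt.enc
To read it back you download the ciphertext and run openssl enc -d with the same key; S3 never sees the plaintext or the key.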
T/F Encryption in transit (SSL/TLS)
* encryption in flight is also called SSL/TLS
* S3 exposes both HTTP and HTTPS endpoints, but HTTPS is recommended and is mandatory for SSE-C; most people use HTTPS by default now
* how to force encryption in transit: use the aws:SecureTransport condition key in a bucket policy
All T
Is this a correct example of how to force encryption in transit (using HTTPS) for all objects in your S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
True. It would have been wrong if
"aws:SecureTransport": "false"
had been set to "true" — that would deny HTTPS requests instead of HTTP ones (aws:SecureTransport evaluates to "false" for plain HTTP requests, so denying on "false" is what blocks HTTP). To attach the policy, see the CLI sketch below.
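A small CLI sketch for attaching a policy like this (assuming you saved the JSON above as force-https-policy.json; the bucket name is made up):
aws s3api put-bucket-policy --bucket my-bucket --policy file://force-https-policy.json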
T/F
creating your own KMS key costs you some money every month (roughly $1 per customer-managed key per month)
T
Default Encryption
* A) SSE-S3 is applied automatically to new objects stored in S3 (unless you say otherwise)
* B) you can force encryption using a bucket policy and refuse any API call to put an S3 object without encryption headers (SSE-KMS or SSE-C).
* C) Default encryption settings evaluated before bucket policies
C is false. Bucket policies are evaluated before default encryption settings (though I'm not sure whether that implies a strict priority; possibly). A default-encryption CLI sketch follows below.
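A hedged CLI sketch for changing a bucket's default encryption to SSE-KMS (bucket name and key id are placeholders; using "AES256" instead of "aws:kms" and dropping KMSMasterKeyID would put it back on SSE-S3):
aws s3api put-bucket-encryption --bucket my-example-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"<your-kms-key-id>"}}]}'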
Subsection: S3 Default Encryption
T/F CORS
* A) cross origin resource sharing
* B) origin = scheme (protocol) + host (domain) + port
* C) ex: in https://www.example.com the implied port is 443 for HTTPS. The domain is www.example.com and the protocol is HTTPS. And altogether, that makes the origin.
* D) http://example.com/app1 and https://example.com/app2 have the same origin
* E) http://example.com/app1 and http://other.example.com/app2 have different origins (note the different domains)
* F) If two origins are different, requests won’t be fulfilled unless the other origin allows for the requests using CORS Headers (ex: Access-Control-Allow-Origin)
D is false. Correct version is:
http://example.com/app1 and http://example.com/app2 have the same origin
In the question, the second address used https, so the protocols (and therefore the origins) differed.
Subsection: CORS
- Say we're on Firefox. We make an HTTPS request to https://www.example.com (the web server origin). The index.html file retrieved from https://www.example.com says "hey, I need some images from another site, https://www.other.com." https://www.other.com is our cross-origin web server; we call it cross-origin because it lives at a different origin than the page we loaded.
- The web browser (Firefox) has CORS security built in, so it sends a pre-flight request to the cross-origin server. The pre-flight is an HTTP OPTIONS request carrying headers such as Host: www.other.com and Origin: https://www.example.com — so OPTIONS is the HTTP method, and Host/Origin are headers sent along with it.
- Next, if the cross-origin web server (https://www.other.com, the one with the picture) is configured to allow Cross-Origin Resource Sharing with https://www.example.com, it answers the pre-flight with CORS headers: Access-Control-Allow-Origin: https://www.example.com ("yes, I allow this origin") and Access-Control-Allow-Methods: GET, PUT, DELETE ("and I allow these methods").
- If the web browser is happy with these CORS headers, it goes ahead and makes the actual request to the other server (https://www.other.com) to retrieve the pictures that our index.html is waiting for.
True
- if a client makes a cross origin request on our s3 bucket, we need to enable the correct CORS headers.
- You can allow for a specific origin or use * (for all origins)
popular exam question
True
T/F, assuming everything is set up correctly (static website hosting enabled, block-public-access off, good bucket policy that allows everyone to GET the objects in a bucket), the following CORS configuration should allow your content from site 2 to be read and used by your site 1 (assuming site 1 is named whatever is in the AllowedOrigins list)
[
  {
    "AllowedHeaders": [
      "Authorization"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "https://2023-283-tuesday-s3.s3.us-east-2.amazonaws.com/index-with-fetch-and-cors.html"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
True
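To attach a CORS configuration from the CLI, a small sketch (with <your-bucket> being the bucket that serves the cross-origin content; note the CLI wants the rules wrapped in a top-level "CORSRules" key, i.e. {"CORSRules": [ ...the array above... ]}, whereas the console editor takes the bare array):
aws s3api put-bucket-cors --bucket <your-bucket> --cors-configuration file://cors.json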
T/F (and provide corrected versions, if appropriate). If you turn on MFA for an S3 bucket, then MFA is required to
* A) permanently delete an object
* B) suspend versioning on the bucket
* C) enable versioning
* D) list deleted versions
C and D are false. Even if you have MFA enabled for an S3 bucket, you still won’t need to use MFA to do those things.
T/F (and provide corrected versions, if appropriate). About MFA Delete:
- A) Versioning does not have to be enabled on the bucket to use MFA Delete
- B) Anyone with appropriate IAM policies (access to the bucket) can enable/disable MFA Delete
Both are false! Here are the correct versions:
- A) To use MFA Delete, Versioning must be enabled on the bucket
- B) Only the bucket owner (root account) can enable/disable MFA Delete
Would this work if I was root? What if I was non-root?
aws s3api put-bucket-versioning --bucket somethingsomething1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::some-real-value 864127" --profile some-real-cli-profile
It would work if you were root and had also set up MFA for your root account; it would not work otherwise. At the time of writing, the CLI (with root credentials) is the only known way of enabling MFA Delete for an S3 bucket.
T/F
If you have MFA Delete enabled for a bucket (don't forget that versioning needs to be on prior to setting up MFA Delete), then you can't actually permanently delete something using the UI. You have to use something else (AWS CLI, AWS SDK, or the S3 REST API — or remove the MFA Delete requirement).
True
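A hedged sketch of what the permanent delete looks like from the CLI once MFA Delete is on (the key and version id are placeholders; the MFA value reuses the same made-up serial/code format as the put-bucket-versioning command above):
aws s3api delete-object --bucket somethingsomething1 --key some-object.txt --version-id <version-id> --mfa "arn:aws:iam::some-real-value 864127" --profile some-real-cli-profile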
What happened when you tried to enable MFA Delete from a non-root account, assuming you set everything else up correctly?
Nothing. I got a permissions error. It really does have to be the root account: root MFA, a root access key, and a CLI profile made with that root access key (so it has root permissions).
if any are false, what is/are the true version(s)?
S3 Access Logs
* A) may want to log all access for audit purposes
* B) any request made to S3 will be logged into another S3 bucket
* C) data can be analyzed
* D) target logging bucket can be in any aws region
D) is false.
True version: the target logging bucket must be in the same AWS region as the bucket you want the logs for.
log format: https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html
T/F
- A) you can set up S3 Access Logs on the same bucket for which you want the logging done.
- B) turning on logging automatically updates your bucket policy
A is False. If you do this you will create a logging loop and your bucket will grow exponentially.
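A hedged CLI sketch for turning on server access logging (bucket names are placeholders; per the answers above, the target must be a different bucket in the same region):
aws s3api put-bucket-logging --bucket <source-bucket> --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"<separate-logging-bucket>","TargetPrefix":"logs/"}}'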
S3 Access Logs
T/F (one of these is more like a caveat than a true false, which one do you think it is)
- A) can use s3 console, cli or sdk to generate a presigned url
- B) url expiration: console (1 min to 720 mins); cli (max of 168 hours)
- C) users given a pre-signed URL inherit the permissions of the user that generated the URL, for the GET / PUT
- D) Good way to give temp access to one file/object
Well, C is true but I suspect it's missing POST. Steph doesn't mention it on the slides, but later he says a user can use a pre-signed URL to upload a file, and he doesn't indicate the upload has to modify an existing object, so it seems like it could create a new object — which would make it a POST rather than a PUT (at least by some definitions of POST). A presign CLI sketch follows below.
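A minimal presign sketch from the CLI (bucket and object names are made up; --expires-in is in seconds, and 604800 seconds corresponds to the 168-hour CLI maximum mentioned in B):
aws s3 presign s3://my-example-bucket/premium-video.mp4 --expires-in 3600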
S3 Pre-signed URLS
T/F these are good examples of use cases for S3 presigned urls
* A) allow only logged in users to download a premium video from your S3 bucket
* B) allow an ever-changing list of users to download files by generating URLs dynamically
* C) allow temporarily a user to upload a file to a precise location in your S3 bucket (that does seem post-y)
T
S3 Pre-Signed URL
T/F
- A) S3 Access Points can be used to give many different users different kinds of access to an S3 bucket without turning the bucket policy into a big mess (see the sketch below).
True
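A hedged sketch of creating an access point from the CLI (account id, access point name, and bucket are all made up; each access point then gets its own access point policy instead of piling everything into one bucket policy):
aws s3control create-access-point --account-id <account-id> --name finance-ap --bucket my-example-bucket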
S3 Access Points