Management & Governance Flashcards
Use AWS management tools to monitor, govern, and optimize infrastructure performance and operational efficiency. (19 cards)
A financial institution is designing the architecture for a new data processing platform on AWS. The institution uses organizational units (OUs) in AWS Organizations to manage its accounts. To comply with regulatory requirements, all Amazon EC2 instances must include a compliance-level tag with values of compliant or noncompliant. IAM users must not be allowed to create EC2 instances without this tag or modify the tag after creation.
Which combination of steps will meet these requirements?
(Select TWO.)
- In AWS Organizations, create a service control policy (SCP) to deny the creation of EC2 instances if the compliance-level tag is not specified. Attach the SCP to the appropriate OU.
- In AWS Organizations, create a tag policy to enforce the use of the compliance-level tag with the required values. Attach the tag policy to the appropriate OU to ensure EC2 instances adhere to the tagging requirements.
- Use AWS Config to check for compliance-level tags on EC2 instances. Configure AWS Config to remediate noncompliant resources by automatically adding the required tags to EC2 instances.
- Create an IAM policy that denies the deletion of tags on EC2 instances. Assign this policy to all IAM users who manage EC2 resources in the organization’s accounts.
- Use AWS Lambda with an EventBridge rule to trigger a function whenever a new EC2 instance is created. Configure the function to terminate any instance that does not include the compliance-level tag with the correct values.
1. In AWS Organizations, create a service control policy (SCP) to deny the creation of EC2 instances if the compliance-level tag is not specified. Attach the SCP to the appropriate OU.
2. In AWS Organizations, create a tag policy to enforce the use of the compliance-level tag with the required values. Attach the tag policy to the appropriate OU to ensure EC2 instances adhere to the tagging requirements.
SCPs are used to enforce policies across accounts in an organization. By denying the creation of EC2 instances without the required tag, this SCP ensures that all instances are tagged at creation.
Tag policies enforce tagging standards for AWS resources. By attaching the tag policy to the OU, you ensure that all EC2 instances follow the defined tagging requirements.
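As an illustration only (a minimal sketch covering the SCP half of the answer; the OU ID is a placeholder), such a guardrail could be created and attached with boto3:

```python
import json
import boto3

# Hypothetical OU ID -- replace with the OU that holds the workload accounts.
TARGET_OU_ID = "ou-examplerootid-exampleouid"

# Deny RunInstances unless the compliance-level tag is supplied with an allowed value.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireComplianceLevelTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestTag/compliance-level": ["compliant", "noncompliant"]
                }
            },
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Deny EC2 launches without a compliance-level tag",
    Name="require-compliance-level-tag",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=TARGET_OU_ID)
```

A tag policy attached to the same OU then standardizes the allowed values for the compliance-level key.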
- AWS Config cannot prevent resource creation or modification. It can detect noncompliance but requires additional tools like Lambda for remediation, which introduces operational overhead.
- IAM policies apply only at the user or role level and cannot enforce organization-wide restrictions. SCPs and tag policies are better suited for this requirement.
- Using Lambda for real-time remediation introduces operational complexity. SCPs and tag policies are more efficient and simpler to manage.
A retail company operates a multi-tier application that includes a web server layer running on Amazon EC2 instances and a database layer hosted on Amazon RDS. The company is preparing for an annual sales event and anticipates a significant surge in traffic to its application. The operations team wants to monitor the performance of the EC2 instances and database, analyzing metrics with a granularity of 1 minute to ensure quick detection of bottlenecks during the event.
What should the solutions architect do to meet this requirement?
- Enable detailed monitoring on all EC2 instances and use Amazon CloudWatch metrics for analysis.
- Configure Amazon CloudWatch Logs Insights to aggregate application logs for both the EC2 instances and Amazon RDS. Use Amazon QuickSight for detailed visualization.
- Configure an Amazon CloudWatch Events rule to trigger an AWS Lambda function that collects custom metrics from the EC2 instances and Amazon RDS. Use Amazon CloudWatch dashboards to display the metrics.
- Use AWS Systems Manager to collect logs from the EC2 instances and Amazon RDS. Store the logs in Amazon S3 and use Amazon Athena to query performance data.
1. Enable detailed monitoring on all EC2 instances and use Amazon CloudWatch metrics for analysis.
Detailed monitoring provides metrics at 1-minute intervals, which allows the operations team to quickly detect and analyze potential bottlenecks during the sales event. CloudWatch natively supports metrics for both EC2 and RDS.
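A minimal sketch of enabling detailed monitoring programmatically (the instance IDs are placeholders; Amazon RDS already publishes its CloudWatch metrics at 1-minute granularity by default):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance IDs for the web tier -- replace with your own.
web_tier_instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# Enable detailed (1-minute) monitoring; basic monitoring reports at 5-minute intervals.
ec2.monitor_instances(InstanceIds=web_tier_instances)
```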
- While Logs Insights is useful for log analysis, it does not directly provide 1-minute metric granularity required for real-time monitoring.
- Using CloudWatch detailed monitoring is a simpler and more efficient solution compared to creating custom metrics with Lambda.
- This solution is not suitable for real-time monitoring and involves unnecessary operational overhead.
A company is launching a new internal platform for managing multiple independent projects. Each project will require its own dedicated AWS account for isolation. The company needs a solution that automates account creation, applies mandatory security guardrails, and centrally manages shared networking resources such as VPNs and subnets for the accounts. The solution must minimize manual effort and ensure compliance with security standards.
Which solution will meet these requirements with the LEAST operational overhead?
- Use AWS Control Tower to automate account provisioning. Create a dedicated networking account with a centralized VPC. Use AWS Resource Access Manager (AWS RAM) to share subnets with project accounts. Enforce security guardrails by using AWS Control Tower guardrails.
- Use AWS Organizations to create project accounts manually. Deploy a VPC in a centralized networking account. Use AWS RAM to share subnets. Manually configure security policies in each account.
- Use AWS Control Tower to set up accounts with pre-configured VPCs in each project account. Connect these VPCs to a central networking account through a transit gateway. Enforce security controls with AWS Config.
- Use AWS Organizations to create accounts for each project. Deploy a shared VPC in a centralized account. Configure AWS Firewall Manager to enforce security controls. Manually configure routing for project account traffic through the shared VPC.
1. Use AWS Control Tower to automate account provisioning. Create a dedicated networking account with a centralized VPC. Use AWS Resource Access Manager (AWS RAM) to share subnets with project accounts. Enforce security guardrails by using AWS Control Tower guardrails.
AWS Control Tower simplifies account setup with built-in security guardrails. It minimizes operational overhead by automating account provisioning and guardrail enforcement, while AWS RAM handles sharing the centralized VPC's subnets with the project accounts.
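A minimal sketch, assuming a single shared subnet and one project account (all ARNs and account IDs are placeholders), of how the networking account could share subnets through AWS RAM:

```python
import boto3

ram = boto3.client("ram")

# Hypothetical ARNs and account IDs -- replace with the shared-VPC subnets
# in the networking account and the project accounts (or an OU ARN).
response = ram.create_resource_share(
    name="shared-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0",
    ],
    principals=["222222222222"],
    allowExternalPrincipals=False,  # keep sharing inside the organization
)
print(response["resourceShare"]["resourceShareArn"])
```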
- Manual account setup and security configuration increase operational overhead.
- Configuring a transit gateway and enforcing controls through AWS Config introduces more complexity than required for this use case.
- It requires more manual effort to configure routing and lacks automation in account creation and security enforcement.
An Architect needs to find a way to automatically and repeatably create many member accounts within an AWS Organization. The accounts also need to be moved into an OU and have VPCs and subnets created.
What is the best way to achieve this?
- Use the AWS Organizations API
- Use CloudFormation with scripts
- Use the AWS Management Console
- Use the AWS CLI
2. Use CloudFormation with scripts
The best solution is to use a combination of scripts and AWS CloudFormation, leveraging the AWS Organizations API as well. This combination can meet all of the requirements.
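A rough sketch of the scripted portion using the AWS Organizations API via boto3 (the email address and all IDs are placeholders); the VPC and subnets would then be created in the new account from a CloudFormation template:

```python
import time
import boto3

org = boto3.client("organizations")

# Hypothetical values -- replace with your own email address and OU IDs.
request = org.create_account(Email="team-a@example.com", AccountName="team-a")
request_id = request["CreateAccountStatus"]["Id"]

# Poll until the account is created, then move it into the target OU.
while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=request_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(10)

org.move_account(
    AccountId=status["AccountId"],
    SourceParentId="r-examplerootid",
    DestinationParentId="ou-examplerootid-exampleouid",
)

# The VPC and subnets would then be created in the new account, for example by
# deploying a CloudFormation template through a StackSet or an assumed role.
```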
- You can create member accounts with the AWS Organizations API. However, you cannot use that API to configure the account and create VPCs and subnets.
- Using the AWS Management Console is not a method of automatically creating the resources.
- You can perform all of these tasks using the AWS CLI, but it is better to automate the process with AWS CloudFormation.
Reference:
How to Use AWS Organizations to Automate End-to-End Account Creation
A financial services company manages its web application on Amazon EC2 instances. The EC2 instances are registered in an IP address-type target group behind an Application Load Balancer (ALB). The company uses AWS Systems Manager for patching and routine maintenance of the instances.
To meet security compliance requirements, the company must ensure that EC2 instances are temporarily removed from service during patching to prevent serving traffic. During a recent patching attempt, the company experienced application errors and traffic disruptions.
Which combination of solutions will resolve these issues?
(Select TWO.)
- Change the target type of the target group from IP address type to instance type and re-register the instances.
- Use the Systems Manager Maintenance Windows feature to schedule patching and automatically deregister instances from the ALB during updates.
- Implement the AWSEC2-PatchLoadBalancerInstance Systems Manager Automation document to manage the patching process for EC2 instances behind the ALB.
- Configure ALB health checks to automatically remove unhealthy instances during patching. Use Systems Manager Run Command to apply the patches manually.
- Use Systems Manager State Manager to schedule patching jobs and ensure instances are deregistered and re-registered with the ALB after patching is complete.
2. Use the Systems Manager Maintenance Windows feature to schedule patching and automatically deregister instances from the ALB during updates.
3. Implement the AWSEC2-PatchLoadBalancerInstance Systems Manager Automation document to manage the patching process for EC2 instances behind the ALB.
Maintenance Windows coordinate the patching process, including removing instances from the ALB during updates, ensuring compliance and preventing traffic disruptions.
This Automation document automates the removal of instances from the ALB, applies patches, and re-registers the instances after patching. This eliminates manual errors and ensures seamless updates without disrupting application traffic.
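As a hedged sketch (the window and target IDs are placeholders, and the runbook's own inputs would be supplied through TaskInvocationParameters), the Automation document can be registered as a maintenance window task like this:

```python
import boto3

ssm = boto3.client("ssm")

# Hypothetical window and target IDs -- replace with your own maintenance
# window and the registered target group of web-tier instances.
ssm.register_task_with_maintenance_window(
    WindowId="mw-0123456789abcdef0",
    Targets=[{"Key": "WindowTargetIds", "Values": ["e1e2e3e4-example-target-id"]}],
    TaskArn="AWSEC2-PatchLoadBalancerInstance",
    TaskType="AUTOMATION",
    Priority=1,
    MaxConcurrency="1",  # patch one instance at a time so the ALB always has healthy targets
    MaxErrors="1",
)
```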
- Changing the target type does not address the root cause of the errors. The patching process itself requires proper management of instance removal and re-registration.
- Relying on ALB health checks for patching introduces unnecessary delays. Manually applying patches with Run Command is operationally inefficient compared to using Automation documents.
- State Manager is not designed to dynamically handle instance deregistration and re-registration with load balancers. Automation documents like AWSEC2-PatchLoadBalancerInstance are more appropriate.
A financial institution with many departments wants to migrate to the AWS Cloud from their data center. Each department should have its own AWS account with preconfigured, limited access to authorized services based on each team’s needs, following the principle of least privilege.
What actions should be taken to ensure compliance with these security requirements?
- Use AWS CloudFormation to create new member accounts and networking and use IAM roles to allow access to approved AWS services.
- Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts.
- Configure AWS Organizations with SCPs and create new member accounts. Use AWS CloudFormation templates to configure the member account networking.
- Deploy a Landing Zone within AWS Organizations. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts.
2. Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use the Landing Zone to create new member accounts and networking. Grant the department’s AWS power user permissions on the created accounts.
AWS Control Tower automates the setup of a new landing zone using best practices blueprints for identity, federated access, and account structure.
The account factory automates provisioning of new accounts in your organization. As a configurable account template, it helps you standardize the provisioning of new accounts with pre-approved account configurations. You can configure your account factory with pre-approved network configuration and region selections.
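A rough sketch of provisioning an account through Account Factory via the Service Catalog API; the product and artifact IDs are placeholders, and the parameter keys shown are assumptions to verify against the Account Factory product in your landing zone:

```python
import boto3

sc = boto3.client("servicecatalog")

# IDs and parameter keys below are assumptions -- check the Account Factory
# product in your landing zone for the exact values.
sc.provision_product(
    ProductId="prod-exampleaccountfactory",         # the Account Factory product
    ProvisioningArtifactId="pa-exampleartifactid",  # the product version to launch
    ProvisionedProductName="dept-analytics-account",
    ProvisioningParameters=[
        {"Key": "AccountEmail", "Value": "analytics@example.com"},
        {"Key": "AccountName", "Value": "dept-analytics"},
        {"Key": "ManagedOrganizationalUnit", "Value": "Departments"},
        {"Key": "SSOUserEmail", "Value": "analytics-admin@example.com"},
        {"Key": "SSOUserFirstName", "Value": "Analytics"},
        {"Key": "SSOUserLastName", "Value": "Admin"},
    ],
)
```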
- Although you could create new AWS accounts with AWS CloudFormation, the easiest way to do so is by using AWS Control Tower.
- You can create new accounts using AWS Organizations; however, the easiest way to do this is by using the AWS Control Tower service.
- Landing Zones do not get deployed within AWS Organizations.
Reference:
AWS Control Tower
As part of a company’s shift to the AWS cloud, they need to gain an insight into their total on-premises footprint. They have discovered that they are currently struggling with managing their software licenses. They would like to maintain a hybrid cloud setup, with some of their licenses stored in the cloud with some stored on-premises.
What actions should be taken to ensure they are managing the licenses appropriately going forward?
- Use AWS Secrets Manager to store the licenses as secrets to ensure they are stored securely
- Use the AWS Key Management Service to handle the license keys safely and store them securely
- Use AWS License Manager to manage the software licenses
- Use Amazon S3 with governance lock to manage the storage of the licenses
3. Use AWS License Manager to manage the software licenses
AWS License Manager makes it easier to manage your software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. AWS License Manager lets administrators create customized licensing rules that mirror the terms of their licensing agreements.
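A minimal sketch of defining a license configuration with boto3 (the vendor, counts, and counting type are illustrative assumptions):

```python
import boto3

lm = boto3.client("license-manager")

# Example only: a self-managed license tracked by vCPU count, with a hard limit.
config = lm.create_license_configuration(
    Name="oracle-db-enterprise",
    Description="Oracle Database EE licenses usable on AWS and on premises",
    LicenseCountingType="vCPU",  # other options include Instance, Core, and Socket
    LicenseCount=64,
    LicenseCountHardLimit=True,  # block launches that would exceed the entitlement
)
print(config["LicenseConfigurationArn"])
```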
- AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. This does not include license keys.
- AWS Key Management Service (AWS KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications, not license keys.
- Amazon S3 is not designed to store software licenses.
Reference:
AWS License Manager
A company has divested a single business unit and needs to move the AWS account owned by the business unit to another AWS Organization.
How can this be achieved?
- Create a new account in the destination AWS Organization and migrate resources
- Create a new account in the destination AWS Organization and share the original resources using AWS Resource Access Manager
- Migrate the account using AWS CloudFormation
- Migrate the account using the AWS Organizations console
4. Migrate the account using the AWS Organizations console
Accounts can be migrated between organizations. To do this you must have root or IAM access to both the member and management accounts. Resources will remain under the control of the migrated account.
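A rough sketch of the API calls involved; each step runs under a different principal (the old management account, the new management account, and then the member account itself), so the single-session code below is illustrative only and the account ID is a placeholder:

```python
import boto3

ACCOUNT_ID = "111122223333"  # placeholder for the divested account

# 1. From the current organization's management account: remove the member account.
old_mgmt = boto3.client("organizations")
old_mgmt.remove_account_from_organization(AccountId=ACCOUNT_ID)

# 2. From the destination organization's management account: invite the account.
new_mgmt = boto3.client("organizations")
handshake = new_mgmt.invite_account_to_organization(
    Target={"Id": ACCOUNT_ID, "Type": "ACCOUNT"}
)

# 3. From the member account itself: accept the invitation to join the new organization.
member = boto3.client("organizations")
member.accept_handshake(HandshakeId=handshake["Handshake"]["Id"])
```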
- You do not need to create a new account in the destination AWS Organization as you can just migrate the existing account.
- You do not need to use AWS CloudFormation. You can use the Organizations API or the AWS CLI when there are many accounts to migrate, and CloudFormation could be used for any additional automation, but it is not necessary for this scenario.
Reference:
How do I move an account from an existing organization to another organization in AWS Organizations?
A large company is currently using multiple AWS accounts as part of its cloud deployment model, and these accounts are currently structured using AWS Organizations. A Solutions Architect has been tasked with limiting access to an Amazon S3 bucket to only users of accounts that are enrolled with AWS Organizations. The Solutions Architect wants to avoid listing the many dozens of account IDs in the bucket policy, as there are many accounts that change frequently.
Which strategy meets these requirements with the LEAST amount of effort?
- Use attribute-based access control (ABAC) by referencing tags that indicate whether or not an account is enrolled in AWS Organizations.
- Use the global key of AWS Organizations within a bucket policy using the aws:PrincipalOrgID key to allow access only to accounts which are part of the Organization.
- Use AWS Config and AWS Lambda functions to make remediations to the bucket policy as and when new accounts are created and tagged as not being part of AWS Organizations. Update the S3 bucket policy accordingly.
- Add all the non-organizational accounts to an Organizational Unit (OU) and attach a Service Control Policy (SCP) which denies access to the specific Amazon S3 bucket.
2. Use the global key of AWS Organizations within a bucket policy using the aws:PrincipalOrgID key to allow access only to accounts which are part of the Organization.
The aws:PrincipalOrgID global condition key provides a simpler alternative to manually listing and updating the account IDs of every AWS account that exists within an Organization. For example, an Amazon S3 bucket policy can allow members of any account in the ‘123456789’ organization to add objects to the ‘mydctbucket’ bucket.
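A minimal sketch of such a bucket policy applied with boto3; the organization ID mirrors the example above, although real organization IDs look like o-xxxxxxxxxx:

```python
import json
import boto3

BUCKET = "mydctbucket"
ORG_ID = "123456789"  # placeholder organization ID

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutFromOrgMembers",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Only principals from accounts inside the organization match this condition.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```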
- This could be a viable option; however, maintaining an accurate tagging policy, as opposed to referencing the PrincipalOrgID, would be much more difficult.
- Every time an account is added or removed from the organization this workflow would have to fire. This solution would need to be built and maintained, whereas it is much easier to refer to the PrincipalOrgID once and avoid needing to change the Bucket Policy.
- You can only use Organization Units (OUs) and Service Control Policies (SCPs) with accounts that are a part of AWS Organizations – meaning this solution could not work.
Reference:
AWS global condition context keys
A financial services company has a large, multi-Region footprint on AWS. A recent security audit highlighted some issues that must be addressed. The company must track all configuration changes affecting AWS resources and have detailed records of who has accessed the AWS environment. The data should include information such as which user has logged in and which API calls they made.
What actions should a Solutions Architect take to meet these requirements?
- Use Amazon CloudWatch to track configuration changes and AWS Config to record API calls and track access patterns in the AWS Cloud.
- Use AWS Config to track configuration changes and AWS CloudTrail to record API calls and track access patterns in the AWS Cloud.
- Use AWS Config to track configuration changes and Amazon EventBridge to record API calls and track access patterns in the AWS Cloud.
- Use Amazon Macie to track configuration changes and Amazon CloudTrail to record API calls and track access patterns in the AWS Cloud.
2. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls and track access patterns in the AWS Cloud.
AWS Config is a service used to track and remediate any unauthorized configuration changes made within your AWS account. In this example, AWS Config can be used together with AWS CloudTrail, which keeps detailed logs of all API calls made within the account, such as who logged in, which AWS Identity and Access Management (IAM) role is being used, and how they interact with the AWS Cloud.
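A minimal sketch of the CloudTrail half of the solution (the trail name and bucket are placeholders, and the bucket needs a policy that allows CloudTrail to write to it):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Bucket name is a placeholder -- the bucket's policy must let CloudTrail deliver logs.
trail = cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs-bucket",
    IsMultiRegionTrail=True,          # capture API activity across all Regions
    IncludeGlobalServiceEvents=True,  # include IAM and other global-service calls
)
cloudtrail.start_logging(Name=trail["Name"])
```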
- Amazon CloudWatch does not track configuration changes; it tracks performance metrics. AWS Config does not track API calls; it tracks configuration changes.
- Although AWS Config would work in this scenario, Amazon EventBridge is a serverless event bus used to build event-driven architectures, so it cannot be used for tracking API calls.
- Amazon Macie is used with Amazon S3 to detect sensitive PII data, which has nothing to do with tracking configuration changes.
Reference:
AWS Config
A financial services company is currently using 500 Amazon EC2 instances to run batch-processing workloads to analyze financial information on a periodic basis. The organization needs to install a third-party tool on all these instances as quickly and as efficiently as possible and will have to carry out similar tasks on an ongoing basis going forward. The solution also needs to scale for the addition of future EC2 instances.
What should a solutions architect do to meet these requirements in the easiest way possible?
- Create an AWS Lambda Function which will make configuration changes to all the EC2 instances. Validate the tool has been installed using another Lambda function.
- Use AWS Systems Manager Patch Manager to install the tool on all the EC2 instances within a single patch.
- Use AWS Systems Manager Maintenance Windows to install the tool on all the EC2 instances within a set period of time.
- Use AWS Systems Manager Run Command to run a custom command that installs the tool on all the EC2 instances.
4. Use AWS Systems Manager Run Command to run a custom command that installs the tool on all the EC2 instances.
AWS Systems Manager Run Command is designed to run commands across a large group of instances without having to SSH into each instance and run the same command multiple times. You can easily run the same command on all the managed nodes that are part of the workload, without having to maintain access keys or individual access for each instance.
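A minimal sketch of such a command (the tag key, tag value, and install command are placeholders for the third-party tool):

```python
import boto3

ssm = boto3.client("ssm")

# Target instances by tag so future EC2 instances are picked up automatically.
response = ssm.send_command(
    Targets=[{"Key": "tag:Workload", "Values": ["batch-processing"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo yum install -y example-analytics-agent"]},
    MaxConcurrency="10%",  # roll out gradually across the 500 instances
    MaxErrors="5",
)
print(response["Command"]["CommandId"])
```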
- Whilst this may be possible, the code required to create and test this solution would be difficult to design and would not scale as effectively as AWS Systems Manager Run Command.
- AWS Systems Manager Patch Manager is designed to apply patches to EC2 instances and is not designed to run commands across a large group of instances.
- AWS Systems Manager Maintenance Windows is designed to define a set window of time in which your EC2 instances will be patched and is not intended for running one-off commands across a large group of instances.
Reference:
AWS Systems Manager Run Command
A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.
Which solution will meet this requirement with the LEAST operational overhead?
- Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
- Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
- Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream’s source. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream’s destination.
- Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
1. Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
You can configure a CloudWatch Logs log group to stream data it receives to your Amazon OpenSearch Service cluster in near real-time through a CloudWatch Logs subscription. This is the solution that requires the least operational overhead. Subscription filters can also be created for Kinesis, Kinesis Data Firehose, and AWS Lambda.
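A minimal sketch of creating the subscription filter programmatically, assuming the console wizard has already created the log-delivery Lambda function and granted CloudWatch Logs permission to invoke it (the log group name and function ARN are placeholders):

```python
import boto3

logs = boto3.client("logs")

# The destination is the log-delivery Lambda function used for OpenSearch streaming.
logs.put_subscription_filter(
    logGroupName="/app/production",
    filterName="stream-to-opensearch",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn="arn:aws:lambda:us-east-1:111122223333:function:LogsToOpenSearch_example",
)
```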
- This is a possible solution but requires more operational overhead as it includes an additional service which must also be configured and managed.
- This would require more operational overhead as you must write and manage the code for the function yourself.
- Since the requirement is simply to deliver the logs to Amazon OpenSearch Service with no further processing, installing and managing the Kinesis Agent and Kinesis Data Streams adds unnecessary components; even Kinesis Data Firehose would be a better candidate than this option.
Reference:
Streaming CloudWatch Logs data to Amazon OpenSearch Service
To trace a recent production incident, a product manager needs to view logs in Amazon CloudWatch. These logs are linked to events over the course of a week and may be needed in the future if incidents occur again. The product manager doesn’t have administrative access to the AWS account, as it is managed by a third-party management company.
According to the principle of least privilege, which of the options below will fulfill the requirement to provide the necessary access for the product manager?
- Share the dashboard from the CloudWatch console. Enter the client’s email address and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
- Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
- Create an IAM user for the company’s employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
- Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.
1. Share the dashboard from the CloudWatch console. Enter the client’s email address and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
Below is the sequence for sharing the dashboard from the CloudWatch console:
CloudWatch > Dashboards > select your dashboard > Share dashboard > Share your dashboard and require a username and password > enter the email address
You can share your CloudWatch dashboards with people who do not have direct access to your AWS account. This enables you to share dashboards across teams, with stakeholders, and with people external to your organization. You can even display dashboards on big screens in team areas or embed them in Wikis and other webpages.
- If the dashboard needs to be shared with additional users, this option increases manual effort every time and hence is not an optimal option.
- This option also involves a lot of manual steps, and as the number of dashboard recipients increases, so does the manual effort, so this is not an optimal option.
- Exposing a bastion server isn’t required here for sharing the dashboard. Bastion servers are meant to be jump boxes that allow access to EC2 instances, which isn’t the ask in the question, so this is also an incorrect option.
Reference:
Sharing CloudWatch dashboards
A digital marketing agency manages numerous client websites and apps on AWS. Each AWS resource is supposed to be tagged by the account for tracking and backup purposes. The agency wants to ensure that all AWS resources, including untagged ones, are backed up properly to minimize data loss risks.
Which solution will meet these requirements with the LEAST operational overhead?
- Use AWS Config to identify all untagged resources and tag them programmatically. Then, use AWS Backup to automate the backup of all AWS resources based on tags.
- Manually search for all untagged resources in each AWS service. Once identified, tag them appropriately and set up AWS Backup for each service separately.
- Rely on each account owner to identify their untagged resources and then use AWS Backup for backing up.
- Use AWS Lambda to periodically scan for untagged resources, add necessary tags, and then set up AWS Backup.
1. Use AWS Config to identify all untagged resources and tag them programmatically. Then, use AWS Backup to automate the backup of all AWS resources based on tags.
This solution is the most operationally efficient due to the powerful combination of AWS Config and AWS Backup.
AWS Config: This service enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. You can use AWS Config to review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. In this scenario, AWS Config can be utilized to identify all resources that lack proper tags.
Tagging: Tags can be added to AWS resources programmatically. By tagging resources, you organize them into groups and subgroups, which can be based on purpose, owner, environment, or other criteria. In this context, tagging resources allows AWS Backup to identify and group resources that need to be backed up.
AWS Backup: AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. You can use AWS Backup to protect several AWS resource types, including Amazon EBS volumes, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and AWS Storage Gateway volumes. It offers a centralized dashboard where you can manage all backups and allows you to automate and monitor backups across AWS services using policies.
With AWS Config identifying and tagging untagged resources, and AWS Backup automating the backup of tagged resources, this solution requires minimal operational overhead while ensuring all resources are adequately backed up.
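A minimal sketch under the assumption that backups are keyed off a backup=true tag; the plan ID, role ARN, and tag key/value are placeholders (the Config rule flags noncompliant resources, which a remediation step or script would then tag):

```python
import boto3

config = boto3.client("config")

# Managed rule that flags resources missing the tag AWS Backup will key on.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-backup-tag",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "backup", "tag1Value": "true"}',
    }
)

backup = boto3.client("backup")

# Attach a tag-based selection to an existing backup plan.
backup.create_backup_selection(
    BackupPlanId="example-backup-plan-id",
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "backup", "ConditionValue": "true"}
        ],
    },
)
```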
- Searching for untagged resources manually in each service and setting up AWS Backup separately for each one would require a significant amount of operational overhead.
- Relying on individual account owners could result in inconsistency and increase the risk of missed resources or backups. Centralized backup management using AWS Backup is more efficient.
- Although AWS Lambda could be used to scan for untagged resources and add necessary tags, this would require developing and maintaining a custom script. AWS Config can handle this process with less operational overhead.
Reference:
Select AWS services to backup
A multinational enterprise plans to transition from numerous independent AWS accounts to a structured, multi-account AWS setup. The enterprise anticipates creating multiple AWS accounts to cater to various departments. The enterprise seeks to authenticate access to these AWS accounts using a centralized corporate directory service.
What combination of steps should a solutions architect suggest to meet these needs?
(Select TWO.)
- Create a new AWS Organizations entity with all features enabled. Create the new AWS accounts within the organization.
- Install and configure AWS Control Tower for centralized account management. Incorporate AWS Identity Center to manage identity.
- Deploy AWS Directory Service and integrate it with the corporate directory service. Set up AWS Identity Center for authentication across accounts.
- Set up an Amazon Cognito identity pool and configure AWS Identity Center to accept Amazon Cognito authentication.
- Establish an AWS Transit Gateway for centralized network management, linking AWS accounts.
1. Create a new AWS Organizations entity with all features enabled. Create the new AWS accounts within the organization.
3. Deploy AWS Directory Service and integrate it with the corporate directory service. Set up AWS Identity Center for authentication across accounts.
AWS Organizations provides policy-based management for multiple AWS accounts. With Organizations, you can create member accounts that are part of your organization and centrally manage your accounts.
AWS Directory Service allows you to connect your AWS resources with an existing on-premises Microsoft Active Directory or to set up a new, stand-alone directory in the AWS Cloud. AWS Identity Center makes it easy to centrally manage access to multiple AWS accounts and business applications and provide users with single sign-on access to all their assigned accounts and applications from one place.
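A rough sketch of the two pieces (every network value, credential, and domain name below is a placeholder):

```python
import boto3

org = boto3.client("organizations")
org.create_organization(FeatureSet="ALL")  # all features are required for policies and Identity Center

ds = boto3.client("ds")

# AD Connector proxies authentication requests to the existing corporate directory.
ds.connect_directory(
    Name="corp.example.com",
    Password="<service-account-password>",
    Size="Small",
    ConnectSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0aaaa1111", "subnet-0bbbb2222"],
        "CustomerDnsIps": ["10.0.0.10", "10.0.0.11"],
        "CustomerUserName": "ad-connector-svc",
    },
)

# AWS Identity Center would then be pointed at this directory as its identity source,
# which is typically completed in the console.
```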
- AWS Control Tower does have certain benefits, but it doesn’t directly cater to the company’s need for centralized corporate directory service integration. However, it could be used in conjunction with AWS Identity Center for user access management.
- Amazon Cognito is primarily used to add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. It isn’t typically used in multi-account management scenarios and isn’t directly relevant to the requirement for corporate directory service integration.
- AWS Transit Gateway connects VPCs and on-premises networks through a central hub. It is a network transit hub, not a user authentication and management service. It doesn’t directly address the need for centralized corporate directory service integration.
Reference:
Connect to a Microsoft AD directory
A software development firm uses AWS to run their compute instances across multiple accounts. These instances are individually billed. The company recently purchased an EC2 Reserved Instance (RI) for an ongoing project. However, due to the completion of that project, a significant number of EC2 instances were decommissioned. The company now wishes to utilize the benefits of their unused Reserved Instance across their other AWS accounts.
Which combination of steps should the company follow to achieve this?
(Select TWO.)
- Enable Reserved Instance sharing in the billing preferences section of the AWS Management Console for the account that purchased the existing RI.
- Establish an AWS Organization in the AWS account that purchased the RI and hosts the remaining active EC2 instances. Invite the other AWS accounts to join this organization from the management account.
- From the AWS Organizations management account, utilize AWS Resource Access Manager (AWS RAM) to share the Reserved Instance with other accounts.
- Enable Reserved Instance sharing in the billing preferences section of the AWS Management Console for the management account.
- Use AWS Organizations to establish a new payer account and invite the other accounts to join this organization.
1. Enable Reserved Instance sharing in the billing preferences section of the AWS Management Console for the account that purchased the existing RI.
2. Establish an AWS Organization in the AWS account that purchased the RI and hosts the remaining active EC2 instances. Invite the other AWS accounts to join this organization from the management account.
Just like Savings Plans, the benefits of Reserved Instances can be applied across accounts if those accounts are part of the same AWS Organization and sharing is enabled. This can be achieved by enabling Reserved Instance sharing in the AWS Management Console for the account that purchased the RI.
Setting up an AWS Organization from the account that purchased the Reserved Instance allows you to group your accounts. After the organization is set up, you can invite other accounts to join the organization, enabling you to share the benefits of the Reserved Instance across all accounts in the organization.
- AWS RAM does not apply to Reserved Instances. It is used to share other resources like Subnets, Transit Gateways, etc.
- Reserved Instance sharing needs to be enabled in the account that purchased the RI, not the management account.
- Creating a new payer account is not necessary. It would be more efficient to use the existing account that purchased the Reserved Instance.
Reference:
What is AWS Billing and Cost Management?
A company operates multiple AWS accounts under AWS Organizations. To better manage the costs, the company wants to allocate different budgets for each of these accounts. The company also wants to prevent additional resource provisioning in an AWS account if it reaches its allocated budget before the end of the budget period.
Which combination of solutions will meet these requirements?
(Select THREE.)
- Use AWS Budgets to establish different budgets for each AWS account. Configure the budgets in the Billing and Cost Management console.
- Use AWS Budgets in the AWS Management Console to set up budgets and specify the cost threshold for each AWS account.
- Set up an IAM role with the necessary permissions that allow AWS Budgets to execute budget actions.
- Create an IAM user with adequate permissions to allow AWS Budgets to enforce budget actions.
- Configure alerts in AWS Budgets to notify the company when an account is about to reach its budget threshold. Then use a budget action that links to the IAM role to prevent additional resource provisioning.
- Set up an alert in AWS Budgets to notify the company when a particular account meets its budget threshold. Enable real-time monitoring for immediate notification.
1. Use AWS Budgets to establish different budgets for each AWS account. Configure the budgets in the Billing and Cost Management console.
3. Set up an IAM role with the necessary permissions that allow AWS Budgets to execute budget actions.
5. Configure alerts in AWS Budgets to notify the company when an account is about to reach its budget threshold. Then use a budget action that links to the IAM role to prevent additional resource provisioning.
AWS Budgets is a tool that enables you to set custom cost and usage budgets. You can set your budget amount, and AWS provides you with estimated charges and forecasted costs for your AWS usage. Configuring the budgets in the Billing and Cost Management console is a recommended step.
AWS Budgets can execute budget actions (like preventing additional resource provisioning) using an IAM role with the necessary permissions.
Configuring alerts in AWS Budgets and linking a budget action to an IAM role for automatic prevention of additional resource provisioning is a correct and efficient way to manage costs.
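A hedged sketch of such a budget action (every ARN, account ID, and name is a placeholder; the referenced IAM policy is assumed to deny resource-provisioning actions, and the execution role lets AWS Budgets apply it):

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget_action(
    AccountId="111122223333",
    BudgetName="project-alpha-monthly",
    NotificationType="ACTUAL",
    ActionType="APPLY_IAM_POLICY",
    ActionThreshold={"ActionThresholdValue": 100.0, "ActionThresholdType": "PERCENTAGE"},
    Definition={
        "IamActionDefinition": {
            "PolicyArn": "arn:aws:iam::111122223333:policy/DenyProvisioning",
            "Roles": ["developer-role"],
        }
    },
    ExecutionRoleArn="arn:aws:iam::111122223333:role/BudgetsActionExecutionRole",
    ApprovalModel="AUTOMATIC",  # apply the policy without waiting for manual approval
    Subscribers=[{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
)
```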
- While AWS Budgets can indeed be set up in the AWS Management Console, simply specifying a cost threshold per account does not by itself enforce anything, so this option is not fully accurate.
- Although you can create an IAM user with necessary permissions, using an IAM role is generally a better practice. An IAM user is an entity that you create in AWS to represent the person or service that uses it to interact with AWS, while an IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. A role does not have long-term credentials associated with it like an IAM user does.
- AWS Budgets doesn’t allow for real-time monitoring; the data can be delayed up to 24 hours. The frequency of budget alert notifications is not customizable to the minute or hour; they are typically sent out daily, weekly, or when a certain threshold is crossed.
Reference:
Configuring budget actions
A company is looking for ways to incorporate its current AWS usage expenditure into its operational expense tracking dashboard. A solutions architect has been tasked with proposing a method that enables the company to fetch its current year’s cost data and project the costs for the forthcoming 12 months programmatically.
Which approach would fulfill these needs with the MINIMUM operational burden?
- Leverage the AWS Cost Explorer API to retrieve usage cost-related data, using pagination for larger data sets.
- Make use of downloadable AWS Cost Explorer report files in the .csv format to access usage cost-related data.
- Set up AWS Budgets actions to transmit usage cost data to the corporation via FTP.
- Generate AWS Budgets reports on usage cost data and dispatch the data to the corporation through SMTP.
1. Leverage the AWS Cost Explorer API to retrieve usage cost-related data, using pagination for larger data sets.
AWS Cost Explorer API provides programmatic access to AWS cost and usage information. The user can query for aggregated data such as total monthly costs or total daily usage with this API.
Also, the Cost Explorer API supports pagination for managing larger data sets, making it efficient for larger queries.
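A minimal sketch of the two API calls (the dates are illustrative):

```python
import boto3

ce = boto3.client("ce")

# Costs for the current year so far, aggregated by month.
costs = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
results = costs["ResultsByTime"]

# Larger result sets are paginated via NextPageToken.
while "NextPageToken" in costs:
    costs = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        NextPageToken=costs["NextPageToken"],
    )
    results.extend(costs["ResultsByTime"])

# Forecast for the next 12 months.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-07-01", "End": "2025-07-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])
```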
- While AWS Cost Explorer does allow you to download .csv reports of your cost data, this method would not be programmatically accessible and would involve manual steps to download and process the data.
- AWS Budgets actions allow you to set custom cost and usage budgets that trigger actions (such as turning off EC2 instances) when the budget thresholds you set are breached. However, AWS Budgets does not support transmitting data via FTP.
- AWS Budgets does not support the dispatching of data through SMTP. AWS Budgets is primarily a tool for setting up alerts on your AWS costs or usage to control your costs, rather than a tool for exporting or transmitting cost data.
Reference:
AWS Cost Explorer
An international software firm provides its clients with custom solutions and tools designed for efficient data collection and analysis on AWS. The firm intends to centrally manage and distribute a standard set of solutions and tools for its clients’ self-service needs.
Which solution would best satisfy these requirements?
- Create AWS CloudFormation stacks for the clients.
- Create AWS Service Catalog portfolios for the clients.
- Create AWS Systems Manager documents for the clients.
- Create AWS Config rules for the clients.
2. Create AWS Service Catalog portfolios for the clients.
AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for use on AWS. It allows centrally managed service portfolios, which clients can use on a self-service basis.
AWS Service Catalog provides a single location where organizations can centrally manage catalogs of IT services, which simplifies the organizational process and helps ensure compliance.
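A rough sketch of creating and sharing a portfolio programmatically (the account ID, product details, and template URL are placeholders):

```python
import boto3

sc = boto3.client("servicecatalog")

# Create a portfolio of approved tooling.
portfolio = sc.create_portfolio(
    DisplayName="Data Collection Tooling",
    ProviderName="Example Software Firm",
)
portfolio_id = portfolio["PortfolioDetail"]["Id"]

# Register a CloudFormation-based product and add it to the portfolio.
product = sc.create_product(
    Name="ingestion-pipeline",
    Owner="Example Software Firm",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/pipeline.yaml"},
    },
)
sc.associate_product_with_portfolio(
    ProductId=product["ProductViewDetail"]["ProductViewSummary"]["ProductId"],
    PortfolioId=portfolio_id,
)

# Share the portfolio so a client account can launch the products on a self-service basis.
sc.create_portfolio_share(PortfolioId=portfolio_id, AccountId="444455556666")
```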
- While AWS CloudFormation is a powerful service for infrastructure as code (IaC), it doesn’t provide a straightforward way for clients to discover and use shared tools or solutions for self-service needs. It lacks the management features and access control mechanisms necessary for this scenario.
- AWS Systems Manager documents define the actions that Systems Manager performs on your managed instances. Although Systems Manager allows the central management of resources and applications, it doesn’t provide an effective means for clients to self-discover and use shared tools or solutions.
- AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It isn’t designed to centrally manage and distribute software tools or solutions.
Reference:
AWS Service Catalog