Practice Questions - Amazon AWS Certified SAP on AWS - Specialty PAS-C01 Flashcards
(130 cards)
A company is running SAP S/4HANA on AWS. The company has deployed its current database infrastructure on a u-12tb1.112xlarge Amazon EC2 instance that uses default tenancy and SUSE Linux Enterprise Server for SAP 15 SP1. The company must scale its SAP HANA database to an instance with more RAM. An SAP solutions architect needs to migrate the database to a u-18tb1.metal High Memory instance. How can the SAP solutions architect meet this requirement?
A. Use the AWS Management Console to stop the current instance. Change the instance type to u-18tb1.metal. Start the instance.
B. Use the AWS CLI to stop the current instance. Change the instance type to u-18tb1.metal. Start the instance.
C. Use the AWS CLI to stop the current instance. Create an AMI from the current instance. Use the new AMI to launch a new u-18tb1.metal instance with host tenancy.
D. Use the AWS Management Console to stop the current instance. Create an AMI from the current instance. Use the new AMI to launch a new u-18tb1.metal instance with dedicated tenancy.
C
The correct answer is C because u-18tb1.metal High Memory instances must run on an Amazon EC2 Dedicated Host, which means host tenancy. Options A and B are incorrect because the source instance uses default tenancy, and an instance's tenancy cannot be changed from default after launch, so simply stopping the instance and changing the instance type is not a supported path to a metal High Memory instance. Option D is incorrect because it specifies dedicated tenancy, whereas u-18tb1.metal instances require host tenancy. Creating an AMI and launching a new instance from it ensures a clean migration and avoids potential issues with in-place modifications.
A company has a 48 TB SAP application that runs on premises and uses an IBM Db2 database. The company needs to migrate the application to AWS. The company has strict uptime requirements for the application with maximum downtime of 24 hours each weekend. The company has established a 1 Gbps AWS Direct Connect connection but can spare bandwidth for migration only during non-business hours or weekends. How can the company meet these requirements to migrate the application to AWS?
A. Use SAP Software Provisioning Manager to create an export of the data. Move this export to AWS during a weekend by using the Direct Connect connection. On AWS, import the data into the target SAP application. Perform the cutover.
B. Set up database replication from on premises to AWS. On the day of downtime, ensure that the replication finishes. Perform cutover to AWS.
C. Use an AWS Snowball Edge Storage Optimized device to send an initial backup to AWS. Capture incremental backups daily. When the initial backup is on AWS, perform database restore from the initial backup and keep applying incremental backups. On the day of cutover, perform the final incremental backup. Perform cutover to AWS.
D. Use AWS Application Migration Service (CloudEndure Migration) to migrate the database to AWS. On the day of cutover, switch the application to run on AWS servers.
C
C is correct because it leverages AWS Snowball Edge for the initial large data transfer, minimizing downtime associated with network transfer. Incremental backups via Direct Connect during off-peak hours ensure minimal disruption. The 24-hour weekend window accommodates the final cutover.
A is incorrect because exporting and importing a 48 TB database would likely exceed the 24-hour weekend downtime window.
B is incorrect because continuous replication requires sustained bandwidth and may not complete within the 24-hour window, risking downtime beyond the allowed limit. Also, a full cutover will still necessitate a significant downtime window for switching.
D is incorrect because it doesn’t address the large dataset size efficiently within the constrained bandwidth and downtime allowances. CloudEndure may not be suitable for such a large database migration within the given timeframe and bandwidth limitations.
A company has run SAP HANA on AWS for a few years on an Amazon EC2 X1 instance with dedicated tenancy. Because of business growth, the company plans to migrate to an EC2 High Memory instance by using a resize operation. The SAP HANA system is set up for high availability with SAP HANA system replication and clustering software. Which combination of steps should the company take before the migration? (Choose three.)
A. Ensure that the source system is running on a supported operating system version.
B. Update all references to the IP address of the source system, including the /etc/hosts file for the operating system and DNS entries, to reflect the new IP address.
C. Adjust the storage size of SAP HANA data, log, shared, and backup volumes.
D. Resize the instance through the AWS Management Console or the AWS CLI.
E. Ensure that there is a backup of the source system.
F. Update the DNS records. Check the connectivity between the SAP application servers and the new SAP HANA instance.
A, D, E
A company is migrating a 20 TB SAP S/4HANA system to AWS. The company wants continuous monitoring of the SAP S/4HANA system and wants to receive notification when CPU utilization is greater than 90%. An SAP solutions architect must implement a solution that provides this notification with the least possible effort. Which solution meets these requirements?
A. Create an AWS Lambda function that checks CPU utilization and sends the notification.
B. Use AWS CloudTrail to check the CPU utilization metric. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.
C. Use Amazon CloudWatch to set a CPU utilization alarm. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.
D. Use the Amazon CloudWatch dashboard to monitor CPU utilization. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.
C
The correct answer is C because Amazon CloudWatch is specifically designed for monitoring AWS resources and allows setting up alarms based on various metrics, including CPU utilization. It directly integrates with Amazon SNS for notifications, providing a straightforward and efficient solution.
Option A requires creating a custom Lambda function, which involves more development effort. Option B is incorrect because CloudTrail is primarily for logging API calls, not real-time monitoring of metrics. Option D only provides monitoring; it lacks the automated alerting mechanism required by the question.
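As a sketch of option C, the alarm maps onto the parameter set of CloudWatch's PutMetricAlarm API (for example, passed to boto3's `put_metric_alarm`). The alarm name, instance ID, and SNS topic ARN below are placeholders, not values from the question:

```python
# Parameters one might pass to CloudWatch's PutMetricAlarm API
# (e.g. cloudwatch.put_metric_alarm(**params) with boto3).
# Alarm name, instance ID, and topic ARN are placeholders.
params = {
    "AlarmName": "sap-hana-cpu-high",
    "Namespace": "AWS/EC2",                      # standard EC2 metric namespace
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                               # evaluate 5-minute averages
    "EvaluationPeriods": 1,
    "Threshold": 90.0,                           # alarm when CPU exceeds 90%
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:sap-alerts"],
}
print(params["MetricName"], params["Threshold"])
```

When the alarm fires, CloudWatch publishes to the SNS topic in `AlarmActions`, which delivers the notification with no custom code.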
A company runs core business processes on SAP. The company plans to migrate its SAP workloads to AWS. Which combination of prerequisite steps must the company take to receive integrated support for SAP on AWS? (Choose three.)
A. Purchase an AWS Developer Support plan or an AWS Enterprise Support plan.
B. Purchase an AWS Business Support plan or an AWS Enterprise Support plan.
C. Enable Amazon CloudWatch detailed monitoring.
D. Enable Amazon EC2 termination protection.
E. Configure and run the AWS Data Provider for SAP agent.
F. Use Reserved Instances for all Amazon EC2 instances that run SAP.
B, C, E
Integrated support for SAP on AWS requires a Business or Enterprise Support plan (B), Amazon CloudWatch detailed monitoring on the SAP instances (C), and a configured, running AWS Data Provider for SAP agent (E). Developer Support (A), termination protection (D), and Reserved Instances (F) are not support prerequisites.
A company wants to deploy SAP BW/4HANA on AWS. An SAP technical architect selects a u-6tb1.56xlarge Amazon EC2 instance to host the SAP HANA database. The SAP technical architect must design a highly available architecture that achieves the lowest possible RTO and a near-zero RPO. The solution must not affect the performance of the primary database. Which solution will meet these requirements?
A. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate AWS Regions. Set up synchronous SAP HANA system replication between the instances.
B. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate AWS Regions. Set up asynchronous SAP HANA system replication between the instances.
C. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate Availability Zones in the same AWS Region. Set up synchronous SAP HANA system replication between the instances.
D. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate Availability Zones in the same AWS Region. Set up asynchronous SAP HANA system replication between the instances.
C
The correct answer is C because achieving near-zero RPO and the lowest possible RTO requires synchronous replication, which commits data on both the primary and secondary databases before acknowledging a transaction. Deploying the instances in separate Availability Zones within the same AWS Region provides high availability and fault tolerance without the replication latency of separate Regions. Option A is incorrect because synchronous replication across Regions adds significant commit latency, degrading primary database performance and violating the requirement that the solution not affect it. Option B is incorrect because asynchronous replication cannot provide a near-zero RPO. Option D is likewise incorrect because asynchronous replication does not guarantee near-zero RPO.
A company has migrated its SAP workloads to AWS. A third-party team performs a technical evaluation and finds that the SAP workloads are not fully supported by SAP and AWS. What should the company do to receive full support from SAP and AWS?
A. Purchase an AWS Developer Support plan.
B. Turn on Amazon CloudWatch basic monitoring.
C. Ensure that the /usr/sap file system is running on local instance storage.
D. Ensure that the AWS Data Provider for SAP agent is configured and running.
D
Full SAP support for workloads on AWS requires the AWS Data Provider for SAP agent to be configured and running on the SAP instances. A Developer Support plan (A) does not meet the support-plan prerequisite, basic monitoring (B) is insufficient, and running /usr/sap on local instance store (C) is unrelated to support eligibility.
A company has deployed SAP workloads on AWS. The company’s SAP applications use an IBM Db2 database and an SAP HANA database. An SAP solutions architect needs to create a solution to back up the company’s databases. Which solution will meet these requirements MOST cost-effectively?
A. Use an Amazon Elastic Block Store (Amazon EBS) volume to store backups for the databases. Run a periodic script to move the backups to Amazon S3 and to delete the backups from the EBS volume.
B. Use AWS Backint Agent for SAP HANA to move the backups for the databases directly to Amazon S3.
C. Use an Amazon Elastic Block Store (Amazon EBS) volume to store backups for the Db2 database. Run periodic scripts to move the backups to Amazon S3 and to delete the backups from the EBS volume. For the SAP HANA database, use AWS Backint Agent for SAP HANA to move the backups directly to Amazon S3.
D. Use Amazon Elastic File System (Amazon EFS) to store backups for the databases.
C
C is the most cost-effective solution because it uses the purpose-built AWS Backint Agent for SAP HANA to write HANA backups directly to Amazon S3, while handling the Db2 database, which AWS Backint agent does not support, with EBS staging followed by transfer to cheaper S3 storage. Option A is inefficient and costly because it relies on EBS staging and scripting for both databases. Option B is incorrect because AWS Backint agent supports only SAP HANA and leaves the Db2 database without a backup path. Option D is incorrect because Amazon EFS is not designed for long-term, cost-effective backup storage; Amazon S3 is significantly cheaper for archiving backups.
A company is running its SAP S/4HANA system on AWS. The company needs to retain database backups for the previous 30 days. The company is taking full online backups by using SAP HANA Studio and is storing the backup files on General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to reduce the cost of this storage. What should the company do to achieve the LOWEST cost for the backup storage?
A. Continue to use SAP HANA Studio to back up the SAP HANA database to gp3 EBS volumes. After each backup is completed, use Linux shell scripts to move the backup to Amazon S3. Set up an S3 Lifecycle configuration to delete the backups that are older than 30 days.
B. Continue to use SAP HANA Studio to back up the SAP HANA database. Use Throughput Optimized HDD (st1) EBS volumes to store each backup. After each backup is completed, use Linux shell scripts to move the backup to Amazon S3. Set up an S3 Lifecycle configuration to delete the backups that are older than 30 days.
C. Use AWS Backup to take full online backups of the SAP HANA database.
D. Continue to use SAP HANA Studio to back up the SAP HANA database. Use AWS Backint Agent for SAP HANA to store each backup. Set up an Amazon S3 Lifecycle configuration to delete the backups that are older than 30 days.
D
The most cost-effective solution is D because AWS Backint Agent for SAP HANA writes backups directly to Amazon S3, which is significantly cheaper than EBS for storing large backups, and an S3 Lifecycle configuration handles the 30-day retention automatically. Options A and B stage backups on EBS volumes before moving them to S3, paying for both EBS capacity and the scripted transfer; option B's Throughput Optimized HDD (st1) volumes are cheaper than gp3 but still more expensive than S3 for this use case. Option C changes the backup tool but does not address the 30-day retention requirement or the storage-cost question.
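The 30-day retention in option D can be expressed as an S3 Lifecycle rule. A minimal sketch (the rule ID and key prefix are placeholders):

```json
{
  "Rules": [
    {
      "ID": "expire-hana-backups-after-30-days",
      "Filter": { "Prefix": "backint/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```

Applied to the backup bucket (for example with `aws s3api put-bucket-lifecycle-configuration`), S3 deletes matching objects 30 days after creation with no scripting.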
A company is planning to implement its production SAP HANA database with an XS Advanced runtime environment on AWS. The company must provision the necessary AWS resources and install the SAP HANA database within 1 day to meet an urgent business request. The company must implement a solution that minimizes operational effort. Which combination of steps should the company take to meet these requirements? (Choose two.)
A. Install XS Advanced runtime by using the SAP HANA database lifecycle manager (HDBLCM).
B. Provision AWS resources by using the AWS Management Console. Install SAP HANA by using the SAP HANA database lifecycle manager (HDBLCM).
C. Use AWS Launch Wizard for SAP.
D. Develop and use AWS CloudFormation templates to provision the AWS resources.
E. Evaluate and identify the certified Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volume types for SAP HANA.
A, C
AWS Launch Wizard for SAP (C) provisions the certified AWS resources and installs the SAP HANA database in an automated, guided flow, minimizing operational effort and fitting the 1-day deadline; the XS Advanced runtime is then installed with the SAP HANA database lifecycle manager, HDBLCM (A). Manual console provisioning (B), authoring CloudFormation templates (D), and manually evaluating certified instance and EBS volume types (E) all add effort that Launch Wizard already automates.
A company has implemented its ERP system on SAP S/4HANA on AWS using Enqueue Standalone Architecture (ENSA2). The system is highly available and utilizes a failover to secondary nodes in a second Availability Zone. During a planned failover test, the initial SAP licenses became invalid. What is the most likely reason for this license invalidation?
A. SAP licenses require manual reapplication after each failover.
B. The cluster configuration is incorrectly set up.
C. Two separate sets of SAP licenses are required for the ASCS instances in each Availability Zone.
D. The secondary node was stopped and restarted during recent maintenance.
C
SAP license keys are bound to the hardware key of the host where the ASCS/message server runs. After failover to the second Availability Zone, the hardware key changes, so a license installed only for the primary node becomes invalid; licenses must be installed for the ASCS instances in both Availability Zones.
A company recently implemented its SAP S/4HANA system on AWS. An SAP engineer must set up a Pacemaker cluster on Amazon EC2 instances to provide high availability. Which solution will meet this requirement?
A. Set up a fencing mechanism for the cluster by using a block device.
B. Set up an overlay IP address as a public IP address.
C. Create a route to the overlay IP address on the on-premises network.
D. Create an EC2 instance profile that has an IAM role that allows access modification of the route table.
D
The correct answer is D because overlay IP failover in a Pacemaker cluster works by updating the VPC route table: the overlay IP address is routed to the network interface of the active node, and on failover the cluster's resource agent must repoint that route to the new primary. An EC2 instance profile with an IAM role that permits route table modification gives the cluster this capability, ensuring continuous operation in case of a failure.
Option A is incorrect because while a fencing mechanism is important for high availability, using a block device isn’t the only or necessarily the best method. Other fencing mechanisms exist and are often preferred.
Option B is incorrect because using a public IP address for the overlay IP is not ideal for security and is not directly related to managing the cluster’s high availability in the context of EC2 instances.
Option C is incorrect because routing to the overlay IP on the on-premises network is not relevant to the high availability setup within the AWS EC2 environment. The focus should be on internal routing within the AWS VPC.
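To illustrate option D, the instance profile's role needs EC2 route permissions along these lines so the cluster resource agent can repoint the overlay IP route on failover. This is a sketch: the exact action list depends on the cluster resource agent in use, and `Resource` is left broad here because some Describe actions do not support resource-level scoping:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OverlayIpRouteFailover",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeRouteTables",
        "ec2:CreateRoute",
        "ec2:ReplaceRoute",
        "ec2:DeleteRoute"
      ],
      "Resource": "*"
    }
  ]
}
```

In production, the route-modifying actions should be scoped to the specific route table ARN where supported.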
A company is planning to migrate its SAP S/4HANA and SAP BW/4HANA workloads to AWS. The company is currently using a third-party solution to back up its SAP HANA database and application and wants to retire this solution after the migration. They need an AWS-based backup solution for their SAP HANA database and application that provides secure storage and cost optimization. Which solution best meets these requirements?
A. Use SAP HANA Studio, SAP HANA HDBSQL, and SAP HANA Cockpit to perform backups to local Amazon Elastic Block Store (Amazon EBS) volumes. Enable EBS volume encryption. Use AWS Backup to perform application backups with AMIs or snapshots to Amazon S3. Enable S3 encryption.
B. Use SAP HANA Cockpit to implement a backup policy and perform SAP HANA database backups to Amazon S3 with AWS Backint Agent for SAP HANA. Enable S3 encryption. Use AWS Backup with backup plans to perform application backups with AMIs or snapshots. Enable S3 encryption.
C. Use AWS Backup with backup plans to perform SAP HANA database backups to Amazon S3 with AWS Backint Agent for SAP HANA. Enable S3 encryption. Use AWS Backup with backup plans to perform application backups with AMIs or snapshots. Enable S3 encryption.
D. Use SAP HANA Studio, SAP HANA HDBSQL, and SAP HANA Cockpit to perform backups to local Amazon Elastic Block Store (Amazon EBS) volumes. Copy the backups to Amazon S3. Use AWS Backup to schedule application backups with AMIs or snapshots to Amazon S3.
B
The best solution is B because it leverages the AWS Backint Agent for SAP HANA for database backups directly to S3, which is generally more efficient and cost-effective than options involving EBS and manual copying (A and D). Option C uses AWS Backup for both database and application backups, which, while simpler, might not be as cost-optimized as using the specialized Backint Agent for the database. Option A is inefficient and less secure as it involves manual copying between storage tiers. Option D similarly involves manual steps and thus introduces more room for errors and increased operational cost. Option B provides a balance of using specialized tools where applicable (Backint Agent for the database) while still utilizing AWS Backup for streamlined application backup management.
A company is migrating its SAP environments (SAP ECC, SAP BW, SAP PI) to AWS and transforming to SAP S/4HANA, implementing SAP Fiori with an SAP Gateway hub and an internet-facing SAP Web Dispatcher. Employees worldwide will access the SAP Fiori launchpad. To allow access only to necessary URLs for SAP Fiori, how should an SAP security engineer design the security architecture?
A. Deploy the SAP Web Dispatcher in a public subnet. Allow access to only the IP addresses that employees use to access the SAP Fiori server.
B. Deploy the SAP Web Dispatcher in a private subnet. Allow access to only the ports that are required for running SAP Fiori.
C. Deploy the SAP Web Dispatcher in a public subnet. Allow access to only the paths that are required for running SAP Fiori.
D. Deploy the SAP Web Dispatcher in a private subnet. Allow access to only the SAP S/4HANA system that serves as the SAP Fiori backend system for the SAP Gateway hub.
C
The correct answer is C because it addresses both the need for public accessibility (employees worldwide) and the security requirement (limiting access to only necessary URLs). The SAP Web Dispatcher, acting as the entry point for HTTP(S) requests, needs to be publicly accessible. However, restricting access to only the required paths significantly enhances security by preventing unauthorized access to other system components or functionalities.
Option A is incorrect because managing and controlling access based on individual employee IP addresses globally is impractical and unmanageable. Option B is insufficient because limiting access to ports alone doesn’t restrict access to specific URLs; an attacker could still potentially access unauthorized URLs through allowed ports. Option D is incorrect because it doesn’t address the public accessibility requirement; the Web Dispatcher must be in a public subnet to handle requests from global employees.
A global company is planning to migrate its SAP S/4HANA workloads and SAP BW/4HANA workloads to AWS. The company’s database will not grow more than 3 TB for the next 3 years. The company’s production SAP HANA system has been designed for high availability (HA) and disaster recovery (DR) with the following configurations:
• HA: SAP HANA system replication configured with SYNC mode and LOGREPLAY operation mode across two Availability Zones with the same size SAP HANA node
• DR: SAP HANA system replication configured with ASYNC mode and LOGREPLAY operation mode across AWS Regions with the same size SAP HANA node
All the SAP HANA nodes in the current configuration are the same size. For HA, the company wants an RPO of 0 and an RTO of 5 minutes. For DR, the company wants an RPO of 0 and an RTO of 3 hours.
How should the company design this solution to meet the RPO and RTO requirements MOST cost-effectively?
A. Maintain HA with SAP HANA system replication configured with SYNC mode and table preload turned on across two Availability Zones. In each Availability Zone, use the same size SAP HANA node. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with ASYNC mode and table preload turned on. Increase the size of the DR node during a DR event.
B. Maintain HA with SAP HANA system replication configured with SYNC mode and table preload turned on across two Availability Zones. In each Availability Zone, use the same size SAP HANA node. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with ASYNC mode and table preload turned off. Increase the size of the DR node during a DR event.
C. Maintain HA with SAP HANA system replication across two Availability Zones. Decrease the size of the HA secondary node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with SYNC mode and table preload turned on. Increase the size of the HA secondary node during an HA event. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with table preload turned on. Increase the size of the DR node during a DR event.
D. Maintain HA with SAP HANA system replication across two Availability Zones. Decrease the size of the HA secondary node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with SYNC mode and table preload turned on. Increase the size of the HA secondary node during an HA event. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with table preload turned off. Increase the size of the DR node during a DR event.
B
The best option is B because it meets the RPO and RTO targets at the lowest cost. Synchronous replication with table preload across two Availability Zones keeps HA at an RPO of 0 with an RTO of about 5 minutes, while asynchronous cross-Region replication satisfies the 3-hour DR RTO. Crucially, a memory-reduced DR node cannot hold preloaded column tables, so table preload must be turned off on the DR side; during a DR event the node is resized and tables are loaded then, which fits within the 3-hour RTO. Options A and C turn table preload on for the undersized DR node, which defeats the memory reduction. Options C and D also shrink the HA secondary node, which would force a resize during failover and miss the 5-minute HA RTO.
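The sizing rule quoted in the options (at least 64 GiB of memory, or the row store size plus 20 GiB, whichever is higher) is a simple maximum; a small sketch:

```python
def min_dr_node_memory_gib(row_store_gib: float) -> float:
    """Minimum memory for a cost-reduced SAP HANA secondary/DR node,
    per the rule in the options: at least 64 GiB, or the row store
    size plus 20 GiB, whichever is higher."""
    return max(64.0, row_store_gib + 20.0)

print(min_dr_node_memory_gib(30))   # small row store -> the 64 GiB floor applies
print(min_dr_node_memory_gib(100))  # 100 GiB row store -> 120 GiB
```

The row store must fit in memory even without column-table preload, which is why the rule keys on row store size rather than total database size.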
A company uses SAP S/4HANA as its ERP solution. The company is using AWS Backint Agent for SAP HANA (AWS Backint agent) for backups. Although the configuration is correct for AWS Backint agent, the backups are failing with the following error: “NoCredentialProviders: no valid providers in chain”. What could be the reason for this error?
A. AWS Systems Manager Agent is not installed on the Amazon EC2 instance.
B. No IAM role is attached to the Amazon EC2 instance.
C. AWS Backint agent binaries are owned by a non-root user.
D. AWS Backint agent is connecting to Amazon S3 with VPC endpoints.
B
The error “NoCredentialProviders: no valid providers in chain” indicates that the AWS Backint Agent cannot find the necessary credentials to authenticate with AWS services, specifically Amazon S3 where the backups are stored. An IAM role attached to the EC2 instance provides these credentials. Without the IAM role, the agent lacks the authorization to access S3 and thus fails.
Option A is incorrect because the Systems Manager Agent is not directly involved in the authentication process for the Backint Agent’s access to S3. Option C is incorrect because while incorrect ownership of binaries could cause other problems, it’s not the direct cause of this specific credential error. Option D is incorrect because using VPC endpoints for S3 access is a security best practice and doesn’t inherently cause authentication failures; the IAM role is still required for authorization.
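A minimal sketch of the kind of S3 permissions the instance role needs for AWS Backint agent (the bucket name is a placeholder; consult the AWS Backint agent documentation for the exact required policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-hana-backup-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-hana-backup-bucket/*"
    }
  ]
}
```

Attaching a role with a policy like this to the EC2 instance puts credentials in the instance metadata provider chain, which resolves the “NoCredentialProviders” error.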
An SAP engineer is designing an SAP S/4HANA high availability architecture on Linux Amazon EC2 instances in two Availability Zones. The SAP engineer needs to create a solution to achieve high availability and consistency for the /usr/sap/trans and /usr/sap/ file systems. Which solution will meet these requirements with the MOST reliability?
A. Set up an NFS server on one of the EC2 instances.
B. Use Amazon Elastic File System (Amazon EFS).
C. Use the EC2 local instance store.
D. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach.
B
The correct answer is B because Amazon EFS provides a scalable, highly available network file system that can be mounted concurrently by EC2 instances in different Availability Zones. This ensures high availability and consistency for the /usr/sap/trans and /usr/sap/ file systems.
Option A (an NFS server on one EC2 instance) creates a single point of failure; if that instance fails, the file system becomes unavailable. Option C (EC2 local instance store) is not shared across instances and its data is lost when the instance stops, making it unsuitable for high availability. Option D (Amazon EBS Multi-Attach) provides shared block storage, but without a cluster-aware file system it cannot safely serve shared POSIX file systems such as /usr/sap/trans and /usr/sap/.
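For reference, an EFS file system is mounted over NFSv4.1; a hedged /etc/fstab sketch using the mount options from the EFS documentation (the file system ID and Region are placeholders):

```
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /usr/sap/trans nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

The same entry on every application server in both Availability Zones gives all nodes a consistent view of the shared directory.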
A company needs to migrate its SAP HANA landscape from an on-premises data center to AWS. The company’s existing SAP HANA database instance is oversized. The company must resize the database instance as part of the migration. Which combination of steps should the company take to ensure that the target Amazon EC2 instance is sized optimally for the SAP HANA database instance? (Choose two.)
A. Determine the peak memory utilization of the existing on-premises SAP HANA system.
B. Determine the average memory utilization of the existing on-premises SAP HANA system.
C. For the target system, select any SAP certified EC2 instance that provides more memory than the current average memory utilization.
D. For the target system, select the smallest SAP certified EC2 instance that provides more memory than the current peak memory utilization.
E. For the target system, select any current-generation EC2 memory-optimized instance.
A, D
A and D are correct because optimal sizing for an SAP HANA instance requires considering peak memory utilization to avoid performance issues during high-demand periods. Option D further refines this by recommending the smallest instance meeting peak requirements for cost-effectiveness.
B is incorrect because basing the size on average memory utilization risks insufficient resources during peak demand. C is incorrect because it doesn’t account for peak memory usage. E is incorrect because while using a memory-optimized instance is generally a good practice, it doesn’t guarantee optimal sizing without considering peak memory requirements.
A company plans to migrate a critical SAP S/4HANA workload from on-premises hardware to AWS. An SAP solutions architect needs to develop a solution to effectively monitor the SAP landscape on AWS for this workload. The solution must capture resource utilization and must follow a serverless approach to monitor the SAP environment. The solution also must track all the API calls that are made within the company’s AWS account. Which combination of steps should the SAP solutions architect take to meet these requirements? (Choose two.)
A. Configure Amazon CloudWatch detailed monitoring for the AWS resources in the SAP landscape. Use AWS Lambda, and create the Lambda layer “sapjco” for the SAP Java Connector. Deploy the solution with AWS Serverless Application Repository for sap-monitor.
B. Set up a Multi-AZ deployment of SAP on AWS. Use Amazon EC2 Auto Scaling to add or remove EC2 instances automatically based on the CPU utilization of the SAP instance.
C. Use AWS CloudTrail to log and retain account activity related to actions across the SAP on AWS infrastructure.
D. Use the AWS Personal Health Dashboard to get a personalized view of performance and availability of the underlying AWS resources.
E. Use AWS Trusted Advisor to optimize the AWS infrastructure and to improve security and performance.
A, C
CloudWatch detailed monitoring with the serverless sap-monitor Lambda solution (A) captures resource utilization without managing servers, and AWS CloudTrail (C) records the API calls made within the account. Auto Scaling (B) changes capacity rather than monitoring it, the Personal Health Dashboard (D) reports AWS service events, and Trusted Advisor (E) gives optimization recommendations; none of these track API activity or provide the required serverless monitoring.
A company is planning to deploy SAP HANA on AWS. The block storage that hosts the SAP HANA data volume must have at least 64,000 IOPS per volume and must have a maximum throughput of at least 500 MiB/s per volume. Which Amazon Elastic Block Store (Amazon EBS) volume meets these requirements?
A. General Purpose SSD (gp2) EBS volume
B. General Purpose SSD (gp3) EBS volume
C. Provisioned IOPS SSD (io2) EBS volume
D. Throughput Optimized HDD (st1) EBS volume
C
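The per-volume limits below are quoted from memory of the EBS documentation and should be treated as illustrative; filtering them against the question's thresholds leaves io2 as the only match:

```python
# Approximate per-volume maximums for EBS volume types (illustrative
# figures; verify against current AWS documentation before relying on them).
ebs_max = {
    "gp2": {"iops": 16_000, "throughput_mib_s": 250},
    "gp3": {"iops": 16_000, "throughput_mib_s": 1_000},
    "io2": {"iops": 64_000, "throughput_mib_s": 1_000},
    "st1": {"iops": 500,    "throughput_mib_s": 500},
}

def volumes_meeting(min_iops: int, min_throughput: int) -> list[str]:
    """Return the volume types whose per-volume maximums satisfy both limits."""
    return [name for name, spec in ebs_max.items()
            if spec["iops"] >= min_iops and spec["throughput_mib_s"] >= min_throughput]

print(volumes_meeting(64_000, 500))  # -> ['io2']
```

Both gp2 and gp3 fall short on IOPS (16,000 per volume), and st1 falls short on both limits, which is why only the Provisioned IOPS SSD (io2) volume qualifies.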
An SAP engineer is configuring AWS Backint Agent for SAP HANA (AWS Backint agent) for an SAP HANA database running on an Amazon EC2 instance. After configuration, backups fail. The AWS Backint agent logs contain numerous “AccessDenied” messages. Which two actions should the SAP engineer take to resolve this issue?
A. Update the EC2 role permissions to allow S3 bucket access.
B. Verify that the configuration file has the correct formatting of the S3BucketOwnerAccountID.
C. Install AWS Systems Manager Agent (SSM Agent) correctly by using the sudo command.
D. Install the correct version of Python for AWS Backint agent.
E. Add the execute permission to the AWS Backint agent binary.
A, B
“AccessDenied”, unlike “NoCredentialProviders”, means credentials were found but the request was rejected: the EC2 role must grant access to the target S3 bucket (A), and an incorrectly formatted S3BucketOwnerAccountID in the agent's configuration file also causes S3 to deny the request (B). SSM Agent installation (C), the Python version (D), and execute permissions on the binary (E) would produce different failures, not S3 “AccessDenied” errors.
A company is running its production SAP HANA system on an Amazon EC2 instance running SUSE Linux Enterprise Server 12. The operating system patch version is out of date, and SAP has identified critical security vulnerabilities. SUSE has published a critical patch update requiring a system restart. The company must apply this patch and many prerequisites. Which solution minimizes system downtime?
A. Use the SUSE Linux Enterprise Server patching update process and SUSE tools to apply the required patches to the existing instance.
B. Use AWS Systems Manager Patch Manager to automatically apply the patches to the existing instance.
C. Use AWS Launch Wizard for SAP to provision a second SAP HANA instance with an AMI that contains the required patches. Use SAP HANA system replication to copy the data from the original SAP HANA instance to the newly launched SAP HANA instance. Perform SAP HANA system replication takeover.
D. Use AWS Launch Wizard for SAP to provision a second SAP HANA instance with an AMI that contains the required patches. Use SAP HANA native backup and restore to copy the data from the original SAP HANA instance to the newly launched SAP HANA instance.
C
The correct answer is C because it leverages SAP HANA system replication for near-zero downtime. Creating a new instance with the patches already applied, and then replicating the data using system replication, allows for a quick switchover with minimal interruption.
Options A and B require patching the running instance and restarting it, resulting in significant downtime. Option D also uses a new patched instance, but backup and restore is a much slower process than system replication, leading to more downtime.
A company recently implemented an architecture in which all the systems and components of the company’s SAP environment are hosted on AWS. Front-end users connect from the corporate data center. SAP application servers and database servers are hosted in a private subnet. The company has the following requirements:
• Ensure that the instances in the private subnet can connect to the internet and other AWS services.
• Prevent instances from receiving inbound traffic that is initiated by someone on the internet.
• For SAP support, allow a remote connection between the company’s network and SAP. Ensure that access is available to the production environment as needed.
Which solution will meet these requirements?
A. Use a NAT gateway to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in a public subnet. Assign a public IP address that is reachable from the internet.
B. Use NAT instances to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in the private subnet with an Elastic IP address that is reachable from the internet.
C. Use a bastion host to ensure connectivity between the instances in the private subnet and other AWS services. Set up an AWS Direct Connect connection between the SAP support network and the AWS Region where the architecture is implemented.
D. Use an internet gateway to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in a public subnet. Assign a public IP address that is reachable from the internet.
A
A NAT gateway gives the instances in the private subnet outbound access to the internet and other AWS services while blocking inbound connections initiated from the internet, satisfying the first two requirements. Deploying SAProuter in a public subnet with a public IP address provides the inbound remote connection that SAP support needs. NAT instances (B) add operational overhead, an Elastic IP on a private-subnet SAProuter (B) is unreachable, a bastion host (C) does not provide general outbound connectivity, and an internet gateway alone (D) does not give private-subnet instances outbound access.
A company plans to migrate its SAP systems from on-premises to AWS to reduce infrastructure costs and is willing to commit for 3 years. They require maximum flexibility in selecting Amazon EC2 instances across AWS Regions, instance families, and instance sizes. Which purchasing option offers the lowest cost while meeting these requirements?
A. Spot Instances
B. 3-year Compute Savings Plan
C. 3-year EC2 Instance Savings Plan
D. 3-year Reserved Instances
B
The correct answer is B, the 3-year Compute Savings Plan. Compute Savings Plans allow changes across instance families, sizes, operating systems, and AWS Regions without losing the committed discount, which matches the company's flexibility requirement. Spot Instances (A) can be interrupted at any time and are unsuitable for production SAP systems. EC2 Instance Savings Plans (C) and Reserved Instances (D) offer discounts but lock the commitment to a specific instance family and Region, lacking the required flexibility.