Practice Questions - Amazon AWS Certified SAP on AWS - Specialty PAS-C01 Flashcards

1
Q

A company is running SAP S/4HANA on AWS. The company has deployed its current database infrastructure on a u-12tb1.112xlarge Amazon EC2 instance that uses default tenancy and SUSE Linux Enterprise Server for SAP 15 SP1. The company must scale its SAP HANA database to an instance with more RAM. An SAP solutions architect needs to migrate the database to a u-18tb1.metal High Memory instance. How can the SAP solutions architect meet this requirement?

A. Use the AWS Management Console to stop the current instance. Change the instance type to u-18tb1.metal. Start the instance.
B. Use the AWS CLI to stop the current instance. Change the instance type to u-18tb1.metal. Start the instance.
C. Use the AWS CLI to stop the current instance. Create an AMI from the current instance. Use the new AMI to launch a new u-18tb1.metal instance with host tenancy.
D. Use the AWS Management Console to stop the current instance. Create an AMI from the current instance. Use the new AMI to launch a new u-18tb1.metal instance with dedicated tenancy.

A

C

The correct answer is C because u-18tb1.metal High Memory instances run only on Dedicated Hosts, which means host tenancy. Options A and B are incorrect because a stop-and-resize operation cannot change an instance's tenancy from default to host; the supported path is to create an AMI and launch a new instance on a Dedicated Host. Option D is incorrect because it specifies dedicated tenancy, whereas u-18tb1.metal instances require host tenancy. Creating an AMI also ensures a clean migration and avoids potential issues with in-place modification.
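
As a sketch of this migration path, a minimal boto3 example (instance ID, Region, Availability Zone, and AMI name are hypothetical placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed Region

    # Create an AMI from the stopped source instance (hypothetical ID).
    image = ec2.create_image(InstanceId="i-0123456789abcdef0",
                             Name="s4hana-db-pre-migration")
    ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

    # Allocate a Dedicated Host: u-18tb1.metal runs only with host tenancy.
    host = ec2.allocate_hosts(InstanceType="u-18tb1.metal",
                              AvailabilityZone="us-east-1a", Quantity=1)

    # Launch the new High Memory instance from the AMI onto that host.
    ec2.run_instances(ImageId=image["ImageId"], InstanceType="u-18tb1.metal",
                      MinCount=1, MaxCount=1,
                      Placement={"Tenancy": "host", "HostId": host["HostIds"][0]})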

2
Q

A company has a 48 TB SAP application that runs on premises and uses an IBM Db2 database. The company needs to migrate the application to AWS. The company has strict uptime requirements for the application with maximum downtime of 24 hours each weekend. The company has established a 1 Gbps AWS Direct Connect connection but can spare bandwidth for migration only during non-business hours or weekends. How can the company meet these requirements to migrate the application to AWS?
A. Use SAP Software Provisioning Manager to create an export of the data. Move this export to AWS during a weekend by using the Direct Connect connection. On AWS, import the data into the target SAP application. Perform the cutover.
B. Set up database replication from on premises to AWS. On the day of downtime, ensure that the replication finishes. Perform cutover to AWS.
C. Use an AWS Snowball Edge Storage Optimized device to send an initial backup to AWS. Capture incremental backups daily. When the initial backup is on AWS, perform database restore from the initial backup and keep applying incremental backups. On the day of cutover, perform the final incremental backup. Perform cutover to AWS.
D. Use AWS Application Migration Service (CloudEndure Migration) to migrate the database to AWS. On the day of cutover, switch the application to run on AWS servers.

A

C

C is correct because it leverages AWS Snowball Edge for the initial large data transfer, minimizing downtime associated with network transfer. Incremental backups via Direct Connect during off-peak hours ensure minimal disruption. The 24-hour weekend window accommodates the final cutover.

A is incorrect because exporting and importing a 48 TB database would likely exceed the 24-hour weekend downtime window.

B is incorrect because the initial synchronization of 48 TB over a 1 Gbps link would require roughly 4.5 days of continuous line-rate transfer; with bandwidth available only during off-hours, replication would take weeks and might still not be caught up by the planned downtime window.

D is incorrect because AWS Application Migration Service relies on continuous block-level replication over the network, which the constrained bandwidth cannot sustain for a dataset of this size within the migration timeframe.

3
Q

A company has run SAP HANA on AWS for a few years on an Amazon EC2 X1 instance with dedicated tenancy. Because of business growth, the company plans to migrate to an EC2 High Memory instance by using a resize operation. The SAP HANA system is set up for high availability with SAP HANA system replication and clustering software. Which combination of steps should the company take before the migration? (Choose three.)
A. Ensure that the source system is running on a supported operating system version.
B. Update all references to the IP address of the source system, including the /etc/hosts file for the operating system and DNS entries, to reflect the new IP address.
C. Adjust the storage size of SAP HANA data, log, shared, and backup volumes.
D. Resize the instance through the AWS Management Console or the AWS CLI.
E. Ensure that there is a backup of the source system.
F. Update the DNS records. Check the connectivity between the SAP application servers and the new SAP HANA instance.

A

A, D, E

4
Q

A company is migrating a 20 TB SAP S/4HANA system to AWS. The company wants continuous monitoring of the SAP S/4HANA system and wants to receive notification when CPU utilization is greater than 90%. An SAP solutions architect must implement a solution that provides this notification with the least possible effort. Which solution meets these requirements?
A. Create an AWS Lambda function that checks CPU utilization and sends the notification.
B. Use AWS CloudTrail to check the CPU utilization metric. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.
C. Use Amazon CloudWatch to set a CPU utilization alarm. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.
D. Use the Amazon CloudWatch dashboard to monitor CPU utilization. Set up an Amazon Simple Notification Service (Amazon SNS) topic to send the notification.

A

C

The correct answer is C because Amazon CloudWatch is specifically designed for monitoring AWS resources and allows setting up alarms based on various metrics, including CPU utilization. It directly integrates with Amazon SNS for notifications, providing a straightforward and efficient solution.

Option A requires creating a custom Lambda function, which involves more development effort. Option B is incorrect because CloudTrail is primarily for logging API calls, not real-time monitoring of metrics. Option D only provides monitoring; it lacks the automated alerting mechanism required by the question.
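
For illustration, a minimal boto3 sketch of this alarm and notification setup (topic name, email address, and instance ID are assumed):

    import boto3

    sns = boto3.client("sns")
    cloudwatch = boto3.client("cloudwatch")

    # SNS topic plus an email subscription to receive the notification.
    topic = sns.create_topic(Name="sap-cpu-alerts")
    sns.subscribe(TopicArn=topic["TopicArn"], Protocol="email",
                  Endpoint="basis-team@example.com")

    # Alarm when the 5-minute average CPUUtilization exceeds 90%.
    cloudwatch.put_metric_alarm(
        AlarmName="s4hana-cpu-above-90",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=90.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic["TopicArn"]],
    )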

5
Q

A company runs core business processes on SAP. The company plans to migrate its SAP workloads to AWS. Which combination of prerequisite steps must the company take to receive integrated support for SAP on AWS? (Choose three.)
A. Purchase an AWS Developer Support plan or an AWS Enterprise Support plan.
B. Purchase an AWS Business Support plan or an AWS Enterprise Support plan.
C. Enable Amazon CloudWatch detailed monitoring.
D. Enable Amazon EC2 termination protection.
E. Configure and run the AWS Data Provider for SAP agent.
F. Use Reserved Instances for all Amazon EC2 instances that run SAP.

A

BCE

6
Q

A company wants to deploy SAP BW/4HANA on AWS. An SAP technical architect selects a u-6tb1.56xlarge Amazon EC2 instance to host the SAP HANA database. The SAP technical architect must design a highly available architecture that achieves the lowest possible RTO and a near-zero RPO. The solution must not affect the performance of the primary database. Which solution will meet these requirements?
A. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate AWS Regions. Set up synchronous SAP HANA system replication between the instances.
B. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate AWS Regions. Set up asynchronous SAP HANA system replication between the instances.
C. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate Availability Zones in the same AWS Region. Set up synchronous SAP HANA system replication between the instances.
D. Deploy two u-6tb1.56xlarge EC2 instances for SAP HANA in separate Availability Zones in the same AWS Region. Set up asynchronous SAP HANA system replication between the instances.

A

C

The correct answer is C. To achieve near-zero RPO and the lowest possible RTO, synchronous replication is necessary: a transaction completes only after the data is written to both the primary and the secondary database. Deploying the instances in separate Availability Zones within the same AWS Region provides high availability and fault tolerance without the latency of cross-Region links. Options A and B are incorrect because separate Regions introduce significant latency; with synchronous replication (option A), that latency directly degrades primary database performance, violating the requirement, and with asynchronous replication (option B) the RPO is no longer near zero. Option D is incorrect because asynchronous replication does not guarantee near-zero RPO.

7
Q

A company has migrated its SAP workloads to AWS. A third-party team performs a technical evaluation and finds that the SAP workloads are not fully supported by SAP and AWS. What should the company do to receive full support from SAP and AWS?
A. Purchase an AWS Developer Support plan.
B. Turn on Amazon CloudWatch basic monitoring.
C. Ensure that the /usr/sap file system is running on local instance storage.
D. Ensure that the AWS Data Provider for SAP agent is configured and running.

A

D

8
Q

A company has deployed SAP workloads on AWS. The company’s SAP applications use an IBM Db2 database and an SAP HANA database. An SAP solutions architect needs to create a solution to back up the company’s databases. Which solution will meet these requirements MOST cost-effectively?
A. Use an Amazon Elastic Block Store (Amazon EBS) volume to store backups for the databases. Run a periodic script to move the backups to Amazon S3 and to delete the backups from the EBS volume.
B. Use AWS Backint Agent for SAP HANA to move the backups for the databases directly to Amazon S3.
C. Use an Amazon Elastic Block Store (Amazon EBS) volume to store backups for the Db2 database. Run periodic scripts to move the backups to Amazon S3 and to delete the backups from the EBS volume. For the SAP HANA database, use AWS Backint Agent for SAP HANA to move the backups directly to Amazon S3.
D. Use Amazon Elastic File System (Amazon EFS) to store backups for the databases.

A

C

C is the most cost-effective solution because it uses the purpose-built AWS Backint Agent for SAP HANA to move HANA backups directly to S3, while handling the Db2 database, which AWS Backint Agent does not support, with EBS staging followed by transfer to cheaper S3 storage. Option A is inefficient and costly because it keeps backups for both databases on EBS before moving them. Option B addresses only the SAP HANA database and ignores the Db2 database. Option D is incorrect because Amazon EFS is not designed for long-term, cost-effective backup storage; Amazon S3 is significantly cheaper for archiving backups.

9
Q

A company is running its SAP S/4HANA system on AWS. The company needs to retain database backups for the previous 30 days. The company is taking full online backups by using SAP HANA Studio and is storing the backup files on General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to reduce the cost of this storage. What should the company do to achieve the LOWEST cost for the backup storage?
A. Continue to use SAP HANA Studio to back up the SAP HANA database to gp3 EBS volumes. After each backup is completed, use Linux shell scripts to move the backup to Amazon S3. Set up an S3 Lifecycle configuration to delete the backups that are older than 30 days.
B. Continue to use SAP HANA Studio to back up the SAP HANA database. Use Throughput Optimized HDD (st1) EBS volumes to store each backup. After each backup is completed, use Linux shell scripts to move the backup to Amazon S3. Set up an S3 Lifecycle configuration to delete the backups that are older than 30 days.
C. Use AWS Backup to take full online backups of the SAP HANA database.
D. Continue to use SAP HANA Studio to back up the SAP HANA database. Use AWS Backint Agent for SAP HANA to store each backup. Set up an Amazon S3 Lifecycle configuration to delete the backups that are older than 30 days.

A

D

The most cost-effective solution is D because AWS Backint Agent for SAP HANA stores backups directly in Amazon S3, which is significantly cheaper than EBS volumes for holding large backup sets, and the S3 Lifecycle rule enforces the 30-day retention automatically. Options A and B stage the backups on EBS first and then transfer them to S3, incurring extra cost for the interim EBS storage; the st1 volumes in option B, while cheaper than gp3, are still more expensive than writing directly to S3. Option C, while fully managed, does not specifically address the 30-day retention requirement and does not state that the backups are stored in S3.
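
A sketch of such a lifecycle rule with boto3 (bucket name and prefix are assumed and would match the Backint agent configuration):

    import boto3

    s3 = boto3.client("s3")

    # Expire Backint backup objects 30 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-hana-backups",
        LifecycleConfiguration={"Rules": [{
            "ID": "expire-backups-after-30-days",
            "Filter": {"Prefix": "backint/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }]},
    )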

10
Q

A company is planning to implement its production SAP HANA database with an XS Advanced runtime environment on AWS. The company must provision the necessary AWS resources and install the SAP HANA database within 1 day to meet an urgent business request. The company must implement a solution that minimizes operational effort. Which combination of steps should the company take to meet these requirements? (Choose two.)
A. Install XS Advanced runtime by using the SAP HANA database lifecycle manager (HDBLCM).
B. Provision AWS resources by using the AWS Management Console. Install SAP HANA by using the SAP HANA database lifecycle manager (HDBLCM).
C. Use AWS Launch Wizard for SAP.
D. Develop and use AWS CloudFormation templates to provision the AWS resources.
E. Evaluate and identify the certified Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volume types for SAP HANA.

A

AC

11
Q

A company has implemented its ERP system on SAP S/4HANA on AWS using Enqueue Standalone Architecture (ENSA2). The system is highly available and utilizes a failover to secondary nodes in a second Availability Zone. During a planned failover test, the initial SAP licenses became invalid. What is the most likely reason for this license invalidation?

A. SAP licenses require manual reapplication after each failover.
B. The cluster configuration is incorrectly set up.
C. Two separate sets of SAP licenses are required for the ASCS instances in each Availability Zone.
D. The secondary node was stopped and restarted during recent maintenance.

A

C

12
Q

A company recently implemented its SAP S/4HANA system on AWS. An SAP engineer must set up a Pacemaker cluster on Amazon EC2 instances to provide high availability. Which solution will meet this requirement?
A. Set up a fencing mechanism for the cluster by using a block device.
B. Set up an overlay IP address as a public IP address.
C. Create a route to the overlay IP address on the on-premises network.
D. Create an EC2 instance profile that has an IAM role that allows modification of the route table.

A

D

The correct answer is D because managing the route table is crucial for high availability in a Pacemaker cluster. The IAM role allows the Pacemaker cluster to dynamically adjust routing as needed to switch between active and passive nodes, ensuring continuous operation in case of a failure.

Option A is incorrect because while a fencing mechanism is important for high availability, using a block device isn’t the only or necessarily the best method. Other fencing mechanisms exist and are often preferred.

Option B is incorrect because an overlay IP address is, by definition, an address outside the VPC CIDR that is reached through route table entries; it cannot be assigned as a public IP address, and public exposure would not help cluster failover.

Option C is incorrect because routing to the overlay IP on the on-premises network is not relevant to the high availability setup within the AWS EC2 environment. The focus should be on internal routing within the AWS VPC.
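
As an illustration of option D, a minimal policy grant with boto3 (role and policy names are hypothetical; ec2:DescribeRouteTables does not support resource-level scoping, so production policies usually tighten access with condition keys instead):

    import json
    import boto3

    iam = boto3.client("iam")

    # Let the cluster nodes repoint the overlay IP route during failover.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:ReplaceRoute", "ec2:CreateRoute",
                       "ec2:DescribeRouteTables"],
            "Resource": "*",
        }],
    }
    iam.put_role_policy(RoleName="sap-pacemaker-node-role",
                        PolicyName="overlay-ip-route-failover",
                        PolicyDocument=json.dumps(policy))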

13
Q

A company is planning to migrate its SAP S/4HANA and SAP BW/4HANA workloads to AWS. The company is currently using a third-party solution to back up its SAP HANA database and application and wants to retire this solution after the migration. They need an AWS-based backup solution for their SAP HANA database and application that provides secure storage and cost optimization. Which solution best meets these requirements?

A. Use SAP HANA Studio, SAP HANA HDBSQL, and SAP HANA Cockpit to perform backups to local Amazon Elastic Block Store (Amazon EBS) volumes. Enable EBS volume encryption. Use AWS Backup to perform application backups with AMIs or snapshots to Amazon S3. Enable S3 encryption.

B. Use SAP HANA Cockpit to implement a backup policy and perform SAP HANA database backups to Amazon S3 with AWS Backint Agent for SAP HANA. Enable S3 encryption. Use AWS Backup with backup plans to perform application backups with AMIs or snapshots. Enable S3 encryption.

C. Use AWS Backup with backup plans to perform SAP HANA database backups to Amazon S3 with AWS Backint Agent for SAP HANA. Enable S3 encryption. Use AWS Backup with backup plans to perform application backups with AMIs or snapshots. Enable S3 encryption.

D. Use SAP HANA Studio, SAP HANA HDBSQL, and SAP HANA Cockpit to perform backups to local Amazon Elastic Block Store (Amazon EBS) volumes. Copy the backups to Amazon S3. Use AWS Backup to schedule application backups with AMIs or snapshots to Amazon S3.

A

B

The best solution is B because it leverages the AWS Backint Agent for SAP HANA for database backups directly to S3, which is generally more efficient and cost-effective than options involving EBS and manual copying (A and D). Option C uses AWS Backup for both database and application backups, which, while simpler, might not be as cost-optimized as using the specialized Backint Agent for the database. Option A is inefficient and less secure as it involves manual copying between storage tiers. Option D similarly involves manual steps and thus introduces more room for errors and increased operational cost. Option B provides a balance of using specialized tools where applicable (Backint Agent for the database) while still utilizing AWS Backup for streamlined application backup management.

14
Q

A company is migrating its SAP environments (SAP ECC, SAP BW, SAP PI) to AWS and transforming to SAP S/4HANA, implementing SAP Fiori with an SAP Gateway hub and an internet-facing SAP Web Dispatcher. Employees worldwide will access the SAP Fiori launchpad. To allow access only to necessary URLs for SAP Fiori, how should an SAP security engineer design the security architecture?

A. Deploy the SAP Web Dispatcher in a public subnet. Allow access to only the IP addresses that employees use to access the SAP Fiori server.

B. Deploy the SAP Web Dispatcher in a private subnet. Allow access to only the ports that are required for running SAP Fiori.

C. Deploy the SAP Web Dispatcher in a public subnet. Allow access to only the paths that are required for running SAP Fiori.

D. Deploy the SAP Web Dispatcher in a private subnet. Allow access to only the SAP S/4HANA system that serves as the SAP Fiori backend system for the SAP Gateway hub.

A

C

The correct answer is C because it addresses both the need for public accessibility (employees worldwide) and the security requirement (limiting access to only necessary URLs). The SAP Web Dispatcher, acting as the entry point for HTTP(S) requests, needs to be publicly accessible. However, restricting access to only the required paths significantly enhances security by preventing unauthorized access to other system components or functionalities.

Option A is incorrect because managing and controlling access based on individual employee IP addresses globally is impractical and unmanageable. Option B is insufficient because limiting access to ports alone doesn’t restrict access to specific URLs; an attacker could still potentially access unauthorized URLs through allowed ports. Option D is incorrect because it doesn’t address the public accessibility requirement; the Web Dispatcher must be in a public subnet to handle requests from global employees.

15
Q

A global company is planning to migrate its SAP S/4HANA workloads and SAP BW/4HANA workloads to AWS. The company’s database will not grow more than 3 TB for the next 3 years. The company’s production SAP HANA system has been designed for high availability (HA) and disaster recovery (DR) with the following configurations:
• HA: SAP HANA system replication configured with SYNC mode and LOGREPLAY operation mode across two Availability Zones with the same size SAP HANA node
• DR: SAP HANA system replication configured with ASYNC mode and LOGREPLAY operation mode across AWS Regions with the same size SAP HANA node
All the SAP HANA nodes in the current configuration are the same size. For HA, the company wants an RPO of 0 and an RTO of 5 minutes. For DR, the company wants an RPO of 0 and an RTO of 3 hours.
How should the company design this solution to meet the RPO and RTO requirements MOST cost-effectively?
A. Maintain HA with SAP HANA system replication configured with SYNC mode and table preload turned on across two Availability Zones. In each Availability Zone, use the same size SAP HANA node. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with ASYNC mode and table preload turned on. Increase the size of the DR node during a DR event.
B. Maintain HA with SAP HANA system replication configured with SYNC mode and table preload turned on across two Availability Zones. In each Availability Zone, use the same size SAP HANA node. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with ASYNC mode and table preload turned off. Increase the size of the DR node during a DR event.
C. Maintain HA with SAP HANA system replication across two Availability Zones. Decrease the size of the HA secondary node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with SYNC mode and table preload turned on. Increase the size of the HA secondary node during an HA event. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with table preload turned on. Increase the size of the DR node during a DR event.
D. Maintain HA with SAP HANA system replication across two Availability Zones. Decrease the size of the HA secondary node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with SYNC mode and table preload turned on. Increase the size of the HA secondary node during an HA event. Decrease the size of the DR node to at least 64 GiB of memory or the row store size plus 20 GiB, whichever is higher, with table preload turned off. Increase the size of the DR node during a DR event.

A

B

The best option is B because it meets the RPO and RTO targets at the lowest cost. For HA, synchronous replication to a full-size secondary with table preload turned on delivers an RPO of 0 and an RTO of about 5 minutes. For DR, the 3-hour RTO leaves time to resize the DR node and load tables after a takeover, so the DR node can run minimally sized with table preload turned off. Table preload requires the secondary to hold the column store in memory, which is why preload must be off for a downsized node: option A keeps preload on for the downsized DR node, which defeats the downsizing. Options C and D downsize the HA secondary, which cannot meet the 5-minute HA RTO because the node would have to be resized during a failover.

16
Q

A company uses SAP S/4HANA as its ERP solution. The company is using AWS Backint Agent for SAP HANA (AWS Backint agent) for backups. Although the configuration is correct for AWS Backint agent, the backups are failing with the following error: NoCredentialProviders: no valid providers in chain. What could be the reason for this error?

A. AWS Systems Manager Agent is not installed on the Amazon EC2 instance.
B. No IAM role is attached to the Amazon EC2 instance.
C. AWS Backint agent binaries are owned by a non-root user.
D. AWS Backint agent is connecting to Amazon S3 with VPC endpoints.

A

B. No IAM role is attached to the Amazon EC2 instance.

The error “NoCredentialProviders: no valid providers in chain” indicates that the AWS Backint Agent cannot find the necessary credentials to authenticate with AWS services, specifically Amazon S3 where the backups are stored. An IAM role attached to the EC2 instance provides these credentials. Without the IAM role, the agent lacks the authorization to access S3 and thus fails.

Option A is incorrect because the Systems Manager Agent is not directly involved in the authentication process for the Backint Agent’s access to S3. Option C is incorrect because while incorrect ownership of binaries could cause other problems, it’s not the direct cause of this specific credential error. Option D is incorrect because using VPC endpoints for S3 access is a security best practice and doesn’t inherently cause authentication failures; the IAM role is still required for authorization.
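
A quick fix-and-verify sketch with boto3 (profile and instance names are assumed):

    import boto3

    # Attach an instance profile (the wrapper around the IAM role) to the
    # running instance.
    ec2 = boto3.client("ec2")
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={"Name": "sap-backint-profile"},
        InstanceId="i-0123456789abcdef0",
    )

    # Run on the instance itself: this call succeeds only when a credential
    # provider is available, the same chain AWS Backint agent walks.
    print(boto3.client("sts").get_caller_identity()["Arn"])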

17
Q

An SAP engineer is designing an SAP S/4HANA high availability architecture on Linux Amazon EC2 instances in two Availability Zones. The SAP engineer needs to create a solution to achieve high availability and consistency for /usr/sap/trans and /usr/sap/ file systems. Which solution will meet these requirements with the MOST reliability?
A. Set up an NFS server on one of the EC2 instances.
B. Use Amazon Elastic File System (Amazon EFS).
C. Use the EC2 local instance store.
D. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach.

A

B

The correct answer is B because Amazon EFS provides a scalable and highly available network file system that can be shared across multiple EC2 instances in different Availability Zones. This ensures high availability and consistency for the /usr/sap/trans and /usr/sap/ file systems.

Option A (NFS server on one EC2 instance) creates a single point of failure; if that instance fails, the file system becomes unavailable. Option C (EC2 local instance store) is not shared across instances and is only available to the instance it resides on, making it unsuitable for high availability. Option D (Amazon EBS Multi-Attach) is designed for block storage, not for the shared file system requirements of /usr/sap/trans and /usr/sap/.
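
A minimal boto3 sketch of the EFS setup (subnet and security group IDs are hypothetical; in practice, wait for the file system to become available before creating mount targets):

    import boto3

    efs = boto3.client("efs")

    # Regional EFS file system for /usr/sap/trans.
    fs = efs.create_file_system(CreationToken="usr-sap-trans",
                                PerformanceMode="generalPurpose",
                                Encrypted=True)

    # One mount target per Availability Zone; both SAP nodes then
    # NFS-mount the same file system.
    for subnet in ["subnet-0aaa1111", "subnet-0bbb2222"]:
        efs.create_mount_target(FileSystemId=fs["FileSystemId"],
                                SubnetId=subnet,
                                SecurityGroups=["sg-0ccc3333"])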

18
Q

A company needs to migrate its SAP HANA landscape from an on-premises data center to AWS. The company’s existing SAP HANA database instance is oversized. The company must resize the database instance as part of the migration. Which combination of steps should the company take to ensure that the target Amazon EC2 instance is sized optimally for the SAP HANA database instance? (Choose two.)
A. Determine the peak memory utilization of the existing on-premises SAP HANA system.
B. Determine the average memory utilization of the existing on-premises SAP HANA system.
C. For the target system, select any SAP certified EC2 instance that provides more memory than the current average memory utilization.
D. For the target system, select the smallest SAP certified EC2 instance that provides more memory than the current peak memory utilization.
E. For the target system, select any current-generation EC2 memory-optimized instance.

A

A, D

A and D are correct because optimal sizing for an SAP HANA instance requires considering peak memory utilization to avoid performance issues during high-demand periods. Option D further refines this by recommending the smallest instance meeting peak requirements for cost-effectiveness.

B is incorrect because basing the size on average memory utilization risks insufficient resources during peak demand. C is incorrect because it doesn’t account for peak memory usage. E is incorrect because while using a memory-optimized instance is generally a good practice, it doesn’t guarantee optimal sizing without considering peak memory requirements.
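
The selection rule can be expressed in a few lines of Python (the candidate list and the 430 GiB peak value are illustrative; the memory figures are the published sizes for these instance types):

    # Pick the smallest certified instance whose memory exceeds the
    # measured peak, not the average.
    certified = {"r5.8xlarge": 256, "r5.16xlarge": 512,
                 "r5.24xlarge": 768, "x1e.32xlarge": 3904}  # memory in GiB

    peak_gib = 430  # assumed peak utilization of the on-premises system

    choice = min((name for name, mem in certified.items() if mem > peak_gib),
                 key=lambda name: certified[name])
    print(choice)  # -> r5.16xlarge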

19
Q

A company plans to migrate a critical SAP S/4HANA workload from on-premises hardware to AWS. An SAP solutions architect needs to develop a solution to effectively monitor the SAP landscape on AWS for this workload. The solution must capture resource utilization and must follow a serverless approach to monitor the SAP environment. The solution also must track all the API calls that are made within the company’s AWS account. Which combination of steps should the SAP solutions architect take to meet these requirements? (Choose two.)
A. Configure Amazon CloudWatch detailed monitoring for the AWS resources in the SAP landscape. Use AWS Lambda, and create the Lambda layer “sapjco” for the SAP Java Connector. Deploy the solution with AWS Serverless Application Repository for sap-monitor.
B. Set up a Multi-AZ deployment of SAP on AWS. Use Amazon EC2 Auto Scaling to add or remove EC2 instances automatically based on the CPU utilization of the SAP instance.
C. Use AWS CloudTrail to log and retain account activity related to actions across the SAP on AWS infrastructure.
D. Use the AWS Personal Health Dashboard to get a personalized view of performance and availability of the underlying AWS resources.
E. Use AWS Trusted Advisor to optimize the AWS infrastructure and to improve security and performance.

A

A, C

20
Q

A company is planning to deploy SAP HANA on AWS. The block storage that hosts the SAP HANA data volume must have at least 64,000 IOPS per volume and must have a maximum throughput of at least 500 MiB/s per volume. Which Amazon Elastic Block Store (Amazon EBS) volume meets these requirements?
A. General Purpose SSD (gp2) EBS volume
B. General Purpose SSD (gp3) EBS volume
C. Provisioned IOPS SSD (io2) EBS volume
D. Throughput Optimized HDD (st1) EBS volume

A

C
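
For context: io2 supports up to 64,000 IOPS per volume on Nitro-based instances with up to 1,000 MiB/s of throughput, while gp2 and gp3 top out at 16,000 IOPS and st1 peaks at 500 MiB/s with far lower IOPS. A boto3 sketch of provisioning such a volume (Availability Zone and size are assumed):

    import boto3

    ec2 = boto3.client("ec2")

    # io2 volume for /hana/data provisioned at the required 64,000 IOPS.
    ec2.create_volume(AvailabilityZone="us-east-1a", Size=2400,
                      VolumeType="io2", Iops=64000)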

21
Q

An SAP engineer is configuring AWS Backint Agent for SAP HANA (AWS Backint agent) for an SAP HANA database running on an Amazon EC2 instance. After configuration, backups fail. The AWS Backint agent logs contain numerous “AccessDenied” messages. Which two actions should the SAP engineer take to resolve this issue?

A. Update the EC2 role permissions to allow S3 bucket access.
B. Verify that the configuration file has the correct formatting of the S3BucketOwnerAccountID.
C. Install AWS Systems Manager Agent (SSM Agent) correctly by using the sudo command.
D. Install the correct version of Python for AWS Backint agent.
E. Add the execute permission to the AWS Backint agent binary.

A

A, B

22
Q

A company is running its production SAP HANA system on an Amazon EC2 instance running SUSE Linux Enterprise Server 12. The operating system patch version is out of date, and SAP has identified critical security vulnerabilities. SUSE has published a critical patch update requiring a system restart. The company must apply this patch and many prerequisites. Which solution minimizes system downtime?

A. Use the SUSE Linux Enterprise Server patching update process and SUSE tools to apply the required patches to the existing instance.
B. Use AWS Systems Manager Patch Manager to automatically apply the patches to the existing instance.
C. Use AWS Launch Wizard for SAP to provision a second SAP HANA instance with an AMI that contains the required patches. Use SAP HANA system replication to copy the data from the original SAP HANA instance to the newly launched SAP HANA instance. Perform SAP HANA system replication takeover.
D. Use AWS Launch Wizard for SAP to provision a second SAP HANA instance with an AMI that contains the required patches. Use SAP HANA native backup and restore to copy the data from the original SAP HANA instance to the newly launched SAP HANA instance.

A

C

The correct answer is C because it leverages SAP HANA system replication for near-zero downtime. Creating a new instance with the patches already applied, and then replicating the data using system replication, allows for a quick switchover with minimal interruption.

Options A and B apply the patches in place, and the required restart causes significant downtime on the production system. Option D also uses a new patched instance, but backup and restore is a much slower process than system replication and therefore leads to more downtime.

23
Q

A company recently implemented an architecture in which all the systems and components of the company’s SAP environment are hosted on AWS. Front-end users connect from the corporate data center. SAP application servers and database servers are hosted in a private subnet. The company has the following requirements:
• Ensure that the instances in the private subnet can connect to the internet and other AWS services.
• Prevent instances from receiving inbound traffic that is initiated by someone on the internet.
• For SAP support, allow a remote connection between the company’s network and SAP. Ensure that access is available to the production environment as needed.
Which solution will meet these requirements?
A. Use a NAT gateway to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in a public subnet. Assign a public IP address that is reachable from the internet.
B. Use NAT instances to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in the private subnet with an Elastic IP address that is reachable from the internet.
C. Use a bastion host to ensure connectivity between the instances in the private subnet and other AWS services. Set up an AWS Direct Connect connection between the SAP support network and the AWS Region where the architecture is implemented.
D. Use an internet gateway to ensure connectivity between the instances in the private subnet and other AWS services. Deploy SAProuter in a public subnet. Assign a public IP address that is reachable from the internet.

A

A

24
Q

A company plans to migrate its SAP systems from on-premises to AWS to reduce infrastructure costs and is willing to commit for 3 years. They require maximum flexibility in selecting Amazon EC2 instances across AWS Regions, instance families, and instance sizes. Which purchasing option offers the lowest cost while meeting these requirements?

A. Spot Instances
B. 3-year Compute Savings Plan
C. 3-year EC2 Instance Savings Plan
D. 3-year Reserved Instances

A

B

The correct answer is B, the 3-year Compute Savings Plan. Compute Savings Plans keep the committed discount while allowing changes of instance family, size, operating system, and AWS Region, which matches the company's flexibility requirement exactly. Spot Instances (A) are unsuitable for production SAP systems because they can be interrupted at any time. EC2 Instance Savings Plans (C) and Reserved Instances (D) offer discounts but tie the commitment to a specific instance family and Region, so they lack the required flexibility.

25
Q

A company runs its SAP Business Suite on SAP HANA systems on AWS. The company's production SAP ERP Central Component (SAP ECC) system uses an x1e.32xlarge (memory optimized) Amazon EC2 instance and is 3.5 TB in size. Because of expected future growth, the company needs to resize the production system to use a u-* EC2 High Memory instance. The company must resize the system as quickly as possible and must minimize downtime during the resize activities. Which solution will meet these requirements?

A. Resize the instance by using the AWS Management Console or the AWS CLI.
B. Create an AMI of the source system. Launch a new EC2 High Memory instance that is based on that AMI.
C. Launch a new EC2 High Memory instance. Install and configure SAP HANA on the new instance by using AWS Launch Wizard for SAP. Use SAP HANA system replication to migrate the data to the new instance.
D. Launch a new EC2 High Memory instance. Install and configure SAP HANA on the new instance by using AWS Launch Wizard for SAP. Use SAP HANA backup and restore to back up the source system directly to Amazon S3 and to migrate the data to the new instance.

A

C

C is correct because SAP HANA system replication keeps the source and target systems synchronized while the source stays online, so the final takeover requires only minimal downtime. This is significantly faster than creating an AMI (option B) or performing a backup and restore (option D), both of which require extended downtime for a 3.5 TB database. Option A is incorrect because a direct in-place resize from an x1e instance to a u-* High Memory instance is not supported.

26
Q

A company deploys its SAP ERP system on AWS in a highly available configuration across two Availability Zones. The cluster is configured with an overlay IP address and a Network Load Balancer (NLB) to provide access to the SAP application layer to all users. The company's analytics team has created several Operational Data Provisioning (ODP) extractor services for the SAP ERP system. A highly available ETL system will call the ODP extractor services. The ETL system is hosted on Amazon EC2 instances that are deployed in an analytics VPC in a different AWS account. An SAP solutions architect needs to prevent the ODP extractor services from being used as an attack vector to overload the SAP ERP system. Which solution will provide the MOST protection for the ODP extractor services?

A. Configure VPC peering between the SAP VPC and the analytics VPC. Use network ACL rules in the SAP VPC to allow traffic to the NLB from only authorized sources: the analytics VPC CIDR block and the SAP end users' network CIDR block.
B. Create a transit gateway in the SAP account. Share the transit gateway with the analytics account. Attach the SAP VPC and the analytics VPC to the transit gateway. Use network ACL rules in the SAP VPC to allow traffic to the NLB from only authorized sources: the analytics VPC CIDR block and the SAP end users' network CIDR block.
C. Configure VPC peering between the SAP VPC and the analytics VPC. Update the NLB security group rules to accept traffic only from authorized sources: the ETL instances CIDR block and the SAP end users' network CIDR block.
D. Create a VPC endpoint service configuration on the SAP VPC. Specify the NLB in the endpoint configuration. In the analytics account, create an IAM role that has permission to create a connection to the endpoint service. Attach the role to the ETL instances. While logged in to the ETL instances, programmatically create an interface endpoint to the NLB. Accept the request to activate the interface connection.

A

D

Option D, using a VPC endpoint service (AWS PrivateLink) with an IAM role, is the most secure solution. This approach provides the most granular control over access, limiting connectivity to only the specific ETL instances that hold the correct IAM role, with no routable network path between the VPCs. Options A, B, and C rely on network ACLs or security groups, which filter only by CIDR block and therefore allow broader access than IAM-based control. Option C in particular protects only the NLB listener, not the connection setup itself. Option D minimizes the risk of the ODP services being used as an attack vector by controlling access at the instance level.
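
A compressed boto3 sketch of the PrivateLink setup in option D (ARNs and VPC/subnet IDs are hypothetical; the second client would run under credentials from the analytics account):

    import boto3

    # SAP account: expose the NLB as an endpoint service that must accept
    # each connection request.
    ec2 = boto3.client("ec2")
    svc = ec2.create_vpc_endpoint_service_configuration(
        AcceptanceRequired=True,
        NetworkLoadBalancerArns=["arn:aws:elasticloadbalancing:us-east-1:"
                                 "111122223333:loadbalancer/net/sap-nlb/abc"],
    )

    # Analytics account, run under the IAM role that is allowed to create
    # the connection: create an interface endpoint to that service.
    analytics_ec2 = boto3.client("ec2")  # assumes analytics-account credentials
    analytics_ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0aaa1111",
        ServiceName=svc["ServiceConfiguration"]["ServiceName"],
        SubnetIds=["subnet-0bbb2222"],
    )
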
27
Q

A company wants to migrate a native SAP HANA database to AWS. The database ingests large amounts of data every month, and the size of the database is growing rapidly. The company needs to store data for 10 years to meet a regulatory requirement. The company uses data from the last 2 years frequently in several reports. This recent data is critical and must be accessed quickly. The data that is 3-6 years old is used a few times a year and can be accessed in a longer time frame. The data that is more than 6 years old is rarely used and also can be accessed in a longer time frame. Which combination of steps will meet these requirements? (Choose three.)

A. Keep the frequently accessed data from the last 2 years in a hot tier on an SAP HANA certified Amazon EC2 instance.
B. Move the frequently accessed data from the last 2 years to SAP Information Life Cycle Management (ILM) with SAP IQ.
C. Move the less frequently accessed data that is 3-6 years old to a warm tier on Amazon Elastic File System (Amazon EFS) by using SAP HANA dynamic tiering.
D. Move the less frequently accessed data that is 3-6 years old to a warm tier on Amazon Elastic File System (Amazon EFS) by using data aging.
E. Move the rarely accessed data that is more than 6 years old to a cold tier on Amazon S3 by using SAP Data Hub.
F. Move the rarely accessed data that is more than 6 years old to a cold tier on SAP BW Near Line Storage (NLS) with Apache Hadoop.

A

A, C, E

A is correct because keeping frequently accessed data (last 2 years) on a fast, hot tier such as an SAP HANA certified EC2 instance ensures quick access for critical reports. C is correct because SAP HANA dynamic tiering moves less frequently accessed data (3-6 years old) to a warm tier on Amazon EFS, balancing cost and access time. E is correct because Amazon S3 is a cost-effective cold tier for rarely accessed data (over 6 years old) and meets the long-term archival requirement; SAP Data Hub handles the data movement and integration with S3.

B is incorrect because SAP ILM with SAP IQ adds latency and suits less frequently accessed data, not the hot tier. D is incorrect because data aging alone does not define a storage tier, and data aging applies to SAP Business Suite applications rather than a native SAP HANA database. F is incorrect because SAP BW NLS with Apache Hadoop is harder to integrate with a native SAP HANA database and is less cost-effective than S3 for this scenario.

28
Q

An SAP engineer is designing a storage configuration for an SAP S/4HANA production system on AWS. The system will run on an Amazon EC2 instance with a memory size of 2 TB. The SAP HANA sizing report recommends storage of 2,400 GB for data and 512 GB for logs. The system requires 9,000 IOPS for data storage and throughput of 300 MBps for log storage. Which Amazon Elastic Block Store (Amazon EBS) volume configuration will meet these requirements MOST cost-effectively?

A. For /hana/data, use two 900 GB Provisioned IOPS SSD (io1) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 512 GB General Purpose SSD (gp3) EBS volume that is configured with the required throughput.
B. For /hana/data, use one 2,400 GB General Purpose SSD (gp3) EBS volume that is configured with the required IOPS. For /hana/log, use one 512 GB gp3 EBS volume that is configured with the required throughput.
C. For /hana/data, use two 1,200 GB Provisioned IOPS SSD (io2) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 525 GB io2 EBS volume that is configured with the required throughput.
D. For /hana/data, use two 1,200 GB General Purpose SSD (gp3) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 512 GB gp3 EBS volume that is configured with the required throughput.

A

D

D is the most cost-effective option. Options A and C use io1 and io2 volumes, which are more expensive than gp3 for this IOPS requirement; option A also undersizes /hana/data at 1,800 GB total against the recommended 2,400 GB. Option B meets the requirement with a single 2,400 GB gp3 volume provisioned to 9,000 IOPS, but every gp3 volume includes a 3,000 IOPS baseline at no extra charge, so a single volume pays for 6,000 additional IOPS. Option D stripes two 1,200 GB gp3 volumes in RAID 0 at 4,500 IOPS each (9,000 IOPS total), paying for only 1,500 additional IOPS per volume, which makes it cheaper than option B.
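
A boto3 sketch of provisioning these volumes (the Availability Zone is assumed; the RAID 0 stripe itself is then built at the operating-system level, for example with LVM or mdadm):

    import boto3

    ec2 = boto3.client("ec2")

    # Two gp3 data volumes at 4,500 IOPS each: 9,000 IOPS across the stripe,
    # paying for only 1,500 IOPS above each volume's 3,000 IOPS baseline.
    for _ in range(2):
        ec2.create_volume(AvailabilityZone="us-east-1a", Size=1200,
                          VolumeType="gp3", Iops=4500)

    # One gp3 log volume provisioned to 300 MiB/s of throughput.
    ec2.create_volume(AvailabilityZone="us-east-1a", Size=512,
                      VolumeType="gp3", Throughput=300)
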
29
Q

An SAP consultant is planning a migration of an on-premises SAP landscape to AWS. The landscape includes databases from Oracle, IBM Db2, and Microsoft SQL Server. The system copy procedure accesses the copied data on the destination system to complete the copy. Which password must the SAP consultant obtain from the source system before initiating the export or backup?

A. The password of the <sid>adm operating system user
B. The password of the SAP* user in client 000
C. The password of the administrator user of the database
D. The password of the DDIC user in client 000

A

C

The correct answer is C because the system copy procedure requires administrative access to the database to perform its tasks, such as exporting or backing up the data. Options A, B, and D are incorrect because they relate to operating system or SAP application users, not the database administrator credentials required for database-level operations during the system copy.

30
Q

A company migrated its SAP workload to AWS. Their SAP engineer implemented SAProuter on an Amazon EC2 instance running SUSE Linux Enterprise Server in a public subnet as an On-Demand Instance. After initial testing, the EC2 instance was stopped. When restarted, the SAP Support team could not connect to SAProuter. What should the engineer do to permanently resolve this issue?

A. Re-install SAProuter on an EC2 instance in a private subnet. Update the SAProuter configuration with the instance's private IP address. Deploy a managed NAT gateway for AWS. Route SAP connectivity through the NAT gateway.
B. Allocate an Elastic IP address to the EC2 instance that hosts SAProuter. Update the SAProuter configuration with the Elastic IP address.
C. Modify the security group associated with the EC2 instance that hosts SAProuter to allow access to all ports from the 0.0.0.0/0 CIDR block.
D. Update the SAProuter configuration with the private IP address of the EC2 instance that hosts SAProuter.

A

B

The correct answer is B because the problem stems from the dynamic nature of public IP addresses assigned to EC2 instances. When the instance is stopped and restarted, it receives a new public IP address, so the SAProuter configuration that referenced the original public IP address becomes invalid. An Elastic IP address remains associated with the instance across restarts, which resolves the connectivity problem permanently.

Option A is incorrect because moving to a private subnet does not address the root cause: the changing public IP. A NAT gateway provides outbound connectivity only and cannot accept the inbound connections that SAP Support requires. Option C is incorrect because opening all ports to 0.0.0.0/0 is extremely insecure and does not fix the IP address change. Option D is incorrect because a private IP address is not reachable from SAP Support's network; SAProuter needs a publicly reachable, stable address.
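
A boto3 sketch of option B (the instance ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate a static Elastic IP and bind it to the SAProuter instance;
    # the address survives stop/start cycles.
    eip = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(InstanceId="i-0123456789abcdef0",
                          AllocationId=eip["AllocationId"])
    print("Point the SAProuter configuration at:", eip["PublicIp"])
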
31
Q

A company is evaluating options to migrate its on-premises SAP ERP Central Component (SAP ECC) EHP 8 system to AWS. The company does not want to make any changes to the SAP versions or database versions. The system runs on SUSE Linux Enterprise Server and SAP HANA 2.0 SPS 05. The existing on-premises system has a 1 TB database. The company has 1 Gbps of internet bandwidth available for the migration. The company must complete the migration with the least possible downtime and disruption to business. Which solution will meet these requirements?

A. Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version of SAP HANA that the source system uses. Take a full backup of the source SAP HANA database to disk. Copy the backup by using an AWS Storage Gateway Tape Gateway. Restore the backup on the target SAP HANA instance that is running on Amazon EC2.
B. Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version of SAP HANA that the source database uses. Establish replication at the source, and register the SAP HANA instance that is running on Amazon EC2 as secondary. After the systems are synchronized, initiate a takeover so that the SAP HANA instance that is running on Amazon EC2 becomes primary. Shut down the on-premises system. Start SAP on the EC2 instances.
C. Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version that the source system uses. Take a full offline backup of the source SAP HANA database. Copy the backup to Amazon S3 by using the AWS CLI. Restore the backup on a target SAP HANA instance that runs on Amazon EC2. Start SAP on the EC2 instances.
D. Take an offline SAP Software Provisioning Manager export of the on-premises system. Use an AWS Storage Gateway File Gateway to transfer the export. Import the export on Amazon EC2 instances to create the target SAP system.

A

B

The correct answer is B because it utilizes SAP HANA system replication. This method minimizes downtime by synchronizing data between the on-premises system and the AWS system before a switchover. Once synchronization is complete, the AWS system becomes primary with minimal disruption. Options A and C involve taking a full backup and restoring it, which will result in significant downtime. Option D, using an SAP Software Provisioning Manager export, is also likely to cause substantial downtime due to the length of the export and import process.

32
Q

A company is migrating its SAP S/4HANA landscape from on premises to AWS. An SAP solutions architect is designing a backup solution for the SAP S/4HANA landscape on AWS using AWS Backint Agent to store backups in Amazon S3. The company's backup policy requires a retention period of 150 days for weekly full online backups and 30 days for daily transaction log backups. The company needs to maintain this policy on AWS while maximizing data resiliency and retrieving backup data within 10 hours, one or two times per year. Which solution will meet these requirements MOST cost-effectively?

A. Configure the target S3 bucket to use S3 Glacier Deep Archive for the backup files. Create S3 Lifecycle rules on the S3 bucket to delete full online backup files older than 150 days and log backup files older than 30 days.
B. Configure the target S3 bucket to use S3 Standard storage for the backup files. Create an S3 Lifecycle rule to move all backup files to S3 Glacier Instant Retrieval. Create additional S3 Lifecycle rules to delete full online backup files older than 150 days and log backup files older than 30 days.
C. Configure the target S3 bucket to use S3 One Zone-Infrequent Access (S3 One Zone-IA) for the backup files. Create S3 Lifecycle rules to move full online backup files older than 30 days to S3 Glacier Flexible Retrieval and to delete log backup files older than 30 days. Create an additional S3 Lifecycle rule to delete full online backup files older than 150 days.
D. Configure the target S3 bucket to use S3 Standard-Infrequent Access (S3 Standard-IA) for the backup files. Create S3 Lifecycle rules to move full online backup files older than 30 days to S3 Glacier Flexible Retrieval and to delete log backup files older than 30 days. Create an additional S3 Lifecycle rule to delete full online backup files older than 150 days.

A

D

Option D is the most cost-effective solution. Option A is ruled out because S3 Glacier Deep Archive retrieval times (12 hours or more) exceed the 10-hour requirement. Option B is more expensive because it stores new backups in S3 Standard. Option C is cheaper per GB than D but uses S3 One Zone-IA, which stores data in a single Availability Zone and therefore does not maximize data resiliency. Option D balances cost and durability by landing backups in S3 Standard-IA and transitioning older full backups to S3 Glacier Flexible Retrieval, which supports retrieval within the required 10-hour window.
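
A sketch of the option D lifecycle rules with boto3 (the bucket name and the full/ and log/ prefixes are assumed; GLACIER is the API storage-class name for S3 Glacier Flexible Retrieval):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-hana-backups",
        LifecycleConfiguration={"Rules": [
            {"ID": "full-to-glacier-then-delete",
             "Filter": {"Prefix": "full/"},
             "Status": "Enabled",
             "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
             "Expiration": {"Days": 150}},
            {"ID": "delete-log-backups",
             "Filter": {"Prefix": "log/"},
             "Status": "Enabled",
             "Expiration": {"Days": 30}},
        ]},
    )
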
33
Q

A global retail company running its SAP S/4HANA workload on AWS needs a solution to extract historical data for analytics and sales forecasting in new geographies. Which solution requires the LEAST custom development effort to extract data from SAP S/4HANA to Amazon S3, perform analytics, and generate sales forecasts?

A. Use AWS AppSync to extract data from SAP S/4HANA and store it in Amazon S3. Use AWS Glue for analytics and Amazon Forecast for sales forecasts.
B. Use the SAP Landscape Transformation (LT) Replication Server SDK to extract data, integrate it with SAP Data Services, and store it in Amazon S3. Use Amazon Athena for analytics and Amazon Forecast for sales forecasts.
C. Use Amazon AppFlow to extract data from SAP S/4HANA and store it in Amazon S3. Use Amazon QuickSight for analytics and Amazon Forecast for sales forecasts.
D. Integrate AWS Glue and AWS Lambda with the SAP Operational Data Provisioning (ODP) Framework to extract data from SAP S/4HANA and store it in Amazon S3. Use Amazon QuickSight for analytics and Amazon Forecast for sales forecasts.

A

C

Option C, using Amazon AppFlow with Amazon QuickSight and Amazon Forecast, requires the least custom development: AppFlow provides a managed SAP OData connector that moves data to S3 through configuration rather than code. Options B and D require more complex setups and integrations (SAP LT Replication Server, SAP Data Services, the SAP ODP Framework with Glue and Lambda), involving significant configuration and potentially custom code. Option A is unsuitable because AWS AppSync is a managed GraphQL API service, not a data-extraction tool for SAP S/4HANA. Option C is therefore the most straightforward, lowest-effort solution that meets all requirements.

34
Q

A company's SAP solutions architect needs to design a highly available SAP S/4HANA application architecture on AWS. The SAP NetWeaver ASCS, SAP NetWeaver PAS, and SAP HANA database components will run on separate Amazon EC2 instances in a VPC with a 10.0.0.0/24 CIDR block, using two subnets in different Availability Zones. Each EC2 instance will run Red Hat Enterprise Linux. Which set of overlay IP addresses can the architect use to ensure high availability for the SAP NetWeaver ASCS and SAP HANA components?

A. Two overlay IP addresses: 10.0.0.50 for SAP ASCS and 10.0.0.54 for SAP HANA
B. Two overlay IP addresses: 192.168.0.50 for SAP ASCS and 192.168.0.54 for SAP HANA
C. Three overlay IP addresses: 10.0.0.50 for SAP ASCS, 10.0.0.52 for SAP ERS, and 10.0.0.54 for SAP HANA
D. Three overlay IP addresses: 192.168.0.50 for SAP ASCS, 192.168.0.52 for SAP ERS, and 192.168.0.54 for SAP HANA

A

D

The correct answer is D because overlay IP addresses must lie outside the VPC's CIDR block so that they can be directed by route table entries rather than subnet routing. Options A and C use addresses (10.0.0.50, 10.0.0.52, 10.0.0.54) inside the 10.0.0.0/24 range, making them invalid as overlay IPs. Option B uses valid out-of-CIDR addresses but provides only two of them, omitting the overlay IP that the SAP ERS instance needs in a highly available ASCS/ERS cluster. Option D provides three addresses outside the VPC CIDR, covering ASCS, ERS, and SAP HANA.
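
A boto3 sketch of publishing one of these overlay IPs (route table and instance IDs are hypothetical; on failover, the cluster repoints the route with the equivalent of ReplaceRoute):

    import boto3

    ec2 = boto3.client("ec2")

    # /32 route for the overlay IP, outside the 10.0.0.0/24 VPC CIDR,
    # pointing at the currently active ASCS node.
    ec2.create_route(RouteTableId="rtb-0aaa1111",
                     DestinationCidrBlock="192.168.0.50/32",
                     InstanceId="i-0123456789abcdef0")
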
35
A company is running its on-premises SAP ERP Central Component (SAP ECC) system on an Oracle database on Oracle Enterprise Linux. The database is 1 TB in size and uses 27,000 IOPS for its peak performance. Multiple SSD volumes are striped to store Oracle data files in separate sapdata directories to gain the required IOPS. The company is planning to move this workload to AWS. The company chooses high I/O bandwidth instances with a Nitro hypervisor to host the target database instance. Downtime is not a constraint for the migration. The company needs an Amazon Elastic Block Store (Amazon EBS) storage layout that optimizes cost for the migration. How should the company reorganize the Oracle data files to meet these requirements? A. Reorganize the Oracle data files into one 9 TB General Purpose SSD (gp2) EBS volume. B. Reorganize the Oracle data files into a striped volume of three 3 TB General Purpose SSD (gp2) EBS volumes. C. Reorganize the Oracle data files into one 1 TB General Purpose SSD (gp3) EBS volume with 27,000 provisioned IOPS. D. Reorganize the Oracle data files into ten 100 GB General Purpose SSD (gp3) EBS volumes.
B The best answer is B. While D might seem cost-effective because of the lower price of gp3 volumes, option D does not state that the ten volumes are striped or arranged in a RAID configuration, so there is no guarantee they combine to deliver the necessary 27,000 IOPS. Option B explicitly stripes the volumes, allowing their IOPS to be aggregated: gp2 delivers 3 IOPS per GiB, so each 3 TB volume provides roughly 9,000 IOPS, and three volumes striped together exceed the 27,000 IOPS requirement. Option A is not cost-effective because 9 TB far exceeds the 1 TB of data, and a single gp2 volume is capped at 16,000 IOPS in any case. Option C is infeasible because a gp3 volume supports a maximum of 16,000 provisioned IOPS, well short of the 27,000 IOPS requirement. Therefore, B is the most accurate and practical solution.
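The arithmetic behind option B can be reproduced directly from the published gp2 formula (3 IOPS per GiB, capped at 16,000 IOPS per volume); this sketch only restates that math:

```python
GP2_IOPS_PER_GIB = 3
GP2_MAX_IOPS_PER_VOLUME = 16_000

def gp2_volume_iops(size_gib: int) -> int:
    """Baseline IOPS for a single gp2 volume of the given size."""
    return min(size_gib * GP2_IOPS_PER_GIB, GP2_MAX_IOPS_PER_VOLUME)

# Option B: three 3 TiB (3072 GiB) gp2 volumes striped together.
per_volume = gp2_volume_iops(3072)
print(per_volume)          # 9216 (~9,000 IOPS per volume)
print(per_volume * 3)      # 27648 -> meets the 27,000 IOPS requirement
```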
36
A company has grown rapidly, increasing the data volume and resource requirements for its SAP HANA database on AWS. The SAP HANA database is a scale-up installation. To address this, the company plans to change its Amazon EC2 instance type to a virtual EC2 High Memory instance and its Amazon EBS volume type to a higher-performance volume type. The EC2 instance is a current-generation instance both before and after the change, and both the EC2 instance and the EBS volume meet all prerequisites for the changes. An SAP basis administrator must advise the company on whether these changes will require SAP system downtime. Which guidance should the SAP basis administrator provide? A. The change in EC2 instance type does not require SAP system downtime, but the change in EBS volume type requires SAP system downtime. B. The change in EC2 instance type requires SAP system downtime, but the change in EBS volume type does not require SAP system downtime. C. Neither the change in EC2 instance type nor the change in EBS volume type requires SAP system downtime. D. Both the change in EC2 instance type and the change in EBS volume type require SAP system downtime.
B The correct answer is B because changing the EC2 instance type requires stopping the instance, resulting in SAP system downtime. However, changing the EBS volume type can be done online without requiring a system stop. Option A is incorrect because EBS volume type changes can generally be performed without downtime. Option C is incorrect because changing the EC2 instance type necessitates downtime. Option D is incorrect because only the EC2 instance type change requires downtime.
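For reference, an online EBS volume type change of the kind described here can be requested while the volume stays attached and in use (a minimal sketch; the volume ID and target settings are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Elastic Volumes: change the volume type (and optionally IOPS) without
# detaching the volume or stopping the instance.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="io2",
    Iops=16000,
)

# The modification progresses in the background; its state can be polled.
resp = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(resp["VolumesModifications"][0]["ModificationState"])
```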
37
An SAP solutions architect needs to design a highly available solution to support a 12 TB SAP HANA system on AWS. The solution will be deployed in a single AWS Region. Which solution will meet these requirements MOST cost-effectively? A. Use an SAP certified high availability cluster solution and SAP HANA backup and restore. B. Use an SAP certified high availability cluster solution and SAP HANA system replication with data preload. C. Use an SAP certified high availability cluster solution and multi-tiered SAP HANA system replication. D. Use an SAP certified high availability cluster solution and storage replication with AWS Elastic Disaster Recovery.
B The most cost-effective solution is B because SAP HANA system replication with data preload provides fast recovery with minimal additional resource consumption. While option A is less complex, recovery time is significantly longer. Option C is designed for both high availability and disaster recovery, adding unnecessary complexity and cost for a single-region solution. Option D introduces the additional cost and complexity of AWS Elastic Disaster Recovery, which is not required for a high-availability solution within a single region. Although some commenters raised concerns about the cost of data preload requiring a similarly sized standby instance, the overall speed and reduced downtime associated with this method outweigh the cost of the larger instance in the context of the question's focus on *most* cost-effective.
38
An SAP database analyst installs AWS Backint Agent for SAP HANA (AWS Backint agent) by using AWS Systems Manager. The SAP database analyst runs an initial test to perform a database backup for a 512 GB SAP HANA database. The database runs on an SAP certified Amazon EC2 instance type with General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes for all disk storage. The backup is running too slowly. Which actions should the SAP database analyst take to improve the performance of AWS Backint agent? (Choose two.) A. Set the `parallel_data_backup_backint_channels` parameter to a number greater than 1. B. Select a Provisioned IOPS SSD (io2) volume as the backup target for AWS Backint agent. C. Delete unnecessary older backup files from backups that SAP Backint agent performed. D. Change the existing gp2-based SAP HANA data volumes to the Provisioned IOPS SSD (io2) EBS volume type. E. Reinstall AWS Backint agent by using the AWS Backint installer rather than the Systems Manager document.
A, D
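Option A raises the number of parallel Backint channels, and option D raises the read throughput of the data volumes being backed up. If the channel parameter is changed through the SAP HANA SQL interface, it looks roughly like the following (a hedged sketch using SAP's hdbcli Python driver; the connection details are placeholders, the channel count is illustrative, and the global.ini section name is assumed to be backup):

```python
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(
    address="hana-host",       # placeholder host
    port=30015,                # placeholder SQL port
    user="SYSTEM",             # placeholder user
    password="change-me",      # placeholder password
)

cursor = conn.cursor()
# Raise the number of parallel Backint channels used for data backups
# (a value of 8 is illustrative; tune to the instance and volume layout).
cursor.execute(
    "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') "
    "SET ('backup', 'parallel_data_backup_backint_channels') = '8' "
    "WITH RECONFIGURE"
)
conn.close()
```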
39
A company is planning to retire a data center where a few legacy SAP applications run. The applications are SAP R/3 4.6C with a Microsoft SQL Server 2005 database and are running on Windows Server 2008. The applications are outside the extended maintenance period. There is no SAP support for the applications. The company has no plans to upgrade the applications or move the applications to a different platform. The company does not have a policy to maintain installation media for any of the applications. The company wants to migrate the applications to AWS. How can the company migrate the applications to AWS? A. Use AWS Launch Wizard for SAP to launch the applications on AWS. Migrate the applications by using backup and restore. B. Perform an SAP system copy from the source to the target by using SAP Software Provisioning Manager. C. Use AWS Application Migration Service to migrate the applications. D. Manually install the applications on AWS. Perform a database synchronization from the source to the target.
C
40
A company is running SAP on premises and is using hard disk drive (HDD) cost-optimized storage to store SAP HANA archive files. The company directly mounts these disks as local file systems. The company also backs up the archives on a regular basis. The company needs to migrate this setup to AWS. Which solution will meet these requirements MOST cost-effectively? A. Use General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives. B. Use Provisioned IOPS SSD (io1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Back up the archives to Cold HDD (sc1) EBS volumes. C. Use Provisioned IOPS SSD (io1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives. D. Use Cold HDD (sc1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives.
D The discussion section overwhelmingly supports option D as the most cost-effective solution. Options A, B, and C utilize more expensive storage tiers (gp2, io1) for the archive destination than option D, which uses the cost-optimized Cold HDD (sc1) EBS volumes. While performance is a factor in choosing storage, the question prioritizes cost-effectiveness and doesn't specify performance requirements. Using S3 for backups and Glacier for long-term archival is a common and cost-effective strategy regardless of the chosen archive destination.
41
A company migrated its SAP environment to AWS 6 months ago. The landscape consists of a few thousand Amazon EC2 instances for production, development, quality, and sandbox environments. The company wants to minimize the operational cost of the landscape without affecting system performance and availability. Which solutions will meet these requirements? (Choose two.) A. Scale down the EC2 instance size for non-production environments. B. Create an AWS Systems Manager document to automatically stop and start the SAP systems. Use Amazon CloudWatch to automate the scheduling of this task. C. Review the billing data for the EC2 instances. Analyze the workload, and choose an EC2 Instance Savings Plan. D. Create an AWS Systems Manager document to automatically stop and start the SAP systems and EC2 instances for non-production environments outside business hours. Use Amazon EventBridge to automate the scheduling of this task. E. Create an AWS Systems Manager document to automatically stop and start the SAP systems and EC2 instances. Maintain the schedule in the Systems Manager document to automate this task.
C, D The correct answers are C and D because they address cost optimization without compromising performance and availability. C: Choosing an EC2 Instance Savings Plan based on workload analysis offers significant cost savings without impacting performance, since it leverages existing usage patterns for better pricing. D: Stopping non-production EC2 instances and SAP systems outside business hours significantly reduces costs without affecting production, and Amazon EventBridge provides the cron-style scheduling this task requires, as sketched below. A is incorrect because scaling down non-production EC2 instance sizes without analyzing the workload might hurt performance if the chosen size is too small. B is incorrect because it stops and starts only the SAP systems, not the underlying EC2 instances, which would continue to incur costs; CloudWatch is also a monitoring service rather than a scheduler. E is incorrect because it does not distinguish between production and non-production environments, and stopping production systems would be disastrous; it also keeps the schedule inside the Systems Manager document, which is less flexible than EventBridge for managing scheduling requirements.
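A minimal sketch of the EventBridge scheduling from option D (the rule name, cron expression, Automation document ARN, and role ARN are all placeholders):

```python
import boto3

events = boto3.client("events")

# Trigger at 19:00 UTC on weekdays: stop non-production SAP/EC2 systems.
events.put_rule(
    Name="stop-nonprod-sap-evenings",
    ScheduleExpression="cron(0 19 ? * MON-FRI *)",
    State="ENABLED",
)

# Target the Systems Manager Automation document that stops the systems.
events.put_targets(
    Rule="stop-nonprod-sap-evenings",
    Targets=[
        {
            "Id": "stop-sap-automation",
            # Placeholder Automation document ARN
            "Arn": "arn:aws:ssm:us-east-1:111122223333:automation-definition/StopSapNonProd",
            # Placeholder role that lets EventBridge start the automation
            "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeSsmRole",
        }
    ],
)
```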
42
A company is running its SAP S/4HANA production system on AWS. The system is 5 TB in size and has a high performance and IOPS demand for the SAP HANA data storage. The company is using Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2) storage with burstable IOPS to meet this demand. An SAP solutions architect needs to review the current storage layout and recommend a more cost-effective solution without compromising storage performance. What should the SAP solutions architect recommend to meet these requirements? A. Switch from burstable IOPS to allocated IOPS for the gp2 storage. B. Replace all the gp2 storage with Provisioned IOPS SSD (io2) storage. C. Replace all the gp2 storage with gp3 storage. Configure the required IOPS. D. Replace all the gp2 storage with gp3 storage at baseline IOPS.
C
43
A company's SAP solutions architect is configuring a network architecture for an SAP HANA multi-node environment. The company requires isolation of the logical network zones: client, internal, and storage. The database runs on X1 (memory optimized) Amazon EC2 instances and uses Amazon Elastic Block Store (Amazon EBS) volumes for persistent storage. Which combination of actions will provide the required isolation? (Choose three.) A. Attach an AWS Network Firewall policy for each zone to the subnet for the node cluster. B. Attach a secondary elastic network interface to each instance for the internal communications between nodes. C. Attach a secondary elastic network interface to each instance for the storage communications. D. Configure a security group with rules that allow only TCP connections within the security group on the ports that are assigned for the internal network connections. Associate the security group with the appropriate elastic network interface on each instance. E. Configure a security group with rules that allow only TCP connections with the external customer network on the ports that are assigned for the client connections. F. Configure a security group with rules that allow Non-Volatile Memory Express (NVMe) connections within the subnet range. Associate the security group with the appropriate elastic network interface on each instance.
BCD
B. Attaching a secondary elastic network interface to each instance for internal communications gives the inter-node traffic a dedicated interface, isolating it from other traffic.
C. Attaching a secondary elastic network interface for storage communications isolates storage traffic from other network traffic, improving both security and performance.
D. A security group that allows only TCP connections within the security group itself, on the ports assigned to the internal network, restricts the internal zone to authorized cluster members; associating it with the appropriate elastic network interface on each instance enforces the isolation (see the sketch after this answer).
A is incorrect because, while AWS Network Firewall provides network-level filtering, it is not as granular as security groups for separating logical zones within a subnet. E is incorrect because it addresses only the client zone, not the internal or storage zones. F is incorrect because NVMe is an instance-local protocol between the instance and its EBS volumes, not network traffic that security groups can filter; storage isolation is achieved with option C.
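A sketch of options B/C (secondary ENI) and D (self-referencing security group); all resource IDs and the port range are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Dedicated ENI for the internal inter-node zone, protected by its own
# security group.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet
    Groups=["sg-0123456789abcdef0"],       # placeholder internal-zone SG
    Description="SAP HANA internal inter-node communication",
)

ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterface"]["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",      # placeholder instance
    DeviceIndex=1,                         # secondary interface
)

# Option D: restrict the internal zone to members of the same security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 30000, "ToPort": 30010,   # illustrative internal port range
        "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],
    }],
)
```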
44
A company is preparing a greenfield deployment of SAP S/4HANA on AWS. The company wants to ensure that this new SAP S/4HANA landscape is fully supported by SAP. The company's SAP solutions architect needs to set up a new SAProuter connection directly to SAP from the company's landscape within the VPC. Which combination of steps must the SAP solutions architect take to accomplish this goal? (Choose three.) A. Launch the instance that the SAProuter software will be installed on into a private subnet of the VPC. Assign the instance an Elastic IP address. B. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC. Assign the instance an Elastic IP address. C. Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC. Assign the instance an overlay IP address. D. Create a specific security group for the SAProuter instance. Configure rules to allow the required inbound and outbound access to the SAP support network. Include a rule that allows inbound traffic to TCP port 3299. E. Create a specific security group for the SAProuter instance. Configure rules to allow the required inbound and outbound access to the SAP support network. Include a rule that denies inbound traffic to TCP port 3299. F. Use a Secure Network Communication (SNC) internet connection.
BDF
The correct answer is BDF because:
B: The SAProuter must be reachable from the internet so that SAP support can connect. A public subnet allows this, and an Elastic IP provides a static, public IP address for consistent connectivity. A private subnet would not be reachable from the internet.
D: A dedicated security group secures the SAProuter instance. It must allow inbound traffic on TCP port 3299 (the standard SAProuter port) so that SAP support can connect; a rule like the one sketched below covers this. Option E is incorrect because it explicitly denies this necessary inbound traffic.
F: A Secure Network Communication (SNC) internet connection is the SAP-supported method for a direct SAProuter connection to SAP, securing the connection to the SAP support network.
Option A is incorrect because a private subnet is not reachable from the internet, and option C is incorrect because an overlay IP address is not an internet-routable address, so either choice would prevent SAP support from reaching the SAProuter.
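A sketch of the security group rule from option D; the group ID is a placeholder, and the source CIDR is a documentation address standing in for SAP's published support network range:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound SAProuter traffic (TCP 3299) from the SAP support network.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",        # placeholder SAProuter SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3299,
        "ToPort": 3299,
        "IpRanges": [{
            "CidrIp": "203.0.113.10/32",   # placeholder for SAP's published support IP
            "Description": "SAP support network (SAProuter)",
        }],
    }],
)
```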
45
A company is running its on-premises SAP ERP Central Component (SAP ECC) production system on an Oracle database. The company needs to migrate the system to AWS and change the database to SAP HANA on AWS. The system must be highly available. The company also needs a failover system to be available in a different AWS Region to support disaster recovery (DR). The DR solution must meet an RTO of 4 hours and an RPO of 30 minutes. The sizing estimate for the SAP HANA database on AWS is 4 TB. Which combination of steps should the company take to meet these requirements? (Choose two.) A. Deploy the production system and the DR system in two Availability Zones in the same Region. B. Deploy the production system across two Availability Zones in one Region. Deploy the DR system in a third Availability Zone in the same Region. C. Deploy the production system across two Availability Zones in the primary Region. Deploy the DR system in a single Availability Zone in another Region. D. Create an Amazon Elastic File System (Amazon EFS) file system in the primary Region for the SAP global file system. Deploy a second EFS file system in the DR Region. Configure EFS replication between the file systems. E. Set up Amazon Elastic Block Store (Amazon EBS) to store the shared file system data. Configure AWS Backup for DR.
C, D
The correct answers are C and D. Option C addresses high availability by deploying the production system across two Availability Zones in the primary Region and disaster recovery by deploying the DR system in a separate Region. Option D uses Amazon EFS, the appropriate service for the shared SAP global file system, with EFS replication keeping the production and DR copies consistent (see the sketch after this answer). Option A is incorrect because it does not provide sufficient geographic separation for disaster recovery. Option B is incorrect because it keeps both systems in the same Region. Option E is incorrect because EBS is not suitable for shared file systems, and AWS Backup alone would not meet the RTO and RPO requirements specified.
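A sketch of the EFS replication setup from option D (the file system ID and destination Region are placeholders); note that EFS replication creates the destination file system and keeps it in sync automatically:

```python
import boto3

efs = boto3.client("efs")

# Replicate the SAP global file system to the DR Region; EFS creates and
# maintains the read-only destination file system.
efs.create_replication_configuration(
    SourceFileSystemId="fs-0123456789abcdef0",   # placeholder file system ID
    Destinations=[{"Region": "eu-west-1"}],      # placeholder DR Region
)
```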
46
A company is running SAP HANA as the database for all its SAP systems on AWS. The company has a production SAP landscape and a non-production SAP landscape in the same VPC. The company has deployed AWS Backint Agent for SAP HANA (AWS Backint agent) to store backups in an S3 bucket. The S3 bucket is encrypted and is configured with an S3 Lifecycle management policy that moves backup data that is older than 3 days to the S3 Glacier Flexible Retrieval storage class. An SAP engineer needs to perform a system copy by restoring the previous week's full backup of the production SAP HANA instance to the non-production SAP HANA instance. Which combination of steps must the SAP engineer take before the SAP engineer initiates the restoration procedure? (Choose two.) A. Update the AWS Backint agent configuration file of the non-production SAP HANA instance with the details of the AWS Backint agent configuration of the production instance. B. Move the database backup files from the S3 Glacier Flexible Retrieval storage class to the S3 Standard storage class. C. Reset the default encryption behavior of the S3 bucket to use S3 managed encryption keys. D. Update the AWS Backint agent to the most recent version. E. Update the SAP HANA database to the most recent supported version.
A, B
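Step B amounts to initiating an S3 restore for each archived backup object before AWS Backint agent can read it (a sketch; the bucket, key, and restore window are placeholders). Note that restore_object produces a temporary readable copy; permanently moving an object to another storage class would use a copy operation instead:

```python
import boto3

s3 = boto3.client("s3")

# Restore a backup object from Glacier Flexible Retrieval so that it is
# temporarily readable from S3 again.
s3.restore_object(
    Bucket="example-sap-backups",                     # placeholder bucket
    Key="hana-backups/PRD/COMPLETE_DATA_BACKUP_0",    # placeholder key
    RestoreRequest={
        "Days": 3,                                    # keep the copy for 3 days
        "GlacierJobParameters": {"Tier": "Standard"}, # ~3-5 hour retrieval
    },
)

# A HEAD request shows restore progress via the "Restore" response field.
head = s3.head_object(Bucket="example-sap-backups",
                      Key="hana-backups/PRD/COMPLETE_DATA_BACKUP_0")
print(head.get("Restore"))
```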
47
A company is planning to migrate its on-premises production SAP HANA system to AWS. The company uses a SUSE Linux Enterprise High Availability Extension two-node cluster to protect the system against failure. The company wants to use the same solution to provide high availability for the landscape on AWS. Which combination of prerequisites must the company fulfill to meet this requirement? (Choose two.) A. Use instance tags to identify the instances in the cluster. B. On the cluster, configure an overlay IP address that is outside the VPC CIDR range to access the active instance. C. On the cluster, configure an overlay IP address that is within the VPC CIDR range to access the active instance. D. On the cluster, configure an Elastic IP address that is outside the VPC CIDR range to access the active instance. E. On the cluster, configure an Elastic IP address that is within the VPC CIDR range to access the active instance.
A and B
The correct answers are A and B.
A: Instance tagging is a documented prerequisite for the SUSE Linux Enterprise High Availability Extension cluster on AWS; the cluster resource agents use the tags to identify and manage the cluster nodes.
B: An overlay IP address outside the VPC CIDR range is required to access the active instance. Using an address outside the VPC's own addressing scheme avoids conflicts and lets the cluster re-point the route table entry to whichever node is active.
Options C and E are incorrect because an IP address within the VPC CIDR range cannot serve as the floating cluster address; it would conflict with the VPC's internal addressing. Option D is incorrect because Elastic IP addresses are public addresses and are not used as the cluster's floating IP; an overlay IP address provides the needed control within the HA cluster.
48
A company wants to deploy its SAP S/4HANA workload on AWS. The company will need to deploy additional SAP S/4HANA systems during the next year to meet the demands of planned projects. The company wants to adopt a DevOps model for deployment of additional SAP S/4HANA systems. The company’s project team needs to be able to provision new SAP S/4HANA systems with minimum user inputs. An SAP solutions architect must design a solution that can automate most of the implementation tasks. The solution must allow project team members to implement additional SAP S/4HANA systems with minimum required authorizations. Which solution will meet these requirements with the LEAST operational overhead? A. Deploy an SAP S/4HANA system by using AWS Launch Wizard for SAP. Create an AWS Service Catalog product. Authorize the project team to use the AWS Service Catalog product for future deployments of additional SAP S/4HANA systems. B. Provision an Amazon EC2 instance by using an AWS CloudFormation template. Use SAP Software Provisioning Manager to install an SAP S/4HANA system on the EC2 instance to create a base image. Create an Amazon Elastic Block Store (Amazon EBS) snapshot of the SAP S/4HANA system. Create an AWS Service Catalog product for the EC2 instance launch and the EBS snapshot restore. Authorize the project team to use AWS Service Catalog to launch additional EC2 Instances and restore EBS snapshots to new SAP S/4HANA instances. C. Create a base SAP S/4HANA system on an Amazon EC2 instance by using SAP Software Provisioning Manager. Create a custom AMI from the installed SAP S/4HANA base system. Use the custom AMI for future deployments of additional SAP S/4HANA systems. D. Provision an Amazon EC2 instance by using an AWS CloudFormation template. Use SAP Software Provisioning Manager to install an SAP S/4HANA system on the EC2 instance to create a base image. Create a custom AMI from the SAP S/4HANA system. Create an AWS Service Catalog product for the EC2 instance launch and the custom AMI restore. Authorize the project team to use AWS Service Catalog to launch additional SAP S/4HANA instances.
A The best answer is A. Using AWS Launch Wizard for SAP together with AWS Service Catalog provides a streamlined, automated deployment process for SAP S/4HANA systems, minimizing manual intervention and operational overhead, while Service Catalog gives the project team controlled access without excessive permissions. Option B is less efficient because it involves creating and managing EBS snapshots, adding extra steps compared with the integrated Launch Wizard approach. Option C requires manually building and maintaining a custom AMI and offers none of the fine-grained access control that Service Catalog provides. Option D also uses Service Catalog, but it still depends on a manually built SWPM-based custom AMI, which adds more operational overhead than the Launch Wizard approach.
49
A company migrated its SAP ERP Central Component (SAP ECC) environment to an m4.large Amazon EC2 instance (Xen based) in 2016. The company changed the instance type to m5.xlarge (KVM based). Since the change, users are receiving a pop-up box that indicates that the SAP license will expire soon. What could be the cause of this issue? A. The change from the Xen-based m4.large instance type to the KVM-based m5.xlarge instance type is not allowed. B. The Xen-based m4.large instance was running with a lower kernel patch level (SAP Kernel 7.49 Patch Level 401). When the change to a KVM-based instance occurred, the hardware key changed. The instance requires a new license. C. The Xen-based m4.large instance was running with a higher kernel patch level (SAP Kernel 7.49 Patch Level 500). When the change to a KVM-based instance occurred, the hardware key changed. The instance requires a new license. D. Whenever an instance type changes, the change requires a new license.
B The correct answer is B because the move from a Xen-based instance to a KVM-based instance changes the attributes from which older SAP kernels derive the hardware key. At lower patch levels (such as SAP Kernel 7.49 Patch Level 401), the hardware key changes with the underlying virtualization platform, invalidating the installed license, so the instance requires a new license. Option A is incorrect because changing instance types between Xen and KVM is allowed and frequently done. Option C is less likely because newer kernel patch levels compute the hardware key in a way that is stable across such changes. Option D is incorrect because an instance type change does not automatically require a new license; the trigger is the hardware key change that results from the older kernel patch level combined with the virtualization platform change.
50
A company runs a three-system SAP S/4HANA landscape on Amazon EC2 instances. The landscape includes a development system, a QA system, and a production system. Each system runs on its own EC2 instance. The production instance hosts a critical system that must run 24 hours a day, 7 days a week. The development instance and the QA instance need to run only during business hours and can be stopped for the rest of the day. An SAP administrator plans to use AWS Systems Manager to implement an automated start-stop solution for the development instance and the QA instance. When the SAP administrator attempts to deploy the solution, the SAP administrator cannot find any SAP S/4HANA systems in Systems Manager. Which options are possible causes of this problem? (Choose two.) A. An appropriate instance profile that contains the AmazonSSMManagedInstanceCore policy is not assigned to the EC2 instances. B. The EC2 instances are attached to a security group that has an outbound rule that does not explicitly allow port 443. C. Systems Manager Agent (SSM Agent) is not installed on the EC2 instances. D. The AWS Data Provider for SAP agent is not installed on the EC2 instances. E. Amazon CloudWatch detailed monitoring is not turned on for the EC2 instances.
A, C
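Cause A is fixed by attaching an instance profile whose role carries the AmazonSSMManagedInstanceCore managed policy (a sketch; the role, profile, and instance identifiers are placeholders, and the role is assumed to already trust ec2.amazonaws.com):

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Grant the role the permissions SSM Agent needs to register with
# Systems Manager.
iam.attach_role_policy(
    RoleName="SapSsmRole",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

iam.create_instance_profile(InstanceProfileName="SapSsmProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="SapSsmProfile", RoleName="SapSsmRole"
)

# Associate the profile with the development/QA instances.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "SapSsmProfile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```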
51
A company deploys SAP non-production systems on AWS using the standard installation model in a single Availability Zone. Amazon Elastic File System (Amazon EFS) hosts SAP file systems such as `/sapmnt` and `/usr/sap/trans`. The required Amazon EC2 instances are launched, but the EFS file systems cannot be mounted. An SAP engineer must adjust the security groups associated with the EC2 instances and EFS file systems to allow traffic between them. Which two steps should the engineer take? A. Configure the security groups associated with the EFS file systems to allow inbound access for the TCP protocol on the NFS port (TCP 2049) from all EC2 instances where the file systems are mounted. B. Configure the security groups associated with the EFS file systems to allow outbound access for the TCP protocol on the NFS port (TCP 2049) from all EC2 instances where the file systems are mounted. C. Configure the security groups associated with the EFS file systems to allow outbound access from the security group of the corresponding EC2 instances on the NFS port (TCP 2049). D. Configure the security groups associated with the EC2 instances to allow inbound access to the EFS file systems on the NFS port (TCP 2049). E. Configure the security groups associated with the EC2 instances to allow outbound access to the EFS file systems on the NFS port (TCP 2049).
A and E
The correct answers are A and E.
A: The security groups on the EFS mount targets must allow inbound NFS traffic (TCP 2049) from the EC2 instances so that mount requests can reach the file system (see the sketch after this answer).
E: Security groups are stateful, so return traffic is allowed automatically; the explicit outbound rule on the instances' security groups matters because hardened environments often remove the default allow-all egress rule, and the mount connection is initiated outbound from the EC2 instances on TCP 2049.
Option B is incorrect because EFS does not initiate connections; it responds to requests from the EC2 instances. Option C is incorrect because outbound access from the EFS side is not required. Option D is incorrect because the EC2 instances initiate the connection; they need outbound access to the file systems, not an inbound rule.
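A sketch of the inbound rule from step A, which references the instances' security group rather than IP ranges (both group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# On the EFS mount targets' security group: allow NFS (TCP 2049) inbound
# from the security group attached to the SAP EC2 instances.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",            # placeholder EFS SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 2049,
        "ToPort": 2049,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # placeholder EC2 SG
    }],
)
```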
52
A company wants to implement a highly available SAP S/4HANA workload on AWS with automatic failover. The company also needs a cross-Region disaster recovery (DR) solution for the SAP S/4HANA production system. The company has a required RPO of up to 15 minutes and a required RTO of up to 120 minutes for the DR solution. Which solution will meet these requirements MOST cost-effectively? A. Deploy two identically sized SAP S/4HANA systems, each in a different Availability Zone in the primary AWS Region. Set up synchronous SAP HANA system replication with preload between the systems. Set up a pilot light DR system with asynchronous SAP HANA system replication without preload to the secondary Region. B. Deploy two identically sized SAP S/4HANA systems, each in a different Availability Zone in the primary AWS Region. Set up synchronous SAP HANA system replication with preload between the systems. Set up a full-size DR system with asynchronous SAP HANA system replication without preload to the secondary Region. C. Deploy two SAP S/4HANA systems, each in a different Availability Zone in the primary AWS Region. Use a smaller size for the secondary system. Set up synchronous SAP HANA system replication without preload between the systems. Set up a pilot light DR system with asynchronous SAP HANA system replication to the secondary Region. D. Deploy two identically sized SAP S/4HANA systems, each in a different Availability Zone in the primary AWS Region. Set up synchronous SAP HANA system replication with preload between the systems. For DR, set up S3 Cross-Region Replication (CRR) for SAP HANA backup files from the primary Region to the secondary Region.
A A is correct because it uses a pilot light DR system, which is the most cost-effective approach that still meets the RPO of 15 minutes and RTO of 120 minutes: asynchronous replication without preload keeps the DR database current while allowing a smaller, minimally provisioned instance in the DR Region, significantly reducing cost compared to a full-sized system (option B). Option C weakens the in-Region high availability: a smaller secondary without preload could not immediately carry the production load after an automatic failover. Option D, which relies on cross-Region replication of backup files, would require a full database restore in the DR Region and could not meet the 120-minute RTO.
53
A company wants to migrate its SAP S/4HANA infrastructure to AWS. The infrastructure includes production, pre-production, test, and development environments. The pre-production environment is an identical copy of the production environment. The production system must comply with a new policy that requires the landscape to be able to fail over to a secondary AWS Region. The required RPO is 5 minutes. The required RTO is 4 hours. The estimated SAP HANA database size is 6 TB. Which solution will meet these requirements MOST cost-effectively? A. Deploy the pre-production environment in a primary Region. Deploy the other environments in a secondary Region. Configure the disaster recovery SAP HANA system on the pre-production hardware. Implement replication by setting the preload_column_tables parameter to false. Before failover, stop the pre-production environment, set the preload_column_tables parameter to true, and allocate the memory for production takeover. B. Deploy all environments in a primary Region. Configure a 500 GB disaster recovery (DR) site in a secondary Region. Configure DR SAP HANA system replication on the pre-production hardware by setting the preload_column_tables parameter to false. In the event of a disaster, resize the DR environment to 6 TB, set the preload_column_tables parameter to true, and perform a takeover. C. Deploy all environments in a primary Region. Configure a 6 TB disaster recovery (DR) site in a secondary Region. In the event of a disaster, perform a takeover on the DR site. D. Deploy all environments in a primary Region. Configure a 6 TB disaster recovery (DR) site in the same Region. In the event of a disaster, perform a takeover on the DR site.
A
The discussion highlights option A as the most cost-effective solution. Options B, C, and D are incorrect for the following reasons:
B: This option requires resizing the DR environment from 500 GB to 6 TB after a disaster, which adds time and risk to the takeover and jeopardizes the 4-hour RTO; a 500 GB system is also unlikely to be able to run replication for a 6 TB database even without column preload.
C: This option meets the RPO and RTO requirements but is less cost-effective than A because it provisions a full 6 TB DR environment that sits continuously idle.
D: This option fails the requirement to fail over to a secondary AWS Region, a key policy stipulation, because the DR site is in the same Region.
54
A company is running its SAP system on AWS with a secondary SAP HANA database in a sidecar setup. The company requires high IOPS for write performance on its Amazon Elastic Block Store (Amazon EBS) volumes for the secondary SAP HANA database. The EBS volume currently used for the SAP HANA data volume cannot provide the required IOPS. Instance bandwidth for the Amazon EC2 instance hosting the SAP HANA database is sufficient. An SAP solutions architect needs to propose a solution to resolve the IOPS performance issue. Which solution will achieve the required IOPS? A. Replace the EBS storage with EC2 instance store storage. B. Create a RAID 0 configuration with several EBS volumes. C. Use Amazon EC2 Auto Scaling to launch Spot Instances. D. Create a placement group with several EBS volumes.
B The correct answer is B because creating a RAID 0 configuration with several EBS volumes allows for striping, which significantly increases IOPS. This directly addresses the problem of insufficient IOPS from a single EBS volume. A is incorrect because instance store is volatile and unsuitable for persistent database storage. C is irrelevant to IOPS; auto-scaling addresses availability and scalability, not storage performance. D is incorrect because placement groups improve network latency between instances, not storage IOPS.
55
A company hosts its SAP applications and database applications on Amazon EC2 instances in private subnets distributed across two Availability Zones. Public subnets exist in each Availability Zone for public applications. An SAP solutions architect needs a highly available solution with minimal maintenance to download software patches from the internet to these EC2 instances. Which solution best meets these requirements? A. Provision one NAT instance in the public subnet of each Availability Zone. In the route table for each private subnet, add an entry that points to the NAT instance. B. Provision one NAT gateway in the public subnet of each Availability Zone. In the route table for each public subnet, add an entry that points to the NAT gateway. C. Provision one NAT gateway in the public subnet of each Availability Zone. In the route table for each private subnet, add an entry that points to the NAT gateway. D. Provision one NAT instance in the public subnet of a third Availability Zone. In the route table for each public subnet, add an entry that points to the NAT instance in the third Availability Zone.
C The correct answer is C because it provides a highly available and low-maintenance solution. Using NAT Gateways eliminates the need for managing and maintaining individual NAT instances, which reduces administrative overhead. Placing a NAT Gateway in each Availability Zone ensures high availability, as the failure of one AZ won't impact the other. Routing from the private subnets to the NAT Gateways in their respective Availability Zones is the correct configuration for accessing the internet. Option A is incorrect because while it provides high availability across AZs, using NAT instances requires more management and maintenance compared to NAT Gateways. Option B is incorrect because it routes the public subnet traffic through the NAT Gateway, which is unnecessary and inefficient. Option D is incorrect because it introduces a single point of failure by centralizing the NAT instance in a third Availability Zone. If that AZ fails, internet access for all private instances will be disrupted.
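A sketch of option C for one Availability Zone (the subnet, Elastic IP allocation, and route table IDs are placeholders); the same pattern is repeated in the second Availability Zone:

```python
import boto3

ec2 = boto3.client("ec2")

# One NAT gateway in the public subnet of this Availability Zone,
# using a pre-allocated Elastic IP.
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",       # placeholder public subnet
    AllocationId="eipalloc-0123456789abcdef0", # placeholder Elastic IP
)
nat_gw_id = natgw["NatGateway"]["NatGatewayId"]

ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])

# Default route for the private subnet's route table in the same AZ.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",      # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```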
56
A company is planning to migrate its SAP Business Warehouse (SAP BW) 7.5 system on SAP HANA from on premises to AWS. The production database is 4 TB in size and has a scale-out architecture that consists of three nodes. Each node has 2 TB of memory. The company needs to keep the three SAP HANA nodes in the target architecture. Which solution on AWS will provide the HIGHEST throughput for the SAP HANA database? A. Implement SAP HANA scale-out Amazon EC2 instances with default tenancy. B. Implement SAP HANA scale-out Amazon EC2 instances with Capacity Reservations in a cluster placement group. C. Implement SAP HANA scale-out Amazon EC2 instances in a spread placement group. D. Implement SAP HANA scale-out Amazon EC2 instances in a partition placement group.
B The best option is B because placing the SAP HANA instances in a cluster placement group keeps them in the same high-bandwidth, low-latency segment of the network, maximizing TCP/IP throughput between the nodes, and Capacity Reservations ensure the required capacity is available when the instances launch. Option A uses default tenancy with no placement optimization. Options C and D serve different purposes: a spread placement group places each instance on distinct underlying hardware to reduce correlated failures, and a partition placement group distributes groups of instances across logical partitions for large distributed workloads; neither optimizes inter-node throughput, which is what this scale-out database needs within a single Availability Zone.
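A sketch of option B's placement (the group name, AMI, and instance type are placeholders, not certified sizing; Capacity Reservations would be created separately):

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement group: packs instances into the same high-bisection-
# bandwidth network segment for maximum inter-node throughput.
ec2.create_placement_group(
    GroupName="hana-scaleout-cluster",
    Strategy="cluster",
)

# Launch the scale-out nodes into the group (parameters are illustrative).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder SLES-for-SAP AMI
    InstanceType="x1e.16xlarge",        # placeholder certified instance type
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "hana-scaleout-cluster"},
)
```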
57
A company is planning to migrate its SAP Content Server from on premises to Amazon EC2 instances. The SAP Content Server stores data in a MaxDB database. The on-premises servers run the SUSE Linux Enterprise Server operating system. The company wants to assess the benefits of cloud deployment by performing a proof of concept. An SAP solutions architect needs to perform a rehosting of the SAP Content Server on AWS to provide highly available and resilient storage. Which solutions will meet these requirements? (Choose two.) A. Configure Amazon Elastic File System (Amazon EFS) file systems for the MaxDB permanent storage. Install the nfs-utils package on the EC2 instances. Create the necessary mounts to attach the EFS file systems to the EC2 instances. B. Configure Amazon FSx for Lustre file systems for the MaxDB permanent storage. Create the necessary mounts to attach the FSx for Lustre file systems to the EC2 instances. Update the /etc/fstab file with the directory name, DNS name, and mount name. C. Configure General Purpose SSD (gp2 or gp3) or Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volumes for the MaxDB permanent storage. Use the aws ec2 attach-volume AWS CLI command with device, volume ID, and instance ID to attach the mount to each EC2 instance. D. Configure Amazon S3 buckets for the MaxDB permanent storage. Create an IAM instance profile that specifies a role to grant access to Amazon S3. Attach the instance profile to the EC2 instances. E. Configure Amazon Elastic Container Service (Amazon ECS) volumes for the MaxDB permanent storage. Install the nfs-utils package on the EC2 instances. Create the necessary mounts to attach the ECS volumes to the EC2 instances.
A, C
58
A company is running SAP ERP Central Component (SAP ECC) on SAP HANA on premises. The current landscape runs on four application servers that use an SAP HANA database. The company is migrating this environment to the AWS Cloud. The cloud environment must minimize downtime during business operations and must not allow inbound access from the internet. Which solution will meet these requirements? A. Design a Multi-AZ solution. In each Availability Zone, create a private subnet where Amazon EC2 instances that host the SAP HANA database and the application servers will reside. Use EC2 instances that are the same size to host the primary database and the secondary database. Use SAP HANA system replication in synchronous replication mode. B. Design a Single-AZ solution. Create a private subnet where a single SAP HANA database and application servers will run on Amazon EC2 instances. C. Design a Multi-AZ solution. In each Availability Zone, create a private subnet where Amazon EC2 instances that host the SAP HANA database and the application servers will reside. Shut down the EC2 instance that runs the secondary database node. Turn on this EC2 instance only when the primary database node or the primary database node's underlying EC2 instance is unavailable. D. Design a Single-AZ solution. Create two public subnets where Amazon EC2 instances that host the SAP HANA database and the application servers will reside as two separate instances. Use EC2 instances that are the same size to host the primary database and the secondary database. Use SAP HANA system replication in synchronous replication mode.
A The correct answer is A because it addresses both requirements: minimizing downtime and preventing inbound internet access. A Multi-AZ solution with synchronous replication ensures high availability by replicating data across Availability Zones, minimizing downtime in case of an AZ failure. Using private subnets prevents inbound internet access, enhancing security. Option B is incorrect because a Single-AZ solution offers no redundancy and is vulnerable to outages. Option C is incorrect because it describes a manual failover mechanism, which is not efficient and increases downtime. Option D is incorrect because it uses public subnets, exposing the system to the internet.
59
A company is running an SAP Commerce application in a development environment and is ready to deploy it to a production environment on AWS. The production application is expected to experience a significant increase in transactions during sales and promotions. The application's database needs to automatically scale storage, CPU, and memory to minimize costs during low-demand periods while maintaining high availability and performance during high-demand periods. Which solution best meets these requirements? A. Use an SAP HANA single-node deployment that runs on burstable performance Amazon EC2 instances. B. Use an Amazon Aurora MySQL database that runs on serverless DB instance types. C. Use a HyperSQL database that runs on Amazon Elastic Container Service (Amazon ECS) containers with ECS Service Auto Scaling. D. Use an Amazon RDS for MySQL DB cluster that consists of high memory DB instance types.
B B is the correct answer because Amazon Aurora Serverless automatically scales the database resources (storage, CPU, and memory) based on the workload. This directly addresses the requirement for automatic scaling to minimize costs during low-demand periods and maintain performance during high-demand periods. A is incorrect because while burstable performance EC2 instances offer some cost savings, they don't provide the automatic scaling needed for the database. SAP HANA single-node deployments also lack the inherent scalability of Aurora Serverless. C is incorrect because while ECS Service Auto Scaling can scale the number of containers, it doesn't directly manage the database's internal scaling of resources. Furthermore, HyperSQL is not a commonly used database for SAP Commerce, and this approach lacks the managed service benefits of Aurora. D is incorrect because while an RDS for MySQL DB cluster offers high availability and the ability to scale by adding instances, it requires manual intervention to scale resources, unlike the automatic scaling of Aurora Serverless. It also doesn't inherently minimize costs during low-demand periods as effectively as Aurora Serverless.
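A sketch of the serverless Aurora MySQL cluster that option B describes (identifiers, credentials, and capacity bounds are placeholders; Aurora Serverless v2 is shown here as the current serverless option):

```python
import boto3

rds = boto3.client("rds")

# Aurora MySQL cluster with Serverless v2 scaling: capacity (ACUs) floats
# between the configured bounds as load rises and falls.
rds.create_db_cluster(
    DBClusterIdentifier="sap-commerce-db",     # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",                    # placeholder
    MasterUserPassword="change-me-please",     # placeholder
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,   # scale down during quiet periods
        "MaxCapacity": 64,    # scale up for sales and promotions
    },
)

# Instances in a Serverless v2 cluster use the special db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="sap-commerce-db-1",  # placeholder
    DBClusterIdentifier="sap-commerce-db",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```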
60
A company plans to migrate its SAP NetWeaver environment from its on-premises data center to AWS. An SAP solutions architect needs to deploy the AWS resources for an SAP S/4HANA-based system in a Multi-AZ configuration without manually identifying and provisioning individual AWS resources. The SAP solutions architect's task includes the sizing, configuration, and deployment of the SAP S/4HANA system. What is the QUICKEST way to provision the SAP S/4HANA landscape on AWS to meet these requirements? A. Use the SAP HANA Quick Start reference deployment. B. Use AWS Launch Wizard for SAP. C. Create AWS CloudFormation templates to automate the deployment. D. Manually deploy SAP HANA on AWS.
B The correct answer is B because the AWS Launch Wizard for SAP is specifically designed to quickly and easily deploy SAP systems on AWS, including S/4HANA, in a Multi-AZ configuration. It handles the sizing, configuration, and deployment of the necessary AWS resources, eliminating the need for manual provisioning. Option A, while useful, is not as quick as a dedicated tool for SAP deployments. Option C, using CloudFormation, requires creating and testing templates, which takes more time than using a pre-built tool like the Launch Wizard. Option D, manual deployment, is the slowest and most error-prone method.
61
A company is running its on-premises SAP ECC production workload on SUSE Linux Enterprise Server. The SAP ECC workload uses an Oracle database that has 20 TB of data. The company needs to migrate the SAP ECC workload to AWS with no change in database technology. The company must minimize production system downtime. Which solution will meet these requirements? A. Migrate the SAP ECC workload to AWS by using AWS Application Migration Service. B. Install SAP ECC application instances on SUSE Linux Enterprise Server. Use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Amazon RDS for Oracle. C. Migrate the SAP ECC workload to AWS by using SAP Software Provisioning Manager on Oracle Enterprise Linux. D. Install SAP ECC with an Oracle database on Oracle Enterprise Linux. Perform the migration by using Oracle Cross-Platform Transportable Tablespace (XTTS).
D The correct answer is D because Oracle Cross-Platform Transportable Tablespaces (XTTS) supports the required operating system change while keeping downtime minimal through incremental backups of the tablespaces. Oracle databases for SAP on AWS are supported only on Oracle Enterprise Linux, so the SUSE-based stack cannot simply be rehosted: option A (AWS Application Migration Service) would carry the unsupported SUSE/Oracle combination to AWS unchanged. Option B is incorrect because Amazon RDS for Oracle is not supported for SAP ECC. Option C, a classical export/import with SAP Software Provisioning Manager, would result in unacceptable downtime for a 20 TB database.
62
A company is planning to implement a new SAP workload on SUSE Linux Enterprise Server on AWS. The company needs to use AWS Key Management Service (AWS KMS) to encrypt every file at rest. The company also requires that its production SAP workloads and non-production SAP workloads are separated into different AWS accounts. The production account and the non-production account share a common SAP transport directory, `/usr/sap/trans`. The two accounts are connected by VPC peering. What should the company do to achieve the data encryption at rest for the new SAP workload? A. Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Import the KMS key into the non-production account to allow the production systems to access the SAP transport directory. B. Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory. C. Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory. D. Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
B The correct answer is B because it correctly addresses cross-account access to the KMS key: cross-account use requires both a key policy on the key in the production account (where the key resides) and an IAM role in the non-production account that is granted the corresponding permissions. Options A and D are incorrect because they use an asymmetric key; Amazon EBS and Amazon EFS encryption work only with symmetric KMS keys, and option A additionally suggests importing the key into the other account, which is not how KMS cross-account access works. Option C is incorrect because it creates the IAM role in the production account instead of the non-production account, whose systems need the access.
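A sketch of the cross-account arrangement from option B: a key policy statement on the production key grants use of the key to a role in the non-production account (account IDs and the role name are placeholders; service-specific grants such as kms:CreateGrant may also be needed in practice):

```python
import json

import boto3

kms = boto3.client("kms")

# Key policy: root permissions for the production account plus usage
# permissions for a role in the non-production account (222222222222).
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnableRootPermissions",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # prod account
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            "Sid": "AllowNonProdTransportAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:role/SapTransportRole"},  # placeholder role
            "Action": [
                "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                "kms:GenerateDataKey*", "kms:DescribeKey",
            ],
            "Resource": "*",
        },
    ],
}

kms.create_key(
    KeySpec="SYMMETRIC_DEFAULT",
    KeyUsage="ENCRYPT_DECRYPT",
    Policy=json.dumps(key_policy),
    Description="SAP workload encryption key (shared transport directory)",
)
```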
63
A company is deploying SAP Business Suite on SAP HANA by using two Amazon EC2 bare metal instances. The company has set up a Pacemaker cluster for SAP HANA. The cluster is set up between the two instances, which are configured to use SAP HANA system replication. An SAP engineer notices that the overlay IP address is not reachable from the application servers. The overlay IP address is only reachable locally on the database cluster. Which actions should the SAP engineer take to resolve this issue? (Choose three.) A. Turn off the source/destination check on each bare metal instance. B. Modify the security groups to ensure that the minimal ports for connectivity between the application server and the database are opened. C. Add a route table entry to the route tables for the subnets of both bare metal instances for the overlay IP address. D. Ensure that both bare metal instances are in the same subnet. E. Perform a failover and failback by using the Pacemaker cluster. Check whether the overlay IP address routing is functioning correctly. F. Move the Pacemaker cluster to EC2 VM instances instead of bare metal instances.
A, B, C
The correct answers are A, B, and C.
A: Disabling the source/destination check allows an instance to send and receive traffic for an IP address that is not one of its own interface addresses, which is exactly what an overlay IP is. This is required for overlay IP routing to work.
B: Security groups act as firewalls. If the ports needed for communication between the application servers and the database are not open, the application servers cannot reach the database.
C: The overlay IP lies outside the VPC addressing, so a route table entry pointing the overlay IP at the active node is needed in the route tables of the relevant subnets. Without it, traffic has no path to the database (see the sketch after this answer).
D is incorrect: keeping both instances in the same subnet is not required and does not make the overlay IP reachable.
E is incorrect: a failover and failback exercise tests the HA configuration but does not by itself fix a routing problem.
F is incorrect: the choice of bare metal versus virtualized EC2 instances is irrelevant to the routing problem.
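Actions A and C map to two API calls (a sketch; the instance IDs, route table IDs, and overlay IP are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Action A: allow each node to send/receive traffic for the overlay IP,
# which is not one of its own interface addresses.
for instance_id in ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"]:  # placeholders
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        SourceDestCheck={"Value": False},
    )

# Action C: point the overlay IP at the active node in every relevant
# route table (the cluster agent updates this entry on failover).
for rtb in ["rtb-0123456789abcdef0", "rtb-0fedcba9876543210"]:  # placeholders
    ec2.create_route(
        RouteTableId=rtb,
        DestinationCidrBlock="192.168.1.99/32",    # placeholder overlay IP
        InstanceId="i-0aaaaaaaaaaaaaaa1",          # currently active node
    )
```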
64
A company is deploying SAP landscapes in a single AWS account using separate VPCs for production and non-production environments. An Amazon Elastic File System (Amazon EFS) file system hosts the SAP transport file systems. During an automated SAP deployment of the production environment using AWS Launch Wizard for SAP, a failure occurs when reusing the SAP transport directory share from the non-production environment. This failure did not occur in previous non-production deployments. The SAP engineer needs to complete the deployment without incurring additional costs for SAP transport directories. What should the SAP engineer do? A. Perform a manual deployment. B. Set up a new SAP transport directory for the production environment. Copy all files from the non-production transport host into the production transport directory using rsync. Continue to use separate SAP transport directories for the systems. C. Set up a transit gateway or direct VPC peering to make communication possible between the production VPC and the non-production VPC. D. Skip the SAP transport directories step to complete the deployment.
C The best solution is C because it enables communication between the production and non-production VPCs without creating additional SAP transport directories (thus avoiding extra costs). While a transit gateway incurs hourly charges, direct VPC peering within the same Availability Zone is free for data transfer (as of May 2021). Options A and B introduce additional work and/or costs, and option D is likely to result in a failed deployment as the root cause of the problem is the inability to access the transport directory across VPCs.
65
A company is running its on-premises SAP ERP Central Component (SAP ECC) workload on SAP HANA. The company wants to perform SAP S/4HANA conversion of the on-premises SAP ECC on SAP HANA landscape and migrate to AWS. Which solutions can the company use to meet these requirements? (Choose two.) A. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Update Manager (SUM). Migrate to AWS by using SAP Software Provisioning Manager. B. Perform SAP S/4HANA conversion and migration of the SAP ECC on SAP HANA system to AWS by using SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move. C. Perform migration of the SAP ECC on SAP HANA system to AWS by using SAP HANA system replication for database migration and AWS Application Migration Service for migration of the SAP ECC application instances. Perform SAP S/4HANA conversion by using SAP Software Update Manager (SUM). D. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Provisioning Manager. Migrate to AWS by using AWS Application Migration Service. E. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Update Manager (SUM). Migrate the database to AWS by using AWS Database Migration Service (AWS DMS). Deploy SAP S/4HANA application instances.
B, C
66
A company hosts its SAP NetWeaver workload on SAP HANA in the AWS Cloud. The SAP NetWeaver application is protected by a cluster solution that uses Red Hat Enterprise Linux High Availability Add-On. The cluster solution uses an overlay IP address to ensure that the high availability cluster is still accessible during failover scenarios. An SAP solutions architect needs to facilitate the network connection to this overlay IP address from multiple locations. These locations include more than 25 VPCs, other AWS Regions, and the on-premises environment. The company already has set up an AWS Direct Connect connection between the on-premises environment and AWS. What should the SAP solutions architect do to meet these requirements in the MOST scalable manner? A. Use VPC peering between the VPCs to route traffic between them. B. Use AWS Transit Gateway to connect the VPCs and on-premises networks together. C. Use a Network Load Balancer to route connections to various targets within VPCs. D. Deploy a Direct Connect gateway to connect the Direct Connect connection over a private VIF to one or more VPCs in any accounts.
B The correct answer is B because AWS Transit Gateway is designed for connecting large numbers of VPCs and on-premises networks in a highly scalable and manageable way. It avoids the complexity and limitations of VPC peering for many VPCs and provides a central point of connectivity for all locations including different AWS Regions and the on-premises environment via the existing Direct Connect connection. Option A (VPC peering) is not scalable for more than 25 VPCs and becomes complex to manage. Option C (Network Load Balancer) is for distributing traffic across multiple instances within a VPC, not for connecting different VPCs and on-premises environments. Option D (Direct Connect gateway and private VIFs) would require a large number of VIFs, making it less scalable and more complex than using Transit Gateway.
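As a rough sketch of the Transit Gateway approach (all IDs are hypothetical; in practice the transit gateway must reach the `available` state before VPCs can be attached):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Create the transit gateway that acts as the central connectivity hub.
tgw = ec2.create_transit_gateway(Description="SAP connectivity hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each SAP VPC to the hub; repeat for every VPC that must reach
# the overlay IP. The Direct Connect gateway is associated separately.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```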
67
A global retail company is running its SAP landscape on AWS. Recently, the company added an additional SAP Web Dispatcher for high availability, using an Application Load Balancer (ALB) to balance the load between the two. When users access SAP through the ALB, the system is reachable, but the SAP backend system shows an error related to session handling and request distribution. The system worked correctly with a single SAP Web Dispatcher. The configuration of the original SAP Web Dispatcher was replicated on the new one. How can the company resolve this error? A. Maintain persistence by using session cookies. Enable session stickiness (session affinity) on the SAP Web Dispatchers by setting the `wdisp/HTTP/esid_support` parameter to True. B. Maintain persistence by using session cookies. Enable session stickiness (session affinity) on the ALB. C. Turn on host-based routing on the ALB to route traffic between the SAP Web Dispatchers. D. Turn on URL-based routing on the ALB to route traffic to the application based on URL.
B The correct answer is B because the problem stems from the ALB distributing traffic between the two Web Dispatchers, disrupting session persistence. Enabling session stickiness (affinity) on the ALB ensures that requests from a single user are consistently routed to the same Web Dispatcher, maintaining session continuity. Option A is incorrect because configuring session stickiness on the Web Dispatchers themselves won't solve the problem if the ALB is already distributing requests inconsistently between them. Options C and D are irrelevant to the session management issue; they concern how traffic is routed to the Web Dispatchers rather than persistent session handling for individual users.
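A minimal boto3 sketch of enabling duration-based stickiness on the ALB's target group; the target group ARN and cookie duration are illustrative:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-central-1")

# Enable load-balancer-generated cookie stickiness so each user keeps
# hitting the same SAP Web Dispatcher for the life of the cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:eu-central-1:111122223333:"
        "targetgroup/sap-webdisp/0123456789abcdef"
    ),
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```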
68
A company is implementing SAP HANA on AWS. According to the company’s security policy, SAP backups must be encrypted, and only authorized team members can decrypt them. What is the MOST operationally efficient solution that meets these requirements? A. Configure AWS Backint Agent for SAP HANA to create SAP backups in an Amazon S3 bucket. After a backup is created, encrypt the backup by using client-side encryption. Share the encryption key with authorized team members only. B. Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Create a key policy to grant decryption permission to authorized team members only. C. Configure AWS Storage Gateway to transfer SAP backups from a file system to an Amazon S3 bucket. Use an S3 bucket policy to grant decryption permission to authorized team members only. D. Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Grant object ACL decryption permission to authorized team members only.
B The most operationally efficient solution is B because it leverages AWS KMS for server-side encryption, which is inherently more secure and efficient than client-side encryption (option A). Option C uses AWS Storage Gateway, adding an unnecessary layer of complexity. Option D is incorrect because it attempts to manage decryption permissions using object ACLs, which is not the intended use of object ACLs and is less efficient and secure than managing permissions through KMS key policies. Option B directly integrates encryption and access control management within the backup process using the recommended AWS service for key management.
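A sketch of creating a KMS key whose key policy limits decryption to a single role; the account ID and role name are hypothetical, and a root statement is kept so the key remains administrable:

```python
import json
import boto3

kms = boto3.client("kms", region_name="eu-central-1")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # retain full control for account administration
            "Sid": "EnableRootAccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # only authorized team members may decrypt backups
            "Sid": "AllowBackupDecryption",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/backup-admins"},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(
    Description="SAP HANA backup encryption key",
    Policy=json.dumps(key_policy),
)
print(key["KeyMetadata"]["KeyId"])
```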
69
A data analysis company has two SAP landscapes (one on Windows, the other on Red Hat Enterprise Linux) with sandbox, development, QA, pre-production, and production servers located in a shared building. An SAP solutions architect proposes migrating production backups to AWS for high availability. Which solution offers the MOST cost-effective high availability for these backups? A. Take a backup of the production servers. Implement an AWS Storage Gateway Volume Gateway. Create file shares using the Storage Gateway Volume Gateway. Copy the backup files to the file shares through NFS and SMB. B. Take a backup of the production servers. Send those backups to tape drives. Implement an AWS Storage Gateway Tape Gateway. Send the backups to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) through the S3 console. Move the backups immediately to S3 Glacier Deep Archive. C. Implement a third-party tool to take images of the SAP application servers and database server. Take regular snapshots at 1-hour intervals. Send the snapshots to Amazon S3 Glacier directly through the S3 Glacier console. Store the same images in different S3 buckets in different AWS Regions. D. Take a backup of the production servers. Implement an Amazon S3 File Gateway. Create file shares using the S3 File Gateway. Copy the backup files to the file shares through NFS and SMB. Map backup files directly to Amazon S3. Configure an S3 Lifecycle policy to send the backup files to S3 Glacier based on the company’s data retention policy.
D The discussion strongly indicates that option D is the most cost-effective solution. It leverages S3, a highly available and durable storage service, and uses an S3 lifecycle policy to automatically move less frequently accessed data to the cheaper S3 Glacier storage tier based on the company's retention policy. This approach balances accessibility and cost. Option A uses Storage Gateway Volume Gateway, which adds cost and complexity compared to the direct integration of option D. Option B is inefficient, involving tape drives and immediate archival to Glacier Deep Archive, which makes retrieval costly and slow. Option C is also costly due to frequent hourly snapshots directly to Glacier, bypassing the cost-effective tiered storage approach of S3 and S3 Glacier. Storing the same images in multiple regions is also redundant and increases cost unnecessarily unless a specific geographic disaster recovery strategy is needed.
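A minimal sketch of such a lifecycle rule; the bucket name, prefix, and 90-day threshold are placeholders to be replaced by the company's actual retention policy:

```python
import boto3

s3 = boto3.client("s3")

# Transition backup objects to S3 Glacier once they age past the
# retention threshold.
s3.put_bucket_lifecycle_configuration(
    Bucket="sap-prod-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```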
70
A company is running an SAP ERP Central Component (SAP ECC) system on an SAP HANA database that is 10 TB in size. The company is receiving notifications about long-running database backups every day. The company uses AWS Backint Agent for SAP HANA (AWS Backint agent) on an Amazon EC2 instance to back up the database. An SAP NetWeaver administrator needs to troubleshoot the problem and propose a solution. Which solution will help resolve this problem? A. Ensure that AWS Backint agent is configured to send the backups to an Amazon S3 bucket over the internet. Ensure that the EC2 instance is configured to access the internet through a NAT gateway. B. Check the UploadChannelSize parameter for AWS Backint agent. Increase this value in the aws-backint-agent-config.yaml configuration file based on the EC2 instance type and storage configurations. C. Check the MaximumConcurrentFilesForRestore parameter for AWS Backint agent. Increase the parameter from 5 to 10 by using the aws-backint-agent-config.yaml configuration file. D. Ensure that the backups are compressed. If necessary, configure AWS Backint agent to compress the backups and send them to an Amazon S3 bucket.
B The correct answer is B because a common root cause of long-running backups is limited upload throughput between the AWS Backint agent and Amazon S3. The fix is to increase the `UploadChannelSize` parameter in the aws-backint-agent-config.yaml configuration file, sized to the EC2 instance type and storage configuration, which improves upload performance. Option A is incorrect because network connectivity is already working (the problem is slow transfer speed, not reachability). Options C and D are incorrect because they address restore concurrency and compression, respectively, neither of which directly addresses slow upload throughput.
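For orientation, an illustrative fragment of the aws-backint-agent-config.yaml file; all values are placeholders, and `UploadChannelSize` should be sized to the instance's network and Amazon EBS throughput:

```yaml
# aws-backint-agent-config.yaml (illustrative values only)
S3BucketName: "sap-hana-backups"
S3BucketFolder: "PRD"
S3BucketAwsRegion: "eu-central-1"
UploadChannelSize: 20   # increase from the default to speed up uploads
```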
71
An SAP solutions architect is designing an SAP HANA scale-out architecture for SAP Business Warehouse (SAP BW) on SAP HANA on AWS. The SAP solutions architect identifies the design as a three-node scale-out deployment of x1e.32xlarge Amazon EC2 instances. The SAP solutions architect must ensure that the SAP HANA scale-out nodes can achieve the low-latency and high-throughput network performance that are necessary for node-to-node communication. Which combination of steps should the SAP solutions architect take to meet these requirements? (Choose two.) A. Create a cluster placement group. Launch the instances into the cluster placement group. B. Create a spread placement group. Launch the instances into the spread placement group. C. Create a partition placement group. Launch the instances into the partition placement group. D. Based on the operating system version, verify that enhanced networking is enabled on all the nodes. E. Switch to a different instance family that provides network throughput that is greater than 25 Gbps.
A and D A is correct because a cluster placement group packs instances close together within an Availability Zone, minimizing node-to-node latency and providing high-bisection bandwidth. D is correct because enhanced networking must be enabled, and verified at the operating system level, to achieve the instance's full network performance. B is incorrect because spread placement groups place instances on distinct underlying hardware to reduce correlated failures, which works against the low-latency goal. C is incorrect because partition placement groups divide instances across logical partitions on separate racks, which suits large distributed workloads rather than latency-sensitive node-to-node communication. E is incorrect because switching instance families is unnecessary; the requirement can be met through placement and enhanced networking if the chosen instance type is properly configured.
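A minimal boto3 sketch combining answers A and D; the AMI ID is hypothetical, and enhanced networking (ENA) is available when the AMI and operating system support it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# A: create the cluster placement group for low-latency node-to-node traffic.
ec2.create_placement_group(GroupName="hana-scale-out", Strategy="cluster")

# Launch the three scale-out nodes into the group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical SLES/RHEL for SAP AMI
    InstanceType="x1e.32xlarge",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "hana-scale-out"},
)

# D: check that ENA support is flagged on each instance; inside the OS,
# `ethtool -i eth0` should report the ena driver.
resp = ec2.describe_instances(
    Filters=[{"Name": "placement-group-name", "Values": ["hana-scale-out"]}]
)
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst.get("EnaSupport"))
```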
72
A company needs to migrate its critical SAP workloads, including several 10TB+ production databases, from an on-premises data center to AWS with minimal downtime. During a proof of concept, a low-speed, high-latency connection was used. For the actual migration, the company requires high bandwidth, low latency, and connectivity resiliency. The backup connectivity does not need the same speed as the primary connection. Which network configuration best meets these requirements? A. Set up one AWS Direct Connect connection for connectivity between the on-premises data center and AWS. Add an AWS Site-to-Site VPN connection as a backup to the Direct Connect connection. B. Set up an AWS Direct Connect gateway with multiple Direct Connect connections that use a link aggregation group (LAG) between the on-premises data center and AWS. C. Set up Amazon Elastic File System (Amazon EFS) file system storage between the on-premises data center and AWS. Configure a cron job to copy the data into this EFS mount. Access the data in the EFS file system from the target environment. D. Set up two redundant AWS Site-to-Site VPN connections for connectivity between the on-premises data center and AWS.
A A is the correct answer because it pairs a high-bandwidth, low-latency primary connection (Direct Connect) with a resilient, lower-cost backup (Site-to-Site VPN), matching the statement that the backup need not match the primary's speed. B is incorrect because all connections in a LAG terminate on the same AWS device, so a LAG increases bandwidth but does not provide independent backup connectivity; multiple Direct Connect connections would also be unnecessarily expensive. C is incorrect because Amazon EFS is not a migration mechanism for large databases; copying via a cron job would be slow and prone to interruption. D is incorrect because Site-to-Site VPNs generally have lower bandwidth and higher latency than Direct Connect, making them unsuitable as the primary connection for migrating large databases, even though redundancy would be provided.
73
A company wants to migrate its SAP ERP landscape to AWS. The company will use a highly available distributed deployment for the new architecture. Clients will access SAP systems from a local data center through an AWS Site-to-Site VPN connection that is already in place. An SAP solutions architect needs to design the network access to the SAP production environment. Which configuration approaches will meet these requirements? (Choose two.) A. For the ASCS instance, configure an overlay IP address that is within the production VPC CIDR range. Create an AWS Transit Gateway. Attach the VPN to the transit gateway. Use the transit gateway to route the communications between the local data center and the production VPC. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance. B. For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create an AWS Transit Gateway. Attach the VPN to the transit gateway. Use the transit gateway to route the communications between the local data center and the production VPC. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance. C. For the ASCS instance, configure an overlay IP address that is within the production VPC CIDR range. Create a target group that points to the overlay IP address. Create a Network Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance. D. For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create a target group that points to the overlay IP address. Create a Network Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance. E. For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create a target group that points to the overlay IP address. Create an Application Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance.
B, D The correct answers are B and D. The discussion indicates that the overlay IP address for high availability of the ASCS instance should be outside the production VPC CIDR range. This eliminates options A and C. Further, a Network Load Balancer (NLB), not an Application Load Balancer (ALB), is appropriate for this scenario because NLB handles network traffic, while ALB is designed for HTTP traffic. This eliminates option E. Therefore, options B and D, which correctly specify an overlay IP address outside the VPC CIDR range and use a Network Load Balancer, are the correct choices.
74
A company is running an SAP HANA database on AWS. The company is running AWS Backint Agent for SAP HANA (AWS Backint agent) on an Amazon EC2 instance. AWS Backint agent is configured to back up to an Amazon S3 bucket. The backups are failing with an `AccessDenied` error in the AWS Backint agent log file. What should an SAP basis administrator do to resolve this error? A. Assign execute permissions at the operating system level for the AWS Backint agent binary and for AWS Backint agent. B. Assign an IAM role to an EC2 instance. Attach a policy to the IAM role to grant access to the target S3 bucket. C. Assign the correct Region ID for the `S3BucketAwsRegion` parameter in AWS Backint agent for the SAP HANA configuration file. D. Assign the value for the `EnableTagging` parameter in AWS Backint agent for the SAP HANA configuration file.
B The most likely cause of an `AccessDenied` error when backing up to an S3 bucket from an EC2 instance using the AWS Backint Agent is insufficient permissions. Option B directly addresses this by assigning an IAM role with the necessary permissions to the EC2 instance. This ensures the instance has the authority to write data to the specified S3 bucket. Option A is incorrect because operating system-level permissions are not relevant to accessing an S3 bucket; AWS access control is managed through IAM. Options C and D are incorrect because they deal with configuration parameters that are not directly related to permission issues. While incorrect configuration could cause problems, the `AccessDenied` error strongly points to a permissions issue.
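A minimal sketch of an inline policy granting the instance role the S3 access the agent needs, assuming hypothetical role and bucket names:

```python
import json
import boto3

iam = boto3.client("iam")

# Grant list access on the bucket and read/write access on its objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::sap-hana-backups",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::sap-hana-backups/*",
        },
    ],
}

iam.put_role_policy(
    RoleName="hana-backint-instance-role",
    PolicyName="backint-s3-access",
    PolicyDocument=json.dumps(policy),
)
```

The role must then be attached to the EC2 instance through an instance profile so the agent can pick up the credentials.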
75
A company uses a multi-account strategy for its SAP HANA and SAP BW/4HANA instances across development, QA, and production systems within the same AWS Region. Each system resides in its own VPC. They need to establish cross-VPC communication between these SAP systems, with the possibility of adding more SAP systems in the future. The solution must handle connectivity across the SAP systems and hundreds of AWS accounts while maximizing scalability and reliability. Which solution best meets these requirements? A. Create an AWS Transit Gateway in a central networking account. Attach the transit gateway to the AWS accounts. Set up routing and a network ACL to establish communication. B. Set up VPC peering between the accounts. Configure routing in each VPC to use the VPC peering links. C. Create a transit VPC that uses the hub-and-spoke model. Set up routing to use the transit VPC for communication between the SAP systems. D. Create a VPC link for each SAP system. Use the VPC links to connect the SAP systems.
A A is the correct answer because AWS Transit Gateway is designed for large-scale, multi-account network connectivity. It offers scalability and reliability for connecting hundreds of accounts, unlike VPC peering which becomes complex and less manageable at that scale. Option B (VPC peering) is not scalable for hundreds of accounts and would lead to a complex and hard-to-manage network. Option C (Transit VPC) is less scalable and more complex to manage than a Transit Gateway for a large number of accounts. Option D (VPC Link) is not suitable for this scenario as it's designed for connecting to other AWS services, not for complex cross-VPC communication between many accounts.
76
A global enterprise is running SAP ECC workloads on Oracle in an on-premises environment. The enterprise plans to migrate to SAP S/4HANA on AWS. The enterprise recently acquired two other companies. One acquired company runs SAP ECC on Oracle, and the other runs a non-SAP ERP system. The enterprise wants to consolidate all three ERP systems into one SAP S/4HANA system on AWS. Not all data from the acquired companies needs to be migrated. The enterprise needs a solution that minimizes cost and maximizes operational efficiency. Which solution best meets these requirements? A. Perform a lift-and-shift migration of all systems to AWS. Migrate the non-SAP ERP system to SAP ECC. Convert all three systems to SAP S/4HANA using SAP Software Update Manager (SUM) Database Migration Option (DMO). Consolidate the three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems. B. Perform a lift-and-shift migration of all systems to AWS. Migrate the enterprise's initial system to SAP HANA, then convert to SAP S/4HANA. Consolidate the two acquired company systems using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT). C. Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect the enterprise’s initial system to SAP S/4HANA and change the platform to AWS. Consolidate the two acquired company systems with this SAP S/4HANA system using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT). D. Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect all systems to SAP S/4HANA and change the platform to AWS. Consolidate all three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems.
C The discussion highlights that option C is the most cost-effective and efficient choice. Options A and D are less efficient because they convert and migrate every system, including data from the acquired companies that does not need to move. Option B lift-and-shifts the initial system and then performs the SAP HANA migration and S/4HANA conversion as separate steps instead of combining them. Option C uses SUM DMO with System Move to convert and re-platform the enterprise's own system in one step, then consolidates the two acquired systems through the Selective Data Transition approach with DMLT, minimizing data movement and cost.
77
A company’s SAP basis team is responsible for database backups in Amazon S3. The company frequently needs to restore the last 3 months of backups into the pre-production SAP system to perform tests and analyze performance. Previously, an employee accidentally deleted backup files from the S3 bucket. The SAP basis team wants to prevent accidental deletion of backup files in the future. Which solution will meet these requirements? A. Create a new resource-based policy that prevents deletion of the S3 bucket. B. Enable versioning and multi-factor authentication (MFA) on the S3 bucket. C. Create signed cookies for the backup files in the S3 bucket. Provide the signed cookies to authorized users only. D. Apply an S3 Lifecycle policy to move the backup files immediately to S3 Glacier.
B The best solution is B because enabling versioning preserves previous versions of objects, so an accidental delete can be recovered by restoring the prior version. MFA Delete adds a further safeguard by requiring multi-factor authentication before a version can be permanently deleted or versioning can be suspended. Option A is incorrect because blocking all deletions on the entire S3 bucket is overly restrictive and could hinder legitimate operations. Option C is incorrect because signed cookies control access but do not prevent accidental deletion by an authorized user. Option D is incorrect because moving files to S3 Glacier is for archiving; retrieval is too slow for the frequent restores the team performs, and archiving does not prevent deletion in any case.
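A minimal boto3 sketch (bucket name and MFA device values are hypothetical; note that MFA Delete can only be enabled with the bucket owner's root credentials):

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning with MFA Delete. The MFA argument concatenates the
# device serial number, a space, and a current token code.
s3.put_bucket_versioning(
    Bucket="sap-db-backups",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
)
```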
78
A company wants to run SAP HANA on AWS in the eu-central-1 Region. The company must make the SAP HANA system highly available by using SAP HANA system replication. In addition, the company must create a disaster recovery (DR) solution that uses SAP HANA system replication in the eu-west-1 Region. As prerequisites, the company has confirmed that Inter-AZ latency is less than 1 ms and that Inter-Region latency is greater than 1 ms. Which solutions will meet these requirements? (Choose two.) A. Install the tier 1 primary system and the tier 2 secondary system in eu-central-1. Configure the tier 1 system in Availability Zone 1. Configure the tier 2 system in Availability Zone 2. Configure SAP HANA system replication between tier 1 and tier 2 by using ASYNC replication mode. Install the DR tier 3 secondary system in eu-west-1 by using SYNC replication mode. B. Install the tier 1 primary system and the tier 2 secondary system in eu-central-1. Configure the tier 1 system in Availability Zone 1. Configure the tier 2 system in Availability Zone 2. Configure SAP HANA system replication between tier 1 and tier 2 by using SYNC replication mode. Install the DR tier 3 secondary system in eu-west-1 by using ASYNC replication mode. C. Install the tier 1 primary system and the tier 2 secondary system in eu-central-1. Configure the tier 1 system in Availability Zone 1. Configure the tier 2 system in Availability Zone 2. Configure SAP HANA system replication between tier 1 and tier 2 by using SYNC replication mode. Install the DR tier 3 secondary system in eu-west-1. Store daily backups from tier 1 in an Amazon S3 bucket in eu-central-1. Use S3 Cross-Region Replication to copy the daily backups to eu-west-1, where they can be restored if needed. D. Install the tier 1 primary system in eu-central-1. Install the tier 2 secondary system and the DR tier 3 secondary system in eu-west-1. Configure the tier 2 system in Availability Zone 1. Configure the tier 3 system in Availability Zone 2. Configure SAP HANA system replication between all tiers by using ASYNC replication mode. E. Install the tier 1 primary system and the tier 2 secondary system in eu-central-1. Configure the tier 1 system in Availability Zone 1. Configure the tier 2 system in Availability Zone 2. Configure SAP HANA system replication between tier 1 and tier 2 by using SYNCMEM replication mode. Install the DR tier 3 secondary system in eu-west-1 by using ASYNC replication mode.
B, E Option B is correct because it uses synchronous (SYNC) replication within eu-central-1, where inter-AZ latency is below 1 ms, for high availability, and asynchronous (ASYNC) replication to the eu-west-1 DR site, where inter-Region latency exceeds 1 ms. Synchronous replication requires low latency to guarantee data consistency; asynchronous replication tolerates higher latency at the cost of strict consistency, which is acceptable for DR. Option E is correct for the same reasons: SYNCMEM is a synchronous replication variant suitable within the Region, combined with ASYNC replication for DR. Option A is incorrect because it uses SYNC replication across Regions, where the latency is too high for synchronous operation. Option C is incorrect because it replaces SAP HANA system replication with backups for DR, which violates the stated requirement. Option D is incorrect because it places both secondary systems in eu-west-1, leaving the primary Region without in-Region high availability, and it relies on ASYNC replication even for the tier that should provide HA.
79
A company’s basis administrator is planning to deploy SAP on AWS in Linux. The basis administrator must set up the proper storage to store SAP HANA data and log volumes. Which storage options should the basis administrator choose to meet these requirements? (Choose two.) A. Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1) B. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1, io2) C. Amazon S3 D. Amazon Elastic File System (Amazon EFS) E. Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2, gp3)
B and E
The correct answers are B and E because:

* **B. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1, io2):** Provisioned IOPS SSDs (io1 and io2) offer high performance and low latency, making them suitable for the demanding I/O requirements of SAP HANA databases, particularly for data and log volumes, which require fast read/write speeds.
* **E. Amazon Elastic Block Store (Amazon EBS) General Purpose SSD (gp2, gp3):** General Purpose SSDs (gp2 and gp3) provide a balance of performance and cost-effectiveness. While not as fast as io1/io2, they are still a viable option for storing SAP HANA data and log volumes, especially when cost optimization matters.

Options A, C, and D are incorrect because:

* **A. Amazon Elastic Block Store (Amazon EBS) Throughput Optimized HDD (st1):** HDDs are significantly slower than SSDs and are not recommended for the demanding I/O needs of SAP HANA.
* **C. Amazon S3:** Amazon S3 is an object storage service, not a block storage service suitable for directly mounting and accessing SAP HANA data and log volumes. It's more appropriate for backups and archiving.
* **D. Amazon Elastic File System (Amazon EFS):** Amazon EFS is a network file system, not block storage, and is likewise unsuitable for directly hosting SAP HANA data and log volumes. It is better suited for shared file systems.
80
A company has deployed a highly available SAP NetWeaver system on SAP HANA into a VPC. The system is distributed across multiple Availability Zones within a single AWS Region. SAP NetWeaver is running on SUSE Linux Enterprise Server for SAP. SUSE Linux Enterprise High Availability Extension is configured to protect SAP ASCS and ERS instances and uses the overlay IP address concept. The SAP shared files `/sapmnt` and `/usr/sap/trans` are hosted on an Amazon Elastic File System (Amazon EFS) file system. The company needs a solution that uses already-existing private connectivity to the VPC. The SAP NetWeaver system must be accessible through the SAP GUI client tool. Which solutions will meet these requirements? (Choose two.) A. Deploy an Application Load Balancer. Configure the overlay IP address as a target. B. Deploy a Network Load Balancer. Configure the overlay IP address as a target. C. Use an Amazon Route 53 private zone. Create an A record that has the overlay IP address as a target. D. Use AWS Transit Gateway. Configure the overlay IP address as a static route in the transit gateway route table. Specify the VPC as a target. E. Use a NAT gateway. Configure the overlay IP address as a target.
B, D B is correct because a Network Load Balancer supports IP targets, so the overlay IP address can be registered as a target and reached over the existing private connectivity; an NLB also carries the TCP traffic that the SAP GUI (DIAG protocol) uses. D is correct because AWS Transit Gateway can hold a static route for the overlay IP address (which lies outside the VPC CIDR) with the VPC attachment as the target, making the cluster reachable from connected networks. A is incorrect because an Application Load Balancer handles only HTTP/HTTPS, not SAP GUI's TCP traffic. C is incorrect because a Route 53 A record merely resolves a name to the overlay IP; it does not make the address routable from outside the VPC. E is incorrect because a NAT gateway provides outbound internet access and cannot route inbound traffic to the overlay IP.
81
A company is planning to migrate its on-premises SAP application to AWS. The application runs on VMware vSphere. The SAP ERP Central Component (SAP ECC) server runs on an IBM Db2 database that is 2 TB in size. The company wants to migrate the database to SAP HANA. Which migration strategy will meet these requirements? A. Use AWS Application Migration Service (CloudEndure Migration). B. Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move. C. Use AWS Server Migration Service (AWS SMS). D. Use AWS Database Migration Service (AWS DMS).
B SUM DMO with System Move is the right tool because it combines the heterogeneous database migration (IBM Db2 to SAP HANA) with the platform move to AWS in a single step. Options A and C (AWS Application Migration Service and AWS Server Migration Service) are lift-and-shift tools that replicate servers as-is and cannot change the database platform. Option D is incorrect because AWS DMS does not support migrating the database underneath an SAP NetWeaver application to SAP HANA.
82
A company wants to migrate its SAP workloads to AWS from another cloud provider. The company’s landscape consists of SAP S/4HANA, SAP BW/4HANA, SAP Solution Manager, and SAP Web Dispatcher. SAP Solution Manager is running on SAP HANA. The company wants to change the operating system from SUSE Linux Enterprise Server to Red Hat Enterprise Linux as a part of this migration. The company needs a solution that results in the least possible downtime for the SAP S/4HANA and SAP BW/4HANA systems. Which migration solution will meet these requirements? A. Use SAP Software Provisioning Manager to perform a system export/import for SAP S/4HANA, SAP BW/4HANA, SAP Solution Manager, and SAP Web Dispatcher. B. Use backup and restore for SAP S/4HANA, SAP BW/4HANA, and SAP Solution Manager. Reinstall SAP Web Dispatcher on AWS with the necessary configuration. C. Use backup and restore for SAP S/4HANA and SAP BW/4HANA. Use SAP Software Provisioning Manager to perform a system export/import for SAP Solution Manager. Reinstall SAP Web Dispatcher on AWS with the necessary configuration. D. Use SAP HANA system replication to replicate the data between the source system and the target AWS system for SAP S/4HANA and SAP BW/4HANA. Use SAP Software Provisioning Manager to perform a system export/import for SAP Solution Manager. Reinstall SAP Web Dispatcher on AWS with the necessary configuration.
D The discussion highlights that option D is the best solution because it minimizes downtime for SAP S/4HANA and SAP BW/4HANA. SAP HANA system replication allows a near-zero-downtime migration by replicating the data to the target AWS systems, and SAP Software Provisioning Manager export/import is sufficient for the less critical SAP Solution Manager. Reinstalling the Web Dispatcher is necessary because of the operating system change. Options A, B, and C are less suitable because export/import and backup/restore both require significant downtime compared with replication. While option C combines several methods, it still results in more downtime for S/4HANA and BW/4HANA than option D, and option A uses Software Provisioning Manager for all systems, which isn't ideal for minimizing downtime of the primary systems (S/4HANA and BW/4HANA).
83
A company is running an SAP on Oracle system on IBM Power architecture in an on-premises data center. The Oracle database is 15 TB in size. The company has set up a 100 Gbps AWS Direct Connect connection to AWS from the on-premises data center. Which solution should the company use to migrate the SAP system MOST quickly? A. Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. During the migration window, export a copy of the SAP system and database by using the heterogeneous system copy process and R3load. Copy the output of the SAP system files to AWS through the Direct Connect connection. Import the SAP system to the new SAP installation on AWS. Switch over to the SAP system on AWS. B. Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. Back up the Oracle database by using native Oracle tools. Copy the backup of the Oracle database to AWS through the Direct Connect connection. Import the Oracle database to the SAP system on AWS. Configure Oracle Data Guard to begin replicating on-premises database log changes from the SAP system to the new AWS system. During the migration window, use Oracle to replicate any remaining changes to the Oracle database hosted on AWS. Switch over to the SAP system on AWS. C. Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. Create a staging Oracle database on premises to perform Cross Platform Transportable Tablespace (XTTS) conversion on the Oracle database. Take a backup of the converted staging database. Copy the converted backup to AWS through the Direct Connect connection. Import the Oracle database backup to the SAP system on AWS. Take regularly scheduled incremental backups and XTTS conversions of the staging database. Transfer these backups and conversions to the AWS target database. During the migration window, perform a final incremental Oracle backup. Convert the final Oracle backup by using XTTS. Replay the logs in the target Oracle database hosted on AWS. Switch over to the SAP system on AWS. D. Before the migration window, launch an appropriately sized Amazon EC2 instance on AWS to receive the migrated SAP database. Create an AWS Server Migration Service (AWS SMS) job to take regular snapshots of the on-premises Oracle hosts. Use AWS SMS to copy the snapshot as an AMI to AWS through the Direct Connect connection. During the migration window, take a final incremental SMS snapshot and copy the snapshot to AWS. Restart the SAP system by using the new up-to-date AMI. Switch over to the SAP system on AWS.
C The discussion indicates that option C, using Cross Platform Transportable Tablespaces (XTTS), is the fastest method. The migration is cross-platform (big-endian IBM Power to little-endian x86 on AWS), so the Oracle data files must be endian-converted, which is exactly what XTTS provides; combining an initial converted backup with regular incremental backups keeps the work remaining in the final migration window small. Option A, a full heterogeneous system copy with R3load, is far more time-consuming for a 15 TB database. Option B is not viable because Oracle Data Guard cannot replicate between platforms with different endian formats. Option D is not viable because AWS SMS replicates x86 virtual machines and cannot migrate a system running on IBM Power architecture. The consensus in the discussion points to C as the optimal solution for speed and efficiency in this scenario.
84
A company is planning to deploy a new SAP NetWeaver ABAP system on AWS with an Oracle database that runs on an Amazon EC2 instance. The EC2 instance uses a Linux-based operating system. The company needs a database storage solution that provides flexibility to adjust the IOPS regardless of the allocated storage size. Which solution will meet these requirements MOST cost-effectively? A. General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volumes B. Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage class C. Amazon FSx for Windows File Server D. Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes
A The correct answer is A because gp3 volumes offer the ability to independently adjust IOPS and storage capacity, making them cost-effective for various workloads. While io2 volumes also allow IOPS adjustment, gp3 is generally more cost-effective. Option B, Amazon EFS Standard-IA, is designed for infrequent access and is not suitable for a database requiring high IOPS. Option C, Amazon FSx for Windows File Server, is a file system and not a suitable database storage solution. Option D, io2 volumes, provides high performance but at a higher cost than gp3 for this use case.
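A minimal boto3 sketch showing how gp3 decouples IOPS and throughput from volume size (all values are illustrative):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# With gp3, IOPS and throughput are provisioned independently of size
# (baseline is 3,000 IOPS and 125 MiB/s regardless of capacity).
ec2.create_volume(
    AvailabilityZone="eu-central-1a",
    VolumeType="gp3",
    Size=500,         # GiB
    Iops=9000,        # raised without changing the volume size
    Throughput=500,   # MiB/s
)
```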
85
A company hosts multiple SAP applications on Amazon EC2 instances in a VPC. While monitoring the environment, the company notices that multiple port scans are attempting to connect to SAP portals inside the VPC. These port scans are originating from the same IP address block. The company must deny access to the VPC from all the offending IP addresses for the next 24 hours. Which solution will meet this requirement? A. Modify network ACLs that are associated with all public subnets in the VPC to deny access from the IP address block. B. Add a rule in the security group of the EC2 instances to deny access from the IP address block. C. Create a policy in AWS Identity and Access Management (IAM) to deny access from the IP address block. D. Configure the firewall in the operating system of the EC2 instances to deny access from the IP address block.
A A is correct because Network ACLs (NACLs) operate at the subnet level, controlling traffic entering and leaving subnets. By modifying the NACLs associated with public subnets to deny access from the offending IP address block, the company will prevent all traffic from that block from reaching the VPC. B is incorrect because Security Groups are associated with individual EC2 instances, not entire subnets. Blocking access at the Security Group level would only prevent access to specific instances, not the entire VPC, leaving other SAP applications potentially vulnerable. C is incorrect because IAM manages user access to AWS services, not network traffic. It is not the appropriate tool to control IP address access to a VPC. D is incorrect because configuring the OS firewall only affects the individual EC2 instance where it is applied. This solution is not suitable for denying access to the entire VPC from the offending IP address block.
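A minimal boto3 sketch of the inbound deny entry; the NACL ID and CIDR are placeholders (203.0.113.0/24 is a documentation range):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Deny all traffic from the offending block. NACL rules are evaluated in
# ascending order, so the deny rule number must be lower than any rule
# that would otherwise allow the traffic.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,           # inbound rule
    CidrBlock="203.0.113.0/24",
)
```

After the 24-hour window, the entry can be removed with `delete_network_acl_entry`.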
86
A company wants to deploy an SAP HANA database on AWS by using AWS Launch Wizard for SAP. An SAP solutions architect needs to run a custom post-deployment script on the Amazon EC2 instance that Launch Wizard provisions. Which actions can the SAP solutions architect take to provide the post-deployment script in the Launch Wizard console? (Choose two.) A. Provide the FTP URL of the script. B. Provide the HTTPS URL of the script on a web server. C. Provide the Amazon S3 URL of the script. D. Write the script inline. E. Upload the script.
C, E
87
A company is planning to move its on-premises SAP HANA database to AWS. The company needs to migrate this environment to AWS as quickly as possible. An SAP solutions architect will use AWS Launch Wizard for SAP to deploy this SAP HANA workload. Which combination of steps should the SAP solutions architect follow to start the deployment of this workload on AWS? (Choose three.) A. Download the SAP HANA software. B. Download the AWS CloudFormation template for the SAP HANA deployment. C. Download and extract the SAP HANA software. Upload the SAP HANA software to an FTP server that Launch Wizard can access. D. Upload the unextracted SAP HANA software to an Amazon S3 destination bucket. Follow the S3 file path syntax for the software in accordance with Launch Wizard recommendations. E. Bring the operating system AMI by using the Bring Your Own Image (BYOI) model, or purchase the subscription for the operating system AMI from AWS Marketplace. F. Create the SAP file system by using Amazon Elastic Block Store (Amazon EBS) before the deployment.
A, D, E
88
A company has deployed SAP workloads on AWS. The AWS Data Provider for SAP is installed on the Amazon EC2 instance where the SAP application is running. An SAP solutions architect has attached an IAM role to the EC2 instance with the following policy: [Image](https://img.examtopics.com/aws-certified-sap-on-aws-specialty-pas-c01/image1.png) The AWS Data Provider for SAP is not returning any metrics to the SAP application. Which change should the SAP solutions architect make to the IAM permissions to resolve this issue? A. Add the `cloudwatch:ListMetrics` action to the policy statement with Sid `AWSDataProvider1`. B. Add the `cloudwatch:GetMetricStatistics` action to the policy statement with Sid `AWSDataProvider1`. C. Add the `cloudwatch:GetMetricStream` action to the policy statement with Sid `AWSDataProvider1`. D. Add the `cloudwatch:DescribeAlarmsForMetric` action to the policy statement with Sid `AWSDataProvider1`.
B The correct answer is B because the AWS Data Provider for SAP needs permission to retrieve metric statistics from CloudWatch. The `cloudwatch:GetMetricStatistics` action allows this retrieval. Option A is incorrect because `cloudwatch:ListMetrics` only lists available metrics, it doesn't retrieve their data. Option C is incorrect because `cloudwatch:GetMetricStream` is used for retrieving metrics to a different location, not directly for the SAP application's use. Option D is incorrect because `cloudwatch:DescribeAlarmsForMetric` deals with CloudWatch alarms, not the retrieval of metric data itself.
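A minimal sketch of the corrected statement applied as an inline role policy; the role and policy names are hypothetical, and any actions already present in the original (unshown) policy should be retained alongside the new one:

```python
import json
import boto3

iam = boto3.client("iam")

# Add cloudwatch:GetMetricStatistics to the statement with Sid
# AWSDataProvider1 so the agent can read metric data.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSDataProvider1",
            "Effect": "Allow",
            "Action": ["cloudwatch:GetMetricStatistics"],
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="sap-data-provider-role",
    PolicyName="aws-data-provider",
    PolicyDocument=json.dumps(policy),
)
```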
89
A company that has SAP workloads on premises plans to migrate an SAP environment to AWS. The company is new to AWS and has no prior setup. The company has the following requirements: * The application server and database server must be placed in isolated network configurations. * SAP systems must be accessible to the on-premises end users over the internet. * The cost of communications between the application server and the database server must be minimized. Which combination of steps should an SAP solutions architect take to meet these requirements? (Choose two.) A. Configure a Network Load Balancer for incoming connections from end users. B. Set up an AWS Site-to-Site VPN connection between the company’s on-premises network and AWS. C. Separate the application server and the database server by using different VPCs. D. Separate the application server and the database server by using different subnets and network security groups within the same VPC. E. Set up an AWS Direct Connect connection with a private VIF between the company’s on-premises network and AWS.
B, C
The correct answers are B and C because:

* **B. Set up an AWS Site-to-Site VPN connection between the company’s on-premises network and AWS:** This fulfills the requirement of allowing on-premises end users to access SAP systems over the internet securely. A VPN provides a secure, encrypted connection.
* **C. Separate the application server and the database server by using different VPCs:** This best meets the requirement of isolating the application and database servers while minimizing communication costs. Different VPCs provide the strongest isolation, and traffic between them uses AWS's internal, highly optimized network, minimizing cost compared to traversing the public internet.

Option A is incorrect because a Network Load Balancer is not required to meet the stated criteria; it adds unnecessary complexity and cost. Option D is incorrect because while using different subnets and security groups within the same VPC provides *some* isolation, it is less robust than using separate VPCs. Option E is incorrect because AWS Direct Connect is a more expensive solution compared to a Site-to-Site VPN; while Direct Connect offers higher bandwidth, the requirement is only to make the SAP systems accessible over the internet, not to provide extremely high-bandwidth connectivity. The discussion highlights the cost factor as a key consideration.
90
A company is running its SAP workload on AWS. The company’s security team has implemented the following requirements: * All Amazon EC2 instances for SAP must be SAP certified instance types. * Encryption must be enabled for all Amazon S3 buckets and Amazon Elastic Block Store (Amazon EBS) volumes. * AWS CloudTrail must be activated. * SAP system parameters must be compliant with business rules. * Detailed monitoring must be enabled for all instances. The company wants to develop an automated process to review the systems for compliance with the security team’s requirements. The process also must provide notification about any deviation from these standards. Which solution will meet these requirements? A. Use AWS AppConfig to model configuration data in an AWS Systems Manager Automation runbook. Schedule this Systems Manager Automation runbook to monitor for compliance with all the requirements. Integrate AWS AppConfig with Amazon CloudWatch for notification purposes. B. Use AWS Config managed rules to monitor for compliance with all the requirements. Use Amazon EventBridge (Amazon CloudWatch Events) and Amazon Simple Notification Service (Amazon SNS) for email notification when a resource is flagged as noncompliant. C. Use AWS Trusted Advisor to monitor for compliance with all the requirements. Use Trusted Advisor preferences for email notification when a resource is flagged as noncompliant. D. Use AWS Config managed rules to monitor for compliance with the requirements, except for the SAP system parameters. Create AWS Config custom rules to validate the SAP system parameters. Use Amazon EventBridge (Amazon CloudWatch Events) and Amazon Simple Notification Service (Amazon SNS) for email notification when a resource is flagged as noncompliant.
D AWS Config is the most appropriate service for monitoring the compliance of AWS resources against defined configurations. However, it lacks built-in functionality to directly assess the compliance of SAP system parameters. Option D correctly addresses this by using managed rules for the readily-auditable AWS configurations and custom rules for the SAP-specific parameter checks. Amazon EventBridge and SNS provide the necessary automated notification system for deviations from the established standards. Option A is incorrect because AWS AppConfig is designed for application configuration management, not for auditing security compliance across various AWS services. Option B is incorrect because it fails to address the requirement for custom rules to monitor SAP system parameters. Option C is incorrect because AWS Trusted Advisor is primarily designed for best practice recommendations and not for detailed, automated compliance monitoring with custom rule creation capabilities.
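A hedged boto3 sketch of the notification wiring, assuming a pre-existing SNS topic whose resource policy allows events.amazonaws.com to publish to it (the topic ARN is hypothetical):

```python
import json
import boto3

events = boto3.client("events", region_name="eu-central-1")

# Forward noncompliance findings from AWS Config to an SNS topic.
events.put_rule(
    Name="sap-config-noncompliance",
    EventPattern=json.dumps({
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {"newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}},
    }),
)
events.put_targets(
    Rule="sap-config-noncompliance",
    Targets=[{
        "Id": "email-notify",
        "Arn": "arn:aws:sns:eu-central-1:111122223333:sap-compliance-alerts",
    }],
)
```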
91
A company is hosting its SAP workloads on AWS. An SAP solutions architect is designing a high availability architecture for the company’s production SAP S/4HANA and SAP BW/4HANA workloads. These workloads have the following requirements: * Redundant SAP application servers that consist of a primary application server (PAS) and an additional application server (AAS) * ASCS and ERS instances that use a failover cluster * Database high availability with a primary DB instance and a secondary DB instance How should the SAP solutions architect design the architecture to meet these requirements? A. Deploy ASCS and ERS cluster nodes in different subnets within the same Availability Zone. Deploy the PAS instance and AAS instance in different subnets within the same Availability Zone. Deploy the primary DB instance and secondary DB instance in different subnets within the same Availability Zone. Deploy all the components in the same VPC. B. Deploy ASCS and ERS cluster nodes in different subnets within the same Availability Zone. Deploy the PAS instance and AAS instance in different subnets within the same Availability Zone. Deploy the primary DB instance and secondary DB instance in different subnets within the same Availability Zone. Deploy the ASCS instance, PAS instance, and primary DB instance in one VPC. Deploy the ERS instance, AAS instance, and secondary DB instance in a different VPC. C. Deploy ASCS and ERS cluster nodes in different subnets across two Availability Zones. Deploy the PAS instance and AAS instance in different subnets across two Availability Zones. Deploy the primary DB instance and secondary DB instance in different subnets across two Availability Zones. Deploy all the components in the same VPC. D. Deploy ASCS and ERS cluster nodes in different subnets across two Availability Zones. Deploy the PAS instance and AAS instance in different subnets across two Availability Zones. Deploy the primary DB instance and secondary DB instance in different subnets across two Availability Zones. Deploy the ASCS instance, PAS instance, and primary DB instance in one VPC. Deploy the ERS instance, AAS instance, and secondary DB instance in a different VPC.
C The correct answer is C because it fulfills all the high availability requirements while maintaining a single VPC for easier management. Distributing components across multiple Availability Zones ensures redundancy and protection against AZ failures. Using different subnets within each AZ further enhances isolation and reduces the impact of a single subnet outage. Options A and B only utilize a single AZ, making them vulnerable to a single point of failure. Options B and D incorrectly use two VPCs, increasing complexity and hindering efficient network communication.
92
A company has deployed SAP HANA in the AWS Cloud. The company needs its SAP HANA database to be highly available. An SAP solutions architect has deployed the SAP HANA database in separate Availability Zones in a single AWS Region. SUSE Linux Enterprise High Availability Extension is configured with an overlay IP address. The overlay IP resource agent has the following IAM policy: [Image](https://img.examtopics.com/aws-certified-sap-on-aws-specialty-pas-c01/image2.png) During a test of failover, the SAP solutions architect finds that the overlay IP address does not change to the secondary Availability Zone. Which change should the SAP solutions architect make in the policy statement for Sid oip1 to fix this error? A. Change the Action element to ec2:CreateRoute. B. Change the Action element to ec2:ReplaceRoute. C. Change the Action element to ec2:ReplaceRouteTableAssociation. D. Change the Action element to ec2:ReplaceTransitGatewayRoute.
B The correct answer is B. The route for the overlay IP address already exists in the route table; during failover, the cluster's resource agent must update that route so the overlay IP points to the elastic network interface of the new primary node, which requires the `ec2:ReplaceRoute` action. Option A is incorrect because `ec2:CreateRoute` can only create a route that does not yet exist. Options C and D are incorrect because replacing route table associations or transit gateway routes is not part of this failover mechanism.
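A minimal boto3 sketch of the call the resource agent effectively makes during failover; all IDs and the overlay IP are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Re-point the existing overlay IP route at the ENI of the new primary
# node. ReplaceRoute, not CreateRoute, because the route already exists.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.200.10.1/32",        # overlay IP (outside VPC CIDR)
    NetworkInterfaceId="eni-0fedcba987654321",    # ENI of the new primary
)
```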
93
A company is planning to migrate its on-premises SAP applications to AWS. The applications are based on Windows operating systems. A file share stores the transport directories and third-party application data on the network-attached storage of the company’s on-premises data center. The company’s plan is to lift and shift the SAP applications and the file share to AWS. The company must follow AWS best practices for the migration. Which AWS service should the company use to host the transport directories and third-party application data on AWS? A. Amazon Elastic Block Store (Amazon EBS) B. AWS Storage Gateway C. Amazon Elastic File System (Amazon EFS) D. Amazon FSx for Windows File Server
D Amazon FSx for Windows File Server is the correct answer because the question specifies that the applications are based on Windows operating systems and the company is performing a lift-and-shift migration. Amazon FSx for Windows File Server is a fully managed service that provides file storage compatible with Windows environments, making it the ideal solution for this scenario. Option A, Amazon EBS, is incorrect because it's a block storage service, not a file storage service. Option B, AWS Storage Gateway, is incorrect because it's designed to connect on-premises storage to the cloud, not to directly host files in the cloud. Option C, Amazon EFS, is incorrect because while it's a file storage service, it's not optimized for Windows environments in the same way that Amazon FSx for Windows File Server is.
94
A company is running SAP ERP Central Component (SAP ECC) with a Microsoft SQL Server database on AWS. A solutions architect must attach an additional 1 TB Amazon Elastic Block Store (Amazon EBS) volume. The company needs to write the SQL Server database backups to this EBS volume before moving the database backups to Amazon S3 for long-term storage. Which EBS volume type will meet these requirements MOST cost-effectively? A. Throughput Optimized HDD (st1) B. Provisioned IOPS SSD (io2) C. General Purpose SSD (gp3) D. Cold HDD (sc1)
A. Throughput Optimized HDD (st1) Throughput Optimized HDD (st1) is the most cost-effective option for this scenario. Writing large database backups before staging them to Amazon S3 is a sequential, throughput-bound workload, which is exactly what st1 is optimized for. The SSD options (io2 and gp3) are faster but cost more per GiB, and the backup process does not need their low latency or high IOPS. Cold HDD (sc1) is priced lower than st1 but is designed for infrequently accessed data and offers too little throughput for backups that must be written relatively quickly.
95
An SAP basis architect is configuring high availability for a critical SAP system on AWS. The SAP basis architect is using an overlay IP address to route traffic to the subnets across multiple Availability Zones within an AWS Region for the system’s SAP HANA database. What should the SAP basis architect do to route the traffic to the Amazon EC2 instance of the active SAP HANA database? A. Edit the route in the route table of the VPC that includes the EC2 instance that runs SAP HANA. Specify the overlay IP address as the destination. Specify the private IP address of the EC2 instance as the target. B. Edit the inbound and outbound rules in the security group of the EC2 instance that runs SAP HANA. Allow traffic for SAP HANA specific ports from the overlay IP address. C. Edit the network ACL of the subnet that includes the EC2 instance that runs SAP HANA. Allow traffic for SAP HANA specific ports from the overlay IP address. D. Edit the route in the route table of the VPC that includes the EC2 instance that runs SAP HANA. Specify the overlay IP address as the destination. Specify the elastic network interface of the EC2 instance as the target.
D The correct answer is D because routing traffic to an EC2 instance based on an overlay IP address requires manipulating the route table at the VPC level. The overlay IP address is the destination for the route, and the target should be the Elastic Network Interface (ENI) of the active SAP HANA EC2 instance. This ensures that all traffic destined for the overlay IP address is directed to the correct instance. Option A is incorrect because using the private IP address as the target in the route table is not appropriate for an overlay IP address. The overlay IP is a virtual IP and is not directly associated with a private IP. Option B is incorrect because security groups manage inbound and outbound traffic based on ports and protocols, not routing at the network level. They don't handle directing traffic to different instances based on an overlay IP. Option C is incorrect because Network ACLs operate at the subnet level and are not the appropriate mechanism to handle routing traffic based on an overlay IP address. They filter traffic, not route it between instances.
96
A company has an SAP Business One system that runs on SUSE Linux Enterprise Server 12 SP3. The company wants to migrate the system to AWS. An SAP solutions architect selects a homogeneous migration strategy that uses AWS Application Migration Service (CloudEndure Migration). After the server migration process is finished, the SAP solutions architect launches an Amazon EC2 test instance from the R5 instance family. After a few minutes, the EC2 console reports that the test instance has failed an instance status check. Network connections to the instance are refused. How can the SAP solutions architect solve this problem? A. Reboot the instance to initiate instance migration to another host. B. Request an instance limit increase for the AWS Region where the test instance is being launched. C. Create a ticket for AWS Support that documents the test server instance ID. Wait for AWS to update the host of the R5 instance. D. Install the missing drivers on the source system. Wait for the completion of migration synchronization. Launch the test instance again.
D The correct answer is D because R5 instances are built on the AWS Nitro System and require Elastic Network Adapter (ENA) and NVMe drivers. If those drivers are missing from the migrated operating system, the instance fails its status check and refuses network connections. Option A is incorrect because rebooting the instance will not add the missing drivers. Option B is incorrect because an instance limit increase is irrelevant to a single instance failing a status check. Option C is incorrect because the host is not at fault; AWS Support cannot fix a guest operating system that lacks drivers. Installing the missing drivers on the source system (D), waiting for migration synchronization to complete, and then relaunching the test instance directly addresses the root cause of the problem.
97
A company plans to migrate its SAP NetWeaver deployment to AWS. The deployment runs on a Microsoft SQL Server database. The company plans to change the source database from SQL Server to SAP HANA as part of this process. Which migration tools or methods should an SAP solutions architect use to meet these requirements? (Choose two.) A. SAP HANA classical migration B. SAP HANA system replication C. SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move D. SAP HANA backup and restore E. SAP homogeneous system copy
A, C
A and C are the correct answers because they directly address the database migration from SQL Server to SAP HANA within the context of an SAP NetWeaver system move to AWS.

* **A. SAP HANA classical migration:** This method is suitable for migrating from a non-HANA database (like SQL Server) to SAP HANA. It uses the standard export/import tooling to perform the heterogeneous database migration.
* **C. SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move:** SUM DMO combines the database migration with the move to a new host in a single procedure, which covers the transition to AWS in this scenario.

B is incorrect because system replication is primarily for high availability and disaster recovery, not for initial database migration. D is incorrect because backup and restore works only between SAP HANA systems; a SQL Server backup cannot be restored into SAP HANA. E is incorrect because a homogeneous system copy is used for like-for-like copies; this scenario involves a database type change.
98
A company is running an SAP HANA database on AWS. The company wants to manage historical, infrequently accessed warm data for a native SAP HANA use case. An SAP solutions architect needs to recommend a solution that can provide online data storage in extended store, available for queries and updates. The solution must be an integrated component of the SAP HANA database and must allow the storage of up to five times more data in the warm tier than in the hot tier. Which solution will meet these requirements? A. Use Amazon Data Lifecycle Manager (Amazon DLM) with SAP Data Hub to move data in and out of the SAP HANA database to Amazon S3. B. Use an SAP HANA extension node. C. Use SAP HANA dynamic tiering as an optional add-on to the SAP HANA database. D. Use Amazon Data Lifecycle Manager (Amazon DLM) with SAP HANA spark controller so that SAP HANA can access the data through the Spark SQL SDA adapter.
C The correct answer is C because SAP HANA dynamic tiering is an integrated, optional add-on component of the SAP HANA database designed for managing warm data in a disk-based extended store. It keeps the data online and available for queries and updates, and it supports storing up to five times more data in the warm tier than in the hot tier. Options A and D involve moving data outside the SAP HANA database to Amazon S3, violating the requirement for an integrated solution. Option B, an SAP HANA extension node, is also a warm data option, but it is a scale-out node based on relaxed memory sizing rather than a disk-based extended store, and it does not provide the five-to-one warm-to-hot ratio described.
99
A company has an SAP environment running on AWS. They want to enhance security by restricting Amazon EC2 Instance Metadata Service (IMDS) to IMDSv2 only, but their current configuration supports both IMDSv1 and IMDSv2. The security enhancement must not cause an SAP outage. What should the company do *before* applying this security enhancement to their EC2 instances running the SAP environment? A. Ensure that the SAP kernel versions are 7.45 or later. B. Ensure that the EC2 instances are Nitro based. C. Ensure that the AWS Data Provider for SAP is installed on each EC2 instance. D. Stop the EC2 instances.
A The correct answer is A because SAP kernel versions 7.45 and later are compatible with IMDSv2. Applying the IMDSv2-only restriction without this compatibility would likely cause an SAP outage. Option B is incorrect because Nitro-based instances are not directly relevant to IMDSv1/v2 compatibility. Option C is incorrect as the AWS Data Provider for SAP is unrelated to IMDS version compatibility. Option D is incorrect because stopping the instances is unnecessary and disruptive; upgrading the SAP kernel is the necessary preparatory step.
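Once the kernel prerequisite is met, enforcing IMDSv2 is a single API call that does not require a reboot; a sketch with a placeholder instance ID:

```bash
# Require session tokens (IMDSv2) for all metadata requests on the instance.
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-endpoint enabled
```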
100
Business users are reporting timeouts during periods of peak query activity on an enterprise SAP HANA data mart. An SAP system administrator has discovered that at peak volume, the CPU utilization increases rapidly to 100% for extended periods on the x1.32xlarge Amazon EC2 instance where the database is installed. However, the SAP HANA database is occupying only 1,120 GiB of the available 1,952 GiB on the instance. I/O wait times are not increasing. Extensive query tuning and system tuning have not resolved this performance problem. Which solutions should the SAP system administrator use to improve the performance? (Choose two.) A. Reduce the global_allocation_limit parameter to 1,120 GiB. B. Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs. C. Move to a scale-out architecture for SAP HANA with at least three x1.16xlarge instances. D. Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes. E. Change to a supported compute optimized instance type for SAP HANA.
B, C
The correct answers are B and C because:

* **B. Migrate the SAP HANA database to an EC2 High Memory instance with a larger number of available vCPUs:** The problem is CPU saturation, not memory or I/O. A larger instance with more vCPUs directly addresses this issue by providing more processing power to handle peak query loads.
* **C. Move to a scale-out architecture for SAP HANA with at least three x1.16xlarge instances:** While scaling up (B) is generally preferred first, the persistent 100% CPU utilization suggests that even a larger single instance might eventually hit a bottleneck. A scale-out architecture distributes the workload across multiple instances, improving scalability and resilience.

The incorrect answers are:

* **A. Reduce the global_allocation_limit parameter to 1,120 GiB:** The problem is CPU utilization, not memory allocation. Reducing the memory limit will not address the CPU bottleneck.
* **D. Modify the Amazon Elastic Block Store (Amazon EBS) volume type from General Purpose to Provisioned IOPS for all SAP HANA data volumes:** This is unnecessary because I/O wait times are not increasing, indicating that storage is not the bottleneck.
* **E. Change to a supported compute optimized instance type for SAP HANA:** SAP does not certify compute optimized instance types for SAP HANA. Memory optimized instances (r*, x*, u*) are the appropriate choice.
101
A company is planning to migrate its SAP workloads to AWS. The company will use two VPCs. One VPC will be for production systems, and one VPC will be for non-production systems. The company will host the non-production systems and the primary node of all the production systems in the same Availability Zone. What is the MOST cost-effective way to establish a connection between the production systems and the non-production systems? A. Create an AWS Transit Gateway. Attach the VPCs to the transit gateway. Add the appropriate routes in the subnet route tables. B. Establish a VPC peering connection between the two VPCs. Add the appropriate routes in the subnet route tables. C. Create an internet gateway in each VPC. Use an AWS Site-to-Site VPN connection between the two VPCs. Add the appropriate routes in the subnet route tables. D. Set up an AWS Direct Connect connection between the two VPCs. Add the appropriate routes in the subnet route tables.
B
VPC peering is the most cost-effective way to connect two VPCs: there is no hourly charge for a peering connection, and because the production and non-production systems communicate within the same Availability Zone, data transfer over the peering connection is free. A transit gateway (A) adds attachment and data processing charges that are unnecessary for only two VPCs. Options C and D are incorrect because Site-to-Site VPN and Direct Connect are designed to connect on-premises networks to AWS, not two VPCs to each other, and both add cost and complexity.
102
An SAP engineer has deployed an SAP S/4HANA system on an Amazon EC2 instance running Linux. The SAP license key has been installed. After a while, the newly installed SAP instance presents an error indicating that the SAP license key is not valid because the SAP system’s hardware key changed. There have been no changes to the EC2 instance or its configuration. Which solution will permanently resolve this issue? A. Perform SAP kernel patching. B. Apply a new SAP license that uses a new hardware key. Install the new key. C. Set the SLIC_HW_VERSION Linux environment variable. D. Reboot the EC2 instance.
C The correct answer is C because the SAP license key is tied to a hardware key generated from the system's hardware and OS configuration. In virtualized environments like Amazon EC2, these parameters can change, invalidating the key. Setting the `SLIC_HW_VERSION` environment variable ensures a consistent hardware key, preventing future license invalidations. Options A and D are incorrect because they don't address the root cause of the hardware key changing. Option B is incorrect because it's a workaround, not a permanent solution, and repeatedly obtaining new licenses is inefficient.
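A rough sketch of the fix, assuming a Linux system; the `<sid>` placeholder and the variable's value are illustrative, and the correct value for a given release is documented in the relevant SAP Note:

```bash
# Persist the variable in the <sid>adm user's environment so the hardware key
# stays stable across restarts; the value shown is a placeholder.
echo 'export SLIC_HW_VERSION=<value-per-SAP-Note>' >> /home/<sid>adm/.profile

# Restart the SAP system afterward so the license check picks up the variable.
```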
103
A company is using SAP NetWeaver with Java on AWS. The company has updated its generation of Amazon EC2 instances to the most recent generation of EC2 instances. When the company tries to start SAP, the startup fails. The log indicates that the SAP license expired or is not valid. What is the reason for this issue? A. The instance ID changed as part of the EC2 generation change. B. The instance’s hypervisor changed from Xen to Nitro. C. The SAP Java Virtual Machine (SAP JVM) is not compatible with the new instance type. D. An EC2 generation change is not supported for SAP Java-based systems.
B The correct answer is B because a change in hypervisor (from Xen to Nitro) can alter the MAC address of the network interface. SAP licenses are often tied to the MAC address, rendering the existing license invalid after the hypervisor change. Option A is incorrect because the instance ID typically does not change with an EC2 generation update. Option C is incorrect because incompatibility between the SAP JVM and a new instance type would likely manifest as a different error, not specifically a license issue. Option D is incorrect because EC2 generation changes are supported; however, the underlying change in hypervisor and associated MAC address is what causes the license problem.
104
A company is planning to move all its SAP applications to Amazon EC2 instances in a VPC. Recently, the company signed a multiyear contract with a payroll software-as-a-service (SaaS) provider. Integration with the payroll SaaS solution is available only through public web APIs. Corporate security guidelines state that all outbound traffic must be validated against an allow list. The payroll SaaS provider provides only fully qualified domain name (FQDN) addresses and no IP addresses or IP address ranges. Currently, an on-premises firewall appliance filters FQDNs. The company needs to connect an SAP Process Orchestration (SAP PO) system to the payroll SaaS provider. What must the company do on AWS to meet these requirements? A. Add an outbound rule to the security group of the SAP PO system to allow the FQDN of the payroll SaaS provider and deny all other outbound traffic. B. Add an outbound rule to the network ACL of the subnet that contains the SAP PO system to allow the FQDN of the payroll SaaS provider and deny all other outbound traffic. C. Add an AWS WAF web ACL to the VPC. Add an outbound rule to allow the SAP PO system to connect to the FQDN of the payroll SaaS provider. D. Add an AWS Network Firewall to the VPC. Add an outbound rule to allow the SAP PO system to connect to the FQDN of the payroll SaaS provider.
D Security groups operate at the instance level and network ACLs operate at the subnet level. Neither of these can filter traffic based on FQDNs. AWS WAF (Web Application Firewall) is designed to protect web applications from attacks and does not support FQDN filtering for outbound traffic. Only AWS Network Firewall allows for filtering outbound traffic based on FQDNs, fulfilling the requirement of the allow list policy and the SaaS provider's limitation to only providing FQDNs. Therefore, deploying an AWS Network Firewall and configuring an outbound rule to allow only the specified FQDN is the correct solution.
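As an illustrative sketch, a stateful domain-list rule group for AWS Network Firewall could allow traffic only to the provider's FQDN; the rule group name, capacity, and example domain are placeholders:

```bash
# Allow HTTPS/HTTP traffic only to the SaaS provider's domain (and its
# subdomains, via the leading dot); all other domains are denied.
aws network-firewall create-rule-group \
  --rule-group-name allow-payroll-saas \
  --type STATEFUL \
  --capacity 100 \
  --rule-group '{
    "RulesSource": {
      "RulesSourceList": {
        "Targets": [".payroll-saas.example.com"],
        "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
        "GeneratedRulesType": "ALLOWLIST"
      }
    }
  }'
```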
105
A company uses an SAP application that runs performance-sensitive batch jobs which can be restarted safely. The SAP application has six application servers and runs reliably as long as application availability remains greater than 60%. The company wants to migrate this application to AWS using a cluster with two Availability Zones. How should the company distribute the SAP application servers to maintain system reliability? A. Distribute the SAP application servers equally across three partition placement groups. B. Distribute the SAP application servers equally across three Availability Zones. C. Distribute the SAP application servers equally across two Availability Zones. D. Create an Amazon EC2 Auto Scaling group across two Availability Zones. Set a minimum capacity value of 4.
D The correct answer is D because it maintains the required availability with automated recovery. With an Auto Scaling group spanning two Availability Zones and a minimum capacity of four, at least four of the six servers (67%, above the 60% threshold) are kept running: if instances fail, or an entire Availability Zone becomes unavailable, the Auto Scaling group automatically launches replacements in the surviving zone. Option A is incorrect because partition placement groups influence hardware placement within a zone and do not provide Availability Zone-level fault tolerance. Option B is incorrect because the design is constrained to two Availability Zones. Option C, while seemingly reasonable, lacks automated recovery; if servers fail, nothing replaces them, so availability can fall below 60%.
106
A company is starting a new project to implement an SAP landscape with multiple accounts that belong to multiple teams in the us-east-2 Region. These teams include procurement, finance, sales, and human resources. An SAP solutions architect has started designing this new landscape and the AWS account structures. The company wants to use automation as much as possible. The company also wants to secure the environment, implement federated access to accounts, centralize logging, and establish cross-account security audits. In addition, the company’s management team needs to receive a top-level summary of policies that are applied to the AWS accounts. What should the SAP solutions architect do to meet these requirements? A. Use AWS CloudFormation StackSets to apply SCPs to multiple accounts in multiple Regions. Use an Amazon CloudWatch dashboard to check the applied policies in the accounts. B. Use an AWS Elastic Beanstalk blue/green deployment to create IAM policies and apply them to multiple accounts together. Use an Amazon CloudWatch dashboard to check the applied policies in the accounts. C. Implement guardrails by using AWS CodeDeploy and AWS CodePipeline to deploy SCPs into each account. Use the CodePipeline deployment dashboard to check the applied policies in the accounts. D. Apply SCPs through AWS Control Tower. Use the AWS Control Tower integrated dashboard to check the applied policies in the accounts.
D
The correct answer is D because AWS Control Tower is built for exactly this set of requirements: it automates the setup of a secure multi-account environment, applies SCP-based guardrails, provides federated access, centralizes logging in a dedicated log archive account, supports cross-account security audits through an audit account, and presents an integrated dashboard that gives management a top-level summary of the policies applied across accounts. Options A, B, and C repurpose deployment tools (CloudFormation StackSets, Elastic Beanstalk, and CodeDeploy/CodePipeline) that are not designed to manage organization-wide guardrails and offer no integrated policy summary dashboard.
107
A company is running its SAP workloads on premises and needs to migrate the workloads to AWS. All the workloads are running on SUSE Linux Enterprise Server and Oracle Database. The company’s landscape consists of SAP ERP Central Component (SAP ECC), SAP Business Warehouse (SAP BW), and SAP NetWeaver systems. The company has a dedicated AWS Direct Connect connection between its on-premises environment and AWS. The company needs to migrate the systems to AWS with the least possible downtime. Which migration solution will meet these requirements? A. Use SAP Software Provisioning Manager to perform an export of the systems. Copy the export to Amazon S3. Use SAP Software Provisioning Manager to perform an import of the systems to SUSE Linux Enterprise Server and Oracle Database on AWS. B. Use SAP Software Provisioning Manager to perform parallel export/import of the systems to migrate the systems to SUSE Linux Enterprise Server and Oracle Database on AWS. C. Use SAP Software Provisioning Manager to perform parallel export/import of the systems to migrate the systems to Oracle Enterprise Linux and Oracle Database on AWS. D. Use SAP Software Provisioning Manager to perform an export of the systems. Copy the export to Amazon S3. Use SAP Software Provisioning Manager to perform an import of the systems to Oracle Enterprise Linux and Oracle Database on AWS.
C The correct answer is C because Oracle databases for SAP workloads on AWS require Oracle Enterprise Linux. Options A and B are incorrect because they use SUSE Linux, which is not supported by Oracle for SAP on AWS. Option D is incorrect because it uses a sequential export/import process, which would lead to more downtime compared to the parallel approach in option C. The parallel export/import method in option C minimizes downtime.
108
A company is designing a disaster recovery (DR) strategy for an SAP HANA database that runs on an Amazon EC2 instance in a single Availability Zone. The company can tolerate a long RTO and an RPO greater than zero if it means that the company can save money on its DR process. The company has configured an Amazon CloudWatch alarm to automatically recover the EC2 instance if the instance experiences an unexpected issue. The company has set up AWS Backint Agent for SAP HANA to save the backups into Amazon S3. What is the MOST cost-effective DR option for the company's SAP HANA database? A. Set up AWS CloudFormation to automatically launch a new EC2 instance for the SAP HANA database in a second Availability Zone from backups that are stored in Amazon S3. When the SAP HANA database is operational, perform a database restore by using the standard SAP HANA restore process. B. Launch a secondary EC2 instance for the SAP HANA database on a less powerful EC2 instance type in a second Availability Zone. Configure SAP HANA system replication with the preload option turned off. C. Launch a secondary EC2 instance for the SAP HANA database on an equivalent EC2 instance type in a second Availability Zone. Configure SAP HANA system replication with the preload option turned on. D. Set up AWS CloudFormation to automatically launch a new EC2 instance for the SAP HANA database in a second Availability Zone from backups that are stored in Amazon Elastic Block Store (Amazon EBS). When the SAP HANA database is operational, perform a database restore by using the standard SAP HANA restore process.
A The most cost-effective option is A because it leverages backups stored in S3, which is generally a less expensive storage solution than using EBS (option D) or maintaining a continuously replicated secondary instance (options B and C). Options B and C are more expensive because they require a constantly running secondary instance, even if it's less powerful (option B). The company's tolerance for a longer RTO and RPO makes restoring from backups a viable and cost-effective solution.
109
A company is hosting an SAP HANA database on AWS. The company is automating operational tasks, including backup and system refreshes. The company wants to use SAP HANA Studio to perform data backup of an SAP HANA tenant database to a backint interface. The SAP HANA database is running in multi-tenant database container (MDC) mode. The company receives the following error message during an attempt to perform the backup: [Image](https://img.examtopics.com/aws-certified-sap-on-aws-specialty-pas-c01/image3.png) What should an SAP solutions architect do to resolve this issue? A. Set the execute permission for AWS Backint agent binary aws-backint-agent and for the launcher script aws-backint-agent-launcher.sh in the installation directory. B. Verify the installation steps. Create symbolic links (symlinks). C. Ensure that the catalog_backup_using_backint SAP HANA parameter is set to true. Ensure that the data_backup_parameter_file and log_backup_parameter_file parameters have the correct path location in the global.ini file. D. Add the SAP HANA system to SAP HANA Studio. Select multiple container mode, and then try to initiate the backup again.
D The correct answer is D because the SAP HANA database runs in multi-tenant database container (MDC) mode. When such a system is added to SAP HANA Studio, the multiple container mode option must be selected so that SAP HANA Studio can address the system database and its tenants; without it, tenant backup attempts through the backint interface fail with this error. Options A, B, and C relate to AWS Backint Agent permissions and configuration, which would surface as different errors and are not the root cause indicated by this message.
110
A company wants to implement SAP HANA on AWS with the Multi-AZ deployment option by using AWS Launch Wizard for SAP. The solution will use SUSE Linux Enterprise High Availability Extension for the high availability deployment. An SAP solutions architect must ensure that all the prerequisites are met and that the user inputs to start the guided deployment of Launch Wizard are valid. Which combination of steps should the SAP solutions architect take to meet these requirements? (Choose two.) A. Before starting the Launch Wizard deployment, create the underlying Amazon Elastic Block Store (Amazon EBS) volume types to use for SAP HANA data and log volumes based on the performance requirements. B. Use a value for the PaceMakerTag parameter that is not used by any other Amazon EC2 instances in the AWS Region where the system is being deployed. C. Ensure that the virtual hostname for the SAP HANA database that is used for the SUSE Linux Enterprise High Availability Extension configuration is not used in any other deployed accounts. D. Ensure that the VirtualIPAddress parameter is outside the VPC CIDR and is not being used in the route table that is associated with the subnets where primary and secondary SAP HANA instances will be deployed. E. Before starting the Launch Wizard deployment, set up the SUSE Linux Enterprise High Availability Extension network configuration and security group.
B, D
Option B is required because the cluster agents identify the cluster nodes by tag; the PaceMakerTag value must not be shared with any other EC2 instance in the Region, or traffic could be routed to the wrong instance. Option D is required because the overlay IP address used as the virtual IP must lie outside the VPC CIDR block and must not already appear in the route table associated with the subnets of the primary and secondary SAP HANA instances. Options A and E are incorrect because Launch Wizard itself provisions the EBS volumes and the high availability network and security group configuration. Option C is incorrect because the virtual hostname only needs to be consistent within the deployment, not unique across other accounts.
111
An SAP solutions architect is leading the SAP basis team for a company. The company’s SAP landscape includes SAP HANA database instances for the following systems: sandbox, development, quality assurance test (QAT), system performance test (SPT), and production. The sandbox, development, and QAT systems are running on Amazon EC2 On-Demand Instances. The SPT and production systems are running on EC2 Reserved Instances. All the EC2 instances are using Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes. The entire development team is in the same time zone and works from 8 AM to 6 PM. The sandbox system is for research and testing that are not critical. The SPT and production systems are business critical. The company runs load-testing jobs and stress-testing jobs on the QAT systems overnight to reduce testing duration. The company wants to optimize infrastructure cost for the existing AWS resources. How can the SAP solutions architect meet these requirements with the LEAST amount of administrative effort? A. Use a Spot Fleet instead of the Reserved Instances and On-Demand Instances. B. Use Amazon EventBridge (Amazon CloudWatch Events) and Amazon CloudWatch alarms to stop the development and sandbox EC2 instances from 7 PM every night to 7 AM the next day. C. Make the SAP basis team available 24 hours a day, 7 days a week to use the AWS CLI to stop and start the development and sandbox EC2 instances manually. D. Change the EBS volume type to Throughput Optimized HDD (st1) for the /hana/data and /hana/log file systems for the production and non-production SAP HANA databases.
B The best option is B because it automates the process of stopping and starting the non-critical instances using Amazon EventBridge and CloudWatch. This requires the least administrative effort compared to manual intervention (C) or changing the entire storage type (D). Option A is not the best choice because it introduces the risk of instance interruptions if there is insufficient spot instance capacity available. Option C requires manual intervention and significant administrative overhead. Option D would negatively impact performance and is not a cost-effective solution for non-critical systems.
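One possible automation sketch uses Amazon EventBridge Scheduler's universal target (one of several ways to implement option B's schedule) to stop the instances at 7 PM; a mirror schedule calling ec2:startInstances would handle 7 AM. The role ARN and instance ID are placeholders:

```bash
# Stop the development and sandbox instances every night at 19:00.
# Schedules run in UTC unless --schedule-expression-timezone is set.
aws scheduler create-schedule \
  --name stop-sap-dev-sandbox \
  --schedule-expression "cron(0 19 * * ? *)" \
  --flexible-time-window Mode=OFF \
  --target '{
    "Arn": "arn:aws:scheduler:::aws-sdk:ec2:stopInstances",
    "RoleArn": "arn:aws:iam::111122223333:role/scheduler-ec2-stop",
    "Input": "{\"InstanceIds\": [\"i-0123456789abcdef0\"]}"
  }'
```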
112
A company is planning to migrate its on-premises SAP ERP Central Component (SAP ECC) system on SAP HANA to AWS. Each month, the system experiences two peaks in usage. The first peak is on the 21st day of the month when the company runs payroll. The second peak is on the last day of the month when the company processes and exports credit data. Both peak workloads are of high importance and cannot be rescheduled. The current SAP ECC system has six application servers, all of a similar size. During normal operation outside of peak usage, four application servers would suffice. Which purchasing option will meet the company’s requirements MOST cost-effectively on AWS? A. Four Reserved Instances and two Spot Instances B. Six On-Demand Instances C. Six Reserved Instances D. Four Reserved Instances and two On-Demand Instances
D The correct answer is D because it balances cost savings with necessary capacity. Four Reserved Instances cover the base workload, providing a significant cost discount compared to on-demand pricing. Two On-Demand Instances are added only for the two peak usage days, addressing the high-importance, non-reschedulable workloads without the commitment or potential risks associated with Reserved Instances for this variable use. Option A is incorrect because Spot Instances are unpredictable and could be interrupted during peak processing, jeopardizing critical payroll and credit data operations. Option B is too expensive as it continuously uses six instances, even during periods of lower demand. Option C is also unnecessarily expensive since only four instances are required for regular operation.
113
A company hosts an SAP HANA database on an Amazon EC2 instance in the us-east-1 Region. The company needs to implement a disaster recovery (DR) site in the us-west-1 Region. The company needs a cost-optimized solution that offers a guaranteed capacity reservation, an RPO of less than 30 minutes, and an RTO of less than 30 minutes. Which solution will meet these requirements? A. Deploy a single EC2 instance to support the secondary database in us-west-1 with additional storage. Use this secondary database instance to support QA and production. Configure the primary SAP HANA database in us-east-1 to constantly replicate the data to the secondary SAP HANA database in us-west-1 by using SAP HANA system replication with preload off. During DR, shut down the QA SAP HANA instance and restart the production services at the secondary site. B. Deploy a secondary staging server on an EC2 instance in us-west-1. Use CloudEndure Disaster Recovery to replicate changes at the database level from us-east-1 to the secondary staging server on an ongoing basis. During DR, initiate cutover, increase the size of the secondary EC2 instance to match the primary EC2 instance, and start the secondary EC2 instance. C. Set up the primary SAP HANA database in us-east-1 to constantly replicate the data to a secondary SAP HANA database in us-west-1 by using SAP HANA system replication with preload on. Keep the secondary SAP HANA instance as a hot standby that is ready to take over in case of failure. D. Create an SAP HANA database AMI by using Amazon Elastic Block Store (Amazon EBS) snapshots. Replicate the database and log backup files from a primary Amazon S3 bucket in us-east-1 to a secondary S3 bucket in us-west-1. During DR, launch the EC2 instance in us-west-1 based on AMIs that are replicated. Update host information. Download database and log backups from the secondary S3 bucket. Perform a point-in-time recovery.
A Option A is correct because it meets the objectives while optimizing cost: a running secondary instance in us-west-1 provides guaranteed capacity, SAP HANA system replication with preload off delivers near-real-time replication that satisfies the sub-30-minute RPO, and repurposing the instance for QA keeps it productive outside of DR. During a disaster, shutting down the QA workload and restarting production services fits within the 30-minute RTO. Option B does not guarantee capacity at the required size; the secondary instance must be resized at cutover, which risks missing the RTO if capacity is unavailable. Option C meets the objectives but is more expensive, because the hot standby does nothing but wait. Option D relies on AMIs and backup restores, which would likely exceed both the RPO and the RTO.
114
An SAP technology consultant needs to scale up a primary application server (PAS) instance. The PAS currently runs on a c5a.xlarge Amazon EC2 instance. The SAP technology consultant needs to change the instance type to c5a.2xlarge. How can the SAP technology consultant meet this requirement? A. Stop the complete SAP system. Stop the EC2 instance. Use the AWS Management Console or the AWS CLI to change the instance type. Start the EC2 instance. Start the complete SAP system. B. While SAP is running, use the AWS Management Console or the AWS CLI to change the instance type without stopping the EC2 instance. C. Stop the complete SAP system. Terminate the EC2 instance. Use the AWS Management Console or the AWS CLI to change the instance type. Start the EC2 instance. Start the complete SAP system. D. While SAP is running, log in to the EC2 instance. Run the following AWS CLI command: aws ec2 modify-instance-attribute --instance-id --instance-type "{\"Value\": \"c5a.2xlargel\"}".
A A is correct because the type of an EBS-backed EC2 instance can be changed only while the instance is stopped. Since the PAS runs on the instance, the SAP system must be stopped cleanly before stopping the instance to avoid data corruption or system instability; after the instance type is changed, the instance and then the SAP system can be restarted. B is incorrect because the EC2 API does not allow changing the instance type of a running instance. C is incorrect because terminating the instance permanently deletes it; the goal is to scale up, not to destroy and recreate the instance. D is incorrect because the command contains a typo ("c5a.2xlargel" instead of "c5a.2xlarge"), omits the instance ID value, and would in any case fail against a running instance.
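As a sketch of the supported procedure, assuming SAP has already been stopped on the host and using a placeholder instance ID:

```bash
# Stop the instance and wait until it is fully stopped.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type while the instance is stopped.
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --instance-type '{"Value": "c5a.2xlarge"}'

# Start the instance again, then restart the SAP system.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
```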
115
An SAP solutions architect is using AWS Systems Manager Distributor to install the AWS Data Provider for SAP on production SAP application servers and SAP HANA database servers. The SAP application servers and the SAP HANA database servers are running on Red Hat Enterprise Linux. The SAP solutions architect chooses instances manually in Systems Manager Distributor and schedules installation. The installation fails with an access and authorization error related to Amazon CloudWatch and Amazon EC2 instances. There is no error related to AWS connectivity. What should the SAP solutions architect do to resolve the error? A. Install the CloudWatch agent on the servers before installing the AWS Data Provider for SAP. B. Download the AWS Data Provider for SAP installation package from AWS Marketplace. Use an operating system super user to install the agent manually or through a script. C. Create an IAM role. Attach the appropriate policy to the role. Attach the role to the appropriate EC2 instances. D. Wait until Systems Manager Agent is fully installed and ready to use on the EC2 instances. Use Systems Manager Patch Manager to perform the installation.
C The correct answer is C because the error message indicates an access and authorization problem with Amazon CloudWatch and Amazon EC2 instances, not a connectivity issue or a missing agent. The AWS Data Provider for SAP needs appropriate IAM permissions to access these services. Creating an IAM role with the necessary permissions and attaching it to the EC2 instances grants the required access. Option A is incorrect because while the CloudWatch agent is used for monitoring, the error is specifically about authorization, not the absence of the agent. Option B is incorrect because it focuses on manual installation, which is not the root cause of the authorization problem. Option D is incorrect because the Systems Manager Agent is unrelated to the access and authorization error; the problem lies in the IAM permissions required by the Data Provider.
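A sketch of the fix, assuming a role named sap-data-provider-role already exists with an EC2 trust policy and an associated instance profile; the role and policy names are placeholders, and the three actions are those the error points to:

```bash
# Write a least-privilege policy covering the actions the Data Provider needs.
cat > dataprovider-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Attach the policy inline to the role, then associate the role's instance
# profile with each SAP EC2 instance.
aws iam put-role-policy \
  --role-name sap-data-provider-role \
  --policy-name sap-data-provider-access \
  --policy-document file://dataprovider-policy.json
```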
116
A company is planning to move to AWS. The company wants to set up sandbox and test environments on AWS to perform proofs of concept (POCs). Development and production environments will remain on premises until the POCs are completed. At the company’s on-premises location, SAProuter is installed on the same server as SAP Solution Manager. The company uses SAP Solution Manager to monitor the entire landscape. The company uses SAProuter to connect to SAP Support. The on-premises SAP Solution Manager instance must monitor the performance and server metrics of the newly created POC systems on AWS. The existing SAProuter must be able to report any issues to SAP. What should an SAP solutions architect do to set up this hybrid infrastructure MOST cost-effectively? A. Install a new SAP Solution Manager instance and a new SAProuter instance in the AWS environment. Connect the POC systems to these new instances. Use these new instances in parallel with the on-premises SAP Solution Manager instance and the on-premises SAProuter instance. B. Install a new SAP Solution Manager instance and a new SAProuter instance in the AWS environment. Install the Amazon CloudWatch agent on all on-premises instances. Push the monitoring data to the new SAP Solution Manager instance. Connect all on-premises systems and POC systems on AWS to the new SAP Solution Manager instance and the new SAProuter instance. Remove the on-premises SAP Solution Manager instance and the on-premises SAProuter instance. Use the new instances on AWS. C. Use AWS Site-to-Site VPN to connect the on-premises network to the AWS environment. Connect the POC systems on AWS to the on-premises SAP Solution Manager instance and the on-premises SAProuter instance. D. Add the POC systems on AWS to the existing SAP Transport Management System that is configured in the on-premises SAP systems.
C The most cost-effective solution is to use AWS Site-to-Site VPN to connect the on-premises network to the AWS environment and connect the POC systems to the existing on-premises SAP Solution Manager and SAProuter. This avoids the cost of setting up and maintaining additional SAP Solution Manager and SAProuter instances in AWS. Options A and B involve unnecessary additional infrastructure costs. Option D is irrelevant to monitoring and connecting to SAP Support.
117
A company has moved all of its SAP workloads to AWS. During peak business hours, end users are reporting performance issues because work processes are going into PRIV mode on an SAP S/4HANA system. An SAP support engineer indicates that SAP cannot provide support for this issue because some specific performance metrics are not available. Which combination of actions must the company perform to comply with SAP support requirements? (Choose three.) A. Buy an SAP license from AWS. Ensure that the SAP license is installed. B. Select only an AWS Migration Acceleration Program (MAP) certified managed service provider (MSP). C. Enable detailed monitoring for Amazon CloudWatch on each Amazon EC2 instance where SAP workloads are running. D. Install, configure, and run the AWS Data Provider for SAP on each Amazon EC2 instance where SAP workloads are running. E. Integrate AWS Systems Manager with SAP Solution Manager to provide alerts about SAP parameter configuration drift. F. Enable SAP enhanced monitoring through a SAPOSCOL enhanced function.
C, D, F
Options C, D, and F are correct because they address the core problem: the performance metrics that SAP support requires are not being collected.

* **C. Enable detailed monitoring for Amazon CloudWatch on each Amazon EC2 instance where SAP workloads are running:** Detailed monitoring provides the 1-minute EC2 metrics that SAP support requires to troubleshoot the issue.
* **D. Install, configure, and run the AWS Data Provider for SAP on each Amazon EC2 instance where SAP workloads are running:** The Data Provider collects AWS infrastructure metrics and makes them available to the SAP monitoring stack for analysis.
* **F. Enable SAP enhanced monitoring through a SAPOSCOL enhanced function:** Enhanced monitoring exposes virtualization and host metrics within the SAP system itself, providing the remaining data SAP support needs.

Options A, B, and E are incorrect:

* **A. Buy an SAP license from AWS. Ensure that the SAP license is installed:** Licensing is unrelated to the missing performance metrics; the company already has SAP licenses.
* **B. Select only an AWS Migration Acceleration Program (MAP) certified managed service provider (MSP):** While helpful for migrations, this does not supply the performance metrics SAP support requires.
* **E. Integrate AWS Systems Manager with SAP Solution Manager to provide alerts about SAP parameter configuration drift:** This addresses configuration issues, not the immediate need for performance data to resolve the reported problem.
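For example, detailed CloudWatch monitoring (option C) can be enabled per instance with one call; the instance IDs are placeholders:

```bash
# Switch the instances from 5-minute basic to 1-minute detailed monitoring.
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0 i-0fedcba9876543210
```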
118
A company needs to implement high availability for its SAP S/4HANA system on AWS. The company will use a SUSE Linux Enterprise Server clustering solution in private subnets across two Availability Zones. An SAP solutions architect must ensure that the solution can route traffic to the active SAP instance in this clustered configuration. What should the SAP solutions architect do to meet these requirements? A. Implement the SAP cluster solution by using a secondary private IP address. Reassign the secondary private IP address from one network interface to another network interface in the event of any failure that affects the primary instance. B. Implement the SAP cluster solution by using an Elastic IP address. Mask the failure of an instance or software by rapidly remapping the address to another instance in the account. C. Implement the SAP cluster solution by using a public IP address. Use this public IP address for communication between the instances and the internet. D. Implement the SAP cluster solution by using an overlay IP address that is outside the CIDR block of the VPC. Use overlay IP address routing to dynamically update the route table to point to the active node and provide external access by using a Network Load Balancer or AWS Transit Gateway.
D The correct answer is D because using an overlay IP address outside the VPC's CIDR block, combined with dynamic route table updates, provides the necessary high availability for the SAP S/4HANA system. This setup allows for seamless failover to the active node in the cluster without disrupting external access. Option A is incorrect because reassigning a secondary private IP address manually is not a scalable or automated solution for high availability. Option B is incorrect because while Elastic IPs can be remapped, this process is not integrated with the cluster's failover mechanism and may introduce downtime. Option C is incorrect because using a public IP address for inter-instance communication within a private subnet is not best practice and increases security risks.
119
An SAP specialist is building an SAP environment. The SAP environment contains Amazon EC2 instances that run in a private subnet in a VPC. The VPC includes a NAT gateway. The SAP specialist is setting up IBM Db2 high availability disaster recovery for the SAP cluster. After configuration of overlay IP address routing, traffic is not routing to the database EC2 instances. What should the SAP specialist do to resolve this issue? A. Open a security group for SAP ports to allow traffic on port 443. B. Create route table entries to allow traffic from the database EC2 instances to the NAT gateway. C. Turn off the source/destination check for the database EC2 instances. D. Create an IAM role that has permission to access network traffic. Associate the role with the database EC2 instances.
C
The correct answer is C because overlay IP routing delivers packets whose destination (the overlay IP) does not match any IP address assigned to the instance's network interface. By default, EC2 performs source/destination checks and drops such traffic, so the check must be turned off on the database EC2 instances. Option A is incorrect because port 443 is not related to the database traffic flow. Option B is incorrect because the NAT gateway handles outbound internet traffic, not inbound overlay IP routing. Option D is incorrect because IAM controls API authorization, not network traffic.
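The fix is a single attribute change per instance; a sketch with a placeholder instance ID:

```bash
# Disable the source/destination check so the node can receive traffic
# addressed to the overlay IP.
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```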
120
A company wants to migrate its SAP landscape from on premises to AWS. What are the MINIMUM requirements that the company must meet to ensure full support of SAP on AWS? (Choose three.) A. Enable detailed monitoring for Amazon CloudWatch on each instance in the landscape. B. Deploy the infrastructure by using SAP Cloud Appliance Library. C. Install, configure, and run the AWS Data Provider for SAP on each instance in the landscape. D. Protect all production instances by using Amazon EC2 automatic recovery. E. Deploy the infrastructure for the SAP landscape by using AWS Launch Wizard for SAP. F. Deploy the SAP landscape on an AWS account that has either an AWS Business Support plan or an AWS Enterprise Support plan.
A, C, F
These are the published minimum prerequisites for full SAP support on AWS: detailed CloudWatch monitoring on every instance (A) and the AWS Data Provider for SAP on every instance (C) make the infrastructure metrics that SAP support requires available, and an AWS Business Support or Enterprise Support plan (F) ensures that AWS can engage on infrastructure issues. Options B and E describe optional deployment tools, not support requirements, and option D (EC2 automatic recovery) is a recommended resilience practice rather than a support prerequisite.
121
A company wants to improve the RPO and RTO for its SAP disaster recovery (DR) solution by running the DR solution on AWS. The company is running SAP ERP Central Component (SAP ECC) on SAP HANA. The company has set an RPO of 15 minutes and an RTO of 4 hours. The production SAP HANA database is running on a physical appliance that has x86 architecture. The appliance has 1 TB of memory, and the SAP HANA global allocation limit is set to 768 GB. The SAP application servers are running as VMs on VMware, and they store data on an NFS file system. The company does not want to change any existing SAP HANA parameters that are related to data and log backup for its on-premises systems. What should an SAP solutions architect do to meet the DR objectives MOST cost-effectively? A. For the SAP HANA database, change the log backup frequency to 5 minutes. Move the data and log backups to Amazon S3 by using the AWS CLI or AWS DataSync. Launch the SAP HANA database. For the SAP application servers, export the VMs as AMIs by using the VM Import/Export feature from AWS. For NFS file shares /sapmnt and /usr/sap/trans, establish real-time synchronization from DataSync to Amazon Elastic File System (Amazon EFS). B. For the SAP HANA database, change the log backup frequency to 5 minutes. Move the data and log backups to Amazon S3 by using AWS Storage Gateway File Gateway. For the SAP application servers, export the VMs as AMIs by using the VM Import/Export feature from AWS. For NFS file shares /sapmnt and /usr/sap/trans, establish real-time synchronization from AWS DataSync to Amazon Elastic File System (Amazon EFS). C. For the SAP HANA database, SAP application servers, and NFS file shares, use CloudEndure Disaster Recovery to replicate the data continuously from on premises to AWS. Use CloudEndure Disaster Recovery to launch target instances in the event of a disaster. D. For the SAP HANA database, use a smaller SAP certified Amazon EC2 instance. Use SAP HANA system replication with ASYNC replication mode to replicate the data continuously from on premises to AWS. For the SAP application servers, use CloudEndure Disaster Recovery for continuous data replication. For NFS file shares /sapmnt and /usr/sap/trans, establish real-time synchronization from AWS DataSync to Amazon Elastic File System (Amazon EFS).
D The best option is D because it meets the RPO and RTO requirements cost-effectively without requiring changes to existing SAP HANA backup parameters. Option D utilizes asynchronous replication (ASYNC) for the SAP HANA database, which is suitable for achieving a 15-minute RPO. CloudEndure is used for the application servers, providing continuous replication. Finally, DataSync ensures real-time synchronization of the NFS file shares. Options A and B are incorrect because they require changing the SAP HANA log backup frequency, violating the requirement to not modify existing parameters. Option C is less cost-effective because CloudEndure replication for the entire environment, including the database, may be more expensive than using a combination of ASYNC replication and CloudEndure for only the application servers. Additionally, the consistency of the database replication with CloudEndure might not be guaranteed to meet the specified RPO.
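For the database leg of option D, registering the AWS-side secondary with asynchronous replication might look like the following sketch, run as the `<sid>adm` user on the secondary after the initial data load; the host name, instance number, and site name are placeholders:

```bash
# Register this host as the system replication secondary of the on-premises
# primary, using asynchronous replication to meet the 15-minute RPO.
hdbnsutil -sr_register \
  --remoteHost=onprem-hana-primary \
  --remoteInstance=00 \
  --replicationMode=async \
  --operationMode=logreplay \
  --name=AWSDR
```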
122
A company is migrating its SAP landscape from on-premises VMware to AWS. The company has already migrated its sandbox, development, and QA systems to AWS, but its production system remains on-premises. They need a solution to synchronize shared file systems (including `/trans`, `/software`, and third-party integration mounts) between the on-premises NFS environment and their AWS Amazon EFS file system. The synchronization must be bidirectional, occur four times daily, and be encrypted. Which solution best meets these requirements? A. Write an rsync script scheduled via cron on the on-premises VMware servers to transfer data from on-premises to AWS. B. Install an AWS DataSync agent on the on-premises VMware platform and use the DataSync endpoint to synchronize between the on-premises NFS server and Amazon EFS on AWS. C. Order an AWS Snowcone device to transfer data between the on-premises servers and AWS. D. Set up a separate AWS Direct Connect connection for synchronization between the on-premises servers and AWS.
B B is correct because AWS DataSync is specifically designed for efficient and secure data transfer between on-premises systems and AWS services, including NFS and Amazon EFS. It supports scheduling and encryption, fulfilling all the requirements. A is incorrect because rsync, while useful for file synchronization, lacks the built-in features for scheduling and encryption required for this scenario at the scale and frequency needed. It would require significant custom scripting and potentially additional tools to meet all the requirements. C is incorrect because AWS Snowcone is suitable for transferring large amounts of data offline but is not appropriate for frequent, scheduled, bidirectional synchronization. It is not a real-time solution. D is incorrect because while AWS Direct Connect provides a dedicated network connection, it doesn't directly handle file synchronization. It would require additional software and configuration to achieve the necessary functionality.
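An abridged AWS CLI sketch of one direction of the setup; all ARNs, IDs, and host names are placeholders, and a second task with source and destination swapped covers the AWS-to-on-premises leg:

```bash
# 1) On-premises NFS location, reached through the DataSync agent.
aws datasync create-location-nfs \
  --server-hostname onprem-nfs.example.com \
  --subdirectory /trans \
  --on-prem-config AgentArns=arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0

# 2) Amazon EFS location in the target VPC.
aws datasync create-location-efs \
  --efs-filesystem-arn arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0 \
  --ec2-config SubnetArn=arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0,SecurityGroupArns=arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0

# 3) Task that runs every 6 hours, i.e. four times daily; DataSync encrypts
#    data in transit by default.
aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0123456789abcdef0 \
  --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0fedcba9876543210 \
  --schedule 'ScheduleExpression=rate(6 hours)' \
  --name nfs-to-efs-sync
```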
123
A company is running its SAP applications on Oracle Database. Oracle Database is hosted on physical servers that are running SUSE Linux Enterprise Server. Because of compliance requirements, the company cannot install any additional software on its on-premises database servers. The company needs to migrate the SAP landscape to AWS and must continue to use Oracle Database. Which migration solution should the company use to meet these requirements? A. AWS Server Migration Service (AWS SMS) B. AWS Application Migration Service (CloudEndure Migration) C. SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move D. Oracle Database replication with Oracle Data Guard
D
The correct answer is D because the compliance requirement prohibits installing any additional software on the on-premises database servers. AWS Application Migration Service (B) requires a replication agent on the source servers, AWS SMS (A) targets virtual machines through a connector and does not fit these physical servers, and SUM DMO with System Move (C) requires running SAP migration tooling on the source system. Oracle Data Guard (D) is native Oracle Database functionality, so the company can replicate the database to a standby on AWS without installing anything extra on the source servers.
124
A company wants to migrate its on-premises servers to AWS. These servers include SAP ERP Central Component (SAP ECC) on Oracle Database. The company is running SAP ECC application servers and Oracle Database servers on AIX. The company must migrate the SAP workloads to AWS with minimal changes. Which solution will meet these requirements? A. Perform a heterogeneous migration for SAP on AWS. Specify the SAP ECC application servers to run on SUSE Linux Enterprise Server. Specify Oracle Database to run on Oracle Enterprise Linux on a Dedicated Host. B. Perform a homogeneous migration for SAP on AWS. Specify the SAP ECC application servers and Oracle Database to run on AIX. C. Perform a heterogeneous migration for SAP on AWS. Specify the SAP ECC application servers and Oracle Database to run on Oracle Enterprise Linux. D. Perform a heterogeneous migration for SAP on AWS. Specify the SAP ECC application servers and Oracle Database to run on Windows.
C The correct answer is C because AWS does not support AIX (EC2 offers x86 and Arm instances, not IBM POWER), so the operating system must change and the migration is necessarily heterogeneous. Oracle Database for SAP workloads on AWS is supported only on Oracle Enterprise Linux, and option C moves both the database and the SAP ECC application servers to that operating system, minimizing changes while maintaining compatibility. Option A is incorrect because splitting the landscape across SUSE Linux and a Dedicated Host adds complexity without addressing any requirement. Option B is incorrect because AIX is not available on AWS, so a homogeneous migration is impossible. Option D is incorrect because Windows is neither required nor the supported choice for this SAP on Oracle configuration.
125
A company uses Amazon EFS to store media files, database table exports/imports, and files from third-party tools for its SAP applications deployed across multiple Availability Zones in the same AWS Region. These EFS file systems have grown to multiple terabytes. The company needs to retrieve files quickly for installations, updates, and system refreshes, but wants to optimize storage costs with the LEAST administrative overhead. Which solution best meets this requirement? A. Manually scan files to identify and delete unnecessary files. B. Move the files to Amazon S3 Glacier Deep Archive. C. Apply a lifecycle policy on the Amazon EFS files to move them to EFS Standard-Infrequent Access (Standard-IA). D. Move the files to Amazon S3 Glacier. Apply an S3 Glacier vault lock policy to the files.
C C is correct because it automates the process of moving infrequently accessed files to a cheaper storage tier (EFS Standard-IA) within the EFS file system. This minimizes administrative overhead compared to manually scanning and deleting files (A), which is time-consuming and error-prone. Options B and D involve moving data to S3 Glacier and Glacier Deep Archive, respectively, which are designed for archival storage with slower retrieval times, directly conflicting with the requirement for quick access during installations, updates, and system refreshes. The Glacier vault lock in option D further restricts access and is inappropriate for files needing frequent retrieval.
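A sketch of such a lifecycle policy, with a placeholder file system ID:

```bash
# Move files not accessed for 30 days to the lower-cost Standard-IA class
# and return them to Standard automatically on first access.
aws efs put-lifecycle-configuration \
  --file-system-id fs-0123456789abcdef0 \
  --lifecycle-policies 'TransitionToIA=AFTER_30_DAYS' 'TransitionToPrimaryStorageClass=AFTER_1_ACCESS'
```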
126
A company wants to migrate its SAP S/4HANA software from on premises to AWS in a few weeks. An SAP solutions architect plans to use AWS Launch Wizard for SAP to automate the SAP deployment on AWS. Which combination of steps must the SAP solutions architect take to use Launch Wizard to meet these requirements? (Choose two.) A. Download the SAP software files from the SAP Support Portal. Upload the SAP software files to Amazon S3. Provide the S3 bucket path as an input to Launch Wizard. B. Provide the SAP S-user ID and password as inputs to Launch Wizard to download the software automatically. C. Format the S3 file path syntax according to the Launch Wizard deployment recommendation. D. Use an AWS CloudFormation template for the automated deployment of the SAP landscape. E. Provision Amazon EC2 instances. Tag the instances to install SAP S/4HANA on them.
A, C The correct answers are A and C because the AWS Launch Wizard for SAP requires the SAP software to be pre-uploaded to an Amazon S3 bucket. This involves downloading the software from the SAP Support Portal (option A) and uploading it to S3. The S3 file path must then be formatted correctly according to Launch Wizard's specifications (option C). Option B is incorrect because the Launch Wizard doesn't automatically download the software using S-user credentials. Option D is incorrect because while CloudFormation can deploy SAP landscapes, the question specifically asks about using the Launch Wizard. Option E is incorrect because the Launch Wizard handles EC2 instance provisioning as part of its automated deployment process; manually provisioning them beforehand is unnecessary and contradicts the automation goal.
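Purely as an illustration of steps A and C, the upload might look like the following; the bucket name, prefix, file names, and folder layout are placeholders, and the exact path syntax Launch Wizard expects is defined in its documentation:

```bash
# Upload the SAP media, downloaded from the SAP Support Portal, to S3 under
# a layout Launch Wizard can reference (illustrative names only).
aws s3 cp ./SWPM20SP14.SAR   s3://my-sap-software/s4hana-2021/SWPM/
aws s3 cp ./SAPEXE_100.SAR   s3://my-sap-software/s4hana-2021/Kernel/
aws s3 sync ./S4HANA_EXPORT/ s3://my-sap-software/s4hana-2021/Exports/
```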
127
A company is running SAP on anyDB at a remote location with slow and inconsistent internet connectivity. They want to migrate their system to AWS and convert their database to SAP HANA. Due to the inconsistent internet connection, they haven't established connectivity between the remote location and their AWS VPC. How should the company perform this migration? A. Migrate by using SAP HANA system replication over the internet connection. Specify a public IP address on the target system. B. Migrate by using SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move. Use an AWS Snowball Edge Storage Optimized device to transfer the SAP export files to AWS. C. Migrate by using SAP HANA system replication with initialization through backup and restore. Use an AWS Snowball Edge Storage Optimized device to transfer the SAP export files to AWS. D. Migrate by using SAP Software Update Manager (SUM) DMO with System Move. Use Amazon Elastic File System (Amazon EFS) to transfer the SAP export files to AWS.
B The correct answer is B because it addresses the unreliable internet connectivity by using AWS Snowball Edge for physical data transfer. SUM DMO with System Move efficiently migrates and converts the database to SAP HANA in a single step, minimizing downtime. Option A is incorrect because relying on an unreliable internet connection for SAP HANA system replication is risky and likely to fail. Option C is incorrect because while using Snowball Edge is a good approach for the network limitations, using system replication with initialization through backup and restore is less efficient than using SUM DMO with System Move for database conversion. Option D is incorrect because Amazon EFS is a network file system and wouldn't solve the problem of unreliable internet connectivity between the remote location and AWS. Using Snowball Edge, a physical device, is necessary in this scenario.
128
A financial services company is implementing SAP core banking on AWS. The company must not allow any system information to traverse the public internet. The company needs to implement secure monitoring of its SAP ERP Central Component (SAP ECC) system to check for performance issues and faults in its application. The solution must maximize security and must be supported by SAP and AWS. How should the company integrate AWS metrics with its SAP system to meet these requirements? A. Set up SAP Solution Manager to call Amazon CloudWatch and Amazon EC2 endpoints with REST-based calls to populate SAPOSCOL details. Use SAP transaction ST06N to monitor CPU and memory utilization on each EC2 instance. B. Install the AWS Data Provider for SAP on the Amazon EC2 instances that host SAP. Allow access to the Amazon CloudWatch and EC2 endpoints through a NAT gateway. Create an IAM policy that allows the ec2:DescribeInstances action, the cloudwatch:GetMetricStatistics action, and the ec2:DescribeVolumes action for all EC2 resources. C. Install the AWS Data Provider for SAP on the Amazon EC2 instances that host SAP. Create VPC endpoints for Amazon CloudWatch and Amazon EC2. Allow access through these endpoints. Create an IAM policy that allows the ec2:DescribeInstances action, the cloudwatch:GetMetricStatistics action, and the ec2:DescribeVolumes action for all EC2 resources. D. Install the AWS Data Provider for SAP on the Amazon EC2 instances that host SAP. Create VPC endpoints for Amazon CloudWatch and Amazon EC2. Allow access through these endpoints. Create an IAM policy that allows all actions for all EC2 resources.
A

C

The correct answer is C because it uses VPC endpoints for Amazon CloudWatch and Amazon EC2. Traffic to these endpoints stays on the AWS private network and never traverses the public internet, fulfilling the key security requirement. The IAM policy grants only the permissions that the AWS Data Provider for SAP needs, following the principle of least privilege.

Option A is incorrect because REST calls to public endpoints would send system information over the public internet. Option B is incorrect because a NAT gateway still routes traffic to the public CloudWatch and EC2 endpoints over the internet, which also violates the requirement. Option D is incorrect because granting all actions for all EC2 resources violates the principle of least privilege; it is a far broader permission set than necessary.
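A minimal sketch of the setup that option C describes, assuming boto3 and hypothetical VPC, subnet, and security group IDs (the CloudWatch interface endpoint uses the service name "monitoring"):

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
iam = boto3.client("iam")

# Interface endpoints keep CloudWatch and EC2 API traffic inside the VPC.
# VPC, subnet, and security group IDs below are hypothetical placeholders.
for service in ("monitoring", "ec2"):  # "monitoring" is the CloudWatch endpoint
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=f"com.amazonaws.eu-west-1.{service}",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )

# Least-privilege policy for the AWS Data Provider for SAP: exactly the
# three actions named in the correct answer, nothing more.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:DescribeVolumes",
            "cloudwatch:GetMetricStatistics",
        ],
        "Resource": "*",
    }],
}
iam.create_policy(
    PolicyName="aws-data-provider-for-sap",
    PolicyDocument=json.dumps(policy),
)
```

With PrivateDnsEnabled set, the standard CloudWatch and EC2 API hostnames resolve to private IP addresses inside the VPC, so the Data Provider needs no configuration changes to use the endpoints.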
129
Q
A company is migrating its SAP workloads to AWS. The company's IT team installs a highly available SAP S/4HANA system that uses the SAP HANA system replication cluster package on SUSE Linux Enterprise Server. The system is deployed with cluster nodes in different Availability Zones within the same AWS Region. After the initial launch of the SAP application, the application is accessible. However, after failover, the IT team cannot access the application even though the system is up and running on the secondary node. After investigation, an SAP solutions architect discovers that the virtual IP address has not been used correctly. Which combination of steps should the SAP solutions architect take to resolve this problem? (Choose two.)

A. Use an overlay IP address as a secondary IP address with the primary node of the cluster.
B. Choose an overlay IP address within the VPC CIDR block that corresponds with the secondary node of the cluster.
C. Use an overlay IP address as a virtual IP address.
D. Choose an overlay IP address within the VPC CIDR block that corresponds with the primary node of the cluster.
E. Choose an overlay IP address outside the VPC CIDR block that hosts the application and the database.
A

C and E

The correct answers are C and E. Using an overlay IP address as the virtual IP address (C) lets the address float between the primary and secondary nodes: when a failover occurs, the cluster moves the overlay IP to the secondary node, so the application remains reachable at the same address. Choosing the overlay IP address from outside the VPC CIDR block (E) is what makes this possible. Because the address does not belong to any subnet in the VPC, the cluster can point it at either node by updating the VPC route table, and it cannot conflict with other resources in the VPC.

Option A is incorrect because a secondary IP address fixed to the primary node does not move on failover, so the access problem remains. Options B and D are incorrect because an overlay IP address must come from outside the VPC CIDR block; choosing it from a range that corresponds to one specific node ties the address to that node and defeats the high availability design.
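To illustrate the mechanism: failing over an overlay IP is ultimately a route table update that repoints a /32 route at the active node. In a real cluster this is handled by a resource agent such as aws-vpc-move-ip rather than hand-written code; the boto3 sketch below uses hypothetical route table, overlay IP, and instance ID values purely to show what that update looks like.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Hypothetical values: the overlay IP lies outside the VPC CIDR
# (e.g. the VPC is 10.0.0.0/16 and the overlay IP is 192.168.10.5).
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
OVERLAY_IP = "192.168.10.5/32"

def point_overlay_ip_at(instance_id: str) -> None:
    """Repoint the overlay IP route at the currently active cluster node."""
    ec2.replace_route(
        RouteTableId=ROUTE_TABLE_ID,
        DestinationCidrBlock=OVERLAY_IP,
        InstanceId=instance_id,
    )

# After failover, the cluster would effectively do:
point_overlay_ip_at("i-0fedcba9876543210")  # the secondary node
```

Because the route is a /32 outside the VPC CIDR, the same update works across Availability Zones, which is exactly why option E is required.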
130
Q
A company runs its SAP ERP 6.0 EHP 8 system on SAP HANA on AWS. The system is deployed on an r4.16xlarge Amazon EC2 instance with default tenancy. The company needs to migrate the SAP HANA database to an x2gd.16xlarge High Memory instance. After an operations engineer changes the instance type and starts the instance, the AWS Management Console shows a failed instance status check. What is the cause of this problem?

A. The operations engineer missed the network configuration step during the post-migration activities.
B. The operations engineer missed the Amazon CloudWatch configuration step during the post-migration activities.
C. The operations engineer did not install Elastic Network Adapter (ENA) drivers before changing the instance type.
D. The operations engineer did not create a new AMI from the original instance and did not launch a new instance with dedicated tenancy from the AMI.
A

C

The correct answer is C. The r4 instance family runs on the Xen hypervisor, while the target instance type is built on the AWS Nitro System, which requires Elastic Network Adapter (ENA) drivers for networking. If the ENA drivers are not installed and enabled before the instance type is changed, the instance cannot bring up its network interface after the change, and the instance status check fails. Options A and B are incorrect because missed post-migration configuration steps would not cause a failed instance status check at startup. Option D is incorrect because this migration path does not require creating a new AMI or using dedicated tenancy.
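As a pre-flight sketch, the instance's enaSupport attribute can be checked and, once the ENA driver has been installed in the guest OS (verified on Linux with, for example, modinfo ena) and the instance is stopped, enabled through the EC2 API. The instance ID and region below are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical placeholder

# Check whether ENA support is flagged on the instance.
attr = ec2.describe_instance_attribute(
    InstanceId=INSTANCE_ID, Attribute="enaSupport"
)
if not attr.get("EnaSupport", {}).get("Value", False):
    # Only set this after installing the ENA driver in the guest OS
    # and stopping the instance; otherwise the instance will fail its
    # status checks on the Nitro-based instance type.
    ec2.modify_instance_attribute(
        InstanceId=INSTANCE_ID, EnaSupport={"Value": True}
    )
```

Running a check like this before changing the instance type would have surfaced the missing ENA prerequisite that caused the failed status check in this scenario.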