test1 Flashcards

(75 cards)

1
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Contoso, Ltd. is a manufacturing company that has 15,000 employees.

The company uses SAP for sales and manufacturing.

Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle.

Existing Environment

Active Directory

The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com.

SAP Environment

The current SAP environment contains the following components:

  • SAP Solution Manager
  • SAP ERP Central Component (SAP ECC)
  • SAP Supply Chain Management (SAP SCM)
  • SAP application servers that run Windows Server 2008 R2
  • SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12)

Problem Statements

Contoso identifies the following issues in its current environment:

  • The SAP HANA environment lacks adequate resources.
  • The Windows servers are nearing the end of support.
  • The datacenters are at maximum capacity.

Requirements

Planned Changes

Contoso identifies the following planned changes:

  • Deploy Azure Virtual WAN.
  • Migrate the application servers to Windows Server 2016.
  • Deploy ExpressRoute connections to all of the offices and manufacturing facilities.
  • Deploy SAP landscapes to Azure for development, quality assurance, and production.

All resources for the production landscape will be in a resource group named SAP Production.

Business goals

Contoso identifies the following business goals:

  • Minimize costs whenever possible.
  • Migrate SAP to Azure without causing downtime.
  • Ensure that all SAP deployments to Azure are supported by SAP.
  • Ensure that all the production databases can withstand the failure of an Azure region.
  • Ensure that all the production application servers can restore daily backups from the last 21 days.

Technical Requirements

Contoso identifies the following technical requirements:

  • Inspect all web queries.
  • Deploy an SAP HANA cluster to two datacenters.
  • Minimize the bandwidth used for database synchronization.
  • Use Active Directory accounts to administer Azure resources.
  • Ensure that each production application server has four 1-TB data disks.
  • Ensure that an application server can be restored from a backup created during the last five days within 15 minutes.
  • Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP.

It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur.

Proposed Backup Policy

An Azure administrator proposes the backup policy shown in the following exhibit.
Policy name:
✅ SapPolicy

Backup schedule
Frequency: Daily
Time: 3:30 AM
Timezone: (UTC) Coordinated Universal Time
Instant Restore
Retain instant recovery snapshot(s) for 5 Day(s)
Retention range
✅ Retention of daily backup point

At: 3:30 AM
For: 14 Day(s)
✅ Retention of weekly backup point

On: Sunday
At: 3:30 AM
For: 8 Week(s)
✅ Retention of monthly backup point

Week Based - Day Based
On: First Sunday
At: 3:30 AM
For: 12 Month(s)
✅ Retention of yearly backup point

Week Based - Day Based
In: January
On: First Sunday
At: 3:30 AM
For: 7 Year(s)

An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers.
{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[parameters('vmname')]",
  "location": "EastUS",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('vmSize')]"
    },
    "osProfile": {
      "computerName": "[parameters('vmname')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2016-datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('vmname'), '-OS')]",
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "diskSizeGB": 128,
        "managedDisk": {
          "storageAccountType": "[parameters('storageAccountType')]"
        }
      },
      "copy": [
        {
          "name": "dataDisks",
          "count": "[parameters('diskCount')]",
          "input": {
            "caching": "None",
            "diskSizeGB": 1024,
            "lun": "[copyIndex('dataDisks')]"
          }
        }
      ]
    }
  }
}
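
For illustration, a resource fragment like this would typically be deployed into the case study's resource group with New-AzResourceGroupDeployment. The sketch below is an assumption-laden example, not part of the case study: the template file name and parameter values are hypothetical, and diskCount is set to 4 to match the four 1-TB data disk requirement.

# Hypothetical deployment of the application server template
# (file name and parameter values are assumptions for illustration only).
New-AzResourceGroupDeployment `
    -ResourceGroupName "SAP Production" `
    -TemplateFile ".\sap-app-server.json" `
    -vmname "sapapp01" `
    -vmSize "Standard_E16s_v3" `
    -adminUsername "sapadmin" `
    -adminPassword (Read-Host -Prompt "Admin password" -AsSecureString) `
    -storageAccountType "Premium_LRS" `
    -diskCount 4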

Topic 3, Misc. Questions

You plan to migrate an SAP HANA instance to Azure.

You need to gather CPU metrics from the last 24 hours from the instance.

Solution: You query views from SAP HANA Studio.

Does this meet the goal?

Yes
No

A

Correct Answer: Yes
Why It’s Correct:
SAP HANA Studio is a supported tool for monitoring and managing SAP HANA instances, including gathering performance metrics like CPU usage.
System views in SAP HANA (e.g., M_HOST_RESOURCE_UTILIZATION) typically store resource utilization data, including CPU metrics, for at least 24 hours in a standard configuration, which meets the goal of gathering this data for migration planning.
The AZ-120 exam tests knowledge of SAP HANA administration and Azure migration preparation. Using SAP HANA Studio to query performance metrics is a practical and SAP-supported approach for an on-premises environment, making it the closest correct answer.
No Azure-specific tools are required at this stage since the instance is still on-premises, and the question focuses on gathering metrics from the existing SAP HANA instance.

2
Q

You plan to migrate an SAP HANA instance to Azure.

You need to gather CPU metrics from the last 24 hours from the instance.

Solution: You run SAP HANA Quick Sizer.

Does this meet the goal?

Yes
No

A

Correct Answer: No
Why It’s Correct:
SAP HANA Quick Sizer is designed to calculate resource requirements (e.g., CPU, memory) for an SAP HANA deployment based on business workload inputs, not to gather or analyze historical performance data like CPU metrics from the last 24 hours.
The goal requires actual CPU usage data from the running SAP HANA instance, which Quick Sizer cannot provide. Tools like SAP HANA Studio, SAP HANA Cockpit, or OS-level monitoring are needed instead.
For the AZ-120 exam, understanding the distinction between sizing tools (like Quick Sizer) and monitoring tools (like SAP HANA Studio) is key. Since the solution does not align with the requirement, No is the correct answer.

3
Q

You have an on-premises SAP environment hosted on VMware vSphere that uses Microsoft SQL Server as the database platform. You plan to migrate the environment to Azure. The database platform will remain the same.

You need to gather information to size the target Azure environment for the migration.

What should you use?

Azure Monitor
the SAP HANA sizing report
the SAP EarlyWatch Alert report
Azure Advisor

A

Correct Answer: The SAP EarlyWatch Alert report
Why It’s Correct:
The SAP EarlyWatch Alert report provides detailed performance and resource utilization data (e.g., CPU, memory, database IOPS) from the existing on-premises SAP environment, including the Microsoft SQL Server database. This data is critical for sizing the target Azure environment, such as selecting appropriate Azure VM types (e.g., E-series for SQL Server) and storage configurations (e.g., Premium SSD).
Unlike Azure Monitor and Azure Advisor, which are Azure-specific and require the workload to be in Azure, EarlyWatch Alert works with the on-premises system. The SAP HANA sizing report is irrelevant because the database is SQL Server, not HANA.
For the AZ-120 exam, the EarlyWatch Alert report is a recognized tool for SAP migration planning, making it the closest and most correct answer for gathering sizing information.

4
Q

You plan to migrate an SAP HANA instance to Azure.

You need to gather CPU metrics from the last 24 hours from the instance.

Solution: You use Monitoring from the SAP HANA Cockpit.

Does this meet the goal?

Yes
No

A

Correct Answer: Yes
Why It’s Correct:
SAP HANA Cockpit’s Monitoring feature allows administrators to view historical CPU metrics, typically covering at least the last 24 hours in a standard configuration, meeting the goal of gathering this data for migration planning.
It is a native SAP HANA tool, well-suited for monitoring an on-premises instance, and does not require Azure-specific tools since the system has not yet been migrated.
For the AZ-120 exam, understanding how to use SAP HANA Cockpit for performance monitoring is relevant to migration preparation, as it provides the data needed to size Azure VMs (e.g., M-series for SAP HANA). Thus, Yes is the correct answer.

5
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
Oracle Real Application Clusters (RAC) can be used to provide high availability of SAP databases on Azure.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12).

A

Summary of Answers
Statement 1: No (Oracle RAC is not supported for SAP on Azure)
Statement 2: Yes (Oracle on Windows Server 2016 is supported)
Statement 3: Yes (Oracle on SLES 12 is supported)
Why These Are Correct
Statement 1 (No): Oracle RAC’s lack of support on Azure for SAP workloads is a key distinction in the AZ-120 exam. The preference for Data Guard aligns with Azure’s architecture and SAP’s certification requirements.
Statement 2 (Yes): Windows Server 2016 is a fully supported OS for Oracle databases in Azure SAP deployments, reflecting flexibility in OS choices for SAP customers.
Statement 3 (Yes): SLES 12 is widely used and certified for SAP-on-Oracle deployments, making it a standard option in Azure.

6
Q

A company named Contoso, Ltd. has users across the globe. Contoso is evaluating whether to migrate SAP to Azure.

The SAP environment runs on SUSE Linux Enterprise Server (SLES) servers and SAP HANA databases.

The Suite on HANA database is 4 TB.

You need to recommend a migration solution to migrate SAP application servers and the SAP HANA databases. The solution must minimize downtime.

Which migration solutions should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area

SAP application servers:
⏷ AzCopy
⏷ Azure Site Recovery
⏷ SAP HANA system replication
⏷ System Copy for SAP Systems

SAP HANA databases:
⏷ AzCopy
⏷ Azure Site Recovery
⏷ SAP HANA system replication
⏷ System Copy for SAP Systems

A

Correct Answer:
SAP application servers: Azure Site Recovery
SAP HANA databases: SAP HANA system replication

Recommended Solutions:
SAP Application Servers: Azure Site Recovery
Why Correct:
ASR supports migrating SLES-based VMs (SAP application servers) from on-premises to Azure with continuous replication and a planned failover, minimizing downtime to a brief cutover window.
It’s widely used for SAP application server migrations in Azure and aligns with AZ-120 exam objectives for VM-based migrations.
Unlike AzCopy or System Copy, ASR avoids lengthy offline data transfers, and SAP HANA system replication is irrelevant for application servers.
SAP HANA Databases: SAP HANA system replication
Why Correct:
SAP HANA system replication enables near-zero downtime migration by replicating the 4 TB database to an Azure-based HANA instance (e.g., on an M-series VM). Once replication is complete, a quick switchover minimizes downtime.
It’s SAP-certified, supports large databases, and is recommended by SAP and Microsoft for HANA migrations to Azure with minimal disruption.
ASR isn’t suitable for database consistency, AzCopy requires significant downtime, and System Copy involves prolonged outages, making HANA system replication the best choice.

7
Q

Your company has an on-premises SAP environment.

Recently, the company split into two companies named Litware, Inc. and Contoso, Ltd. Litware retained the SAP environment.

Litware plans to export data that is relevant only to Contoso. The export will be 1.5 TB.

Contoso builds a new SAP environment on Azure.

You need to recommend a solution for Litware to make the data available to Contoso in Azure.

The solution must meet the following requirements:

  • Minimize the impact on the network.
  • Minimize the administrative effort for Litware.

What should you include in the recommendation?

Azure Migrate
Azure Data Box
Azure Site Recovery
Azure Import/Export service

A

Recommended Solution: Azure Data Box
Correct Answer: Azure Data Box
Reasoning:
Minimizes Network Impact: The 1.5 TB of data is transferred offline via a physical device, avoiding the need for a high-bandwidth internet transfer that could strain Litware’s network.
Minimizes Administrative Effort: Microsoft ships the Data Box to Litware, who only needs to export the SAP data to the device using standard tools (e.g., rsync or SMB). Once shipped back, Azure uploads the data to a storage account, requiring no further management from Litware. This is simpler than the Import/Export Service, which involves managing disks.
SAP Context: For SAP environments, exporting data (e.g., database dumps or file extracts) to a device like Data Box is a practical approach, especially for a one-time transfer of 1.5 TB. Contoso can then access the data from Azure storage and import it into their SAP system.

8
Q

You are building an SAP environment by using Azure Resource Manager templates. The SAP environment will use Linux virtual machines.

You need to correlate the LUN of the data disks in the template to the volume of the virtual machines.

Which command should you run?

ls /dev/disk/azure/root
ls /dev/disk/azure/scsi1
tree /dev/disk/azure/root
tree /dev/disk/azure/resource

A

Correct Answer: ls /dev/disk/azure/scsi1
Why It’s Correct:
The command ls /dev/disk/azure/scsi1 lists the symbolic links that Azure creates for data disks (e.g., lun0 → /dev/sdc), allowing you to correlate the LUNs specified in the ARM template (e.g., lun: 0) to the device names on the Linux VM. This is the standard method on Azure Linux VMs for disk identification.
The other options either target the wrong path (root), use an invalid path (resource), or rely on a non-standard tool (tree) that doesn’t align with typical SAP-on-Azure administration.
For the AZ-120 exam, this aligns with the need to manage disk configurations for SAP workloads, making it the correct answer.

9
Q

This question requires that you evaluate the underlined text to determine if it is correct.

You have an SAP environment on Azure that uses Microsoft SQL server as the RDBMS.

You plan to migrate to an SAP HANA database.

To calculate the amount of memory and disk space required for the database, you can use SAP Quick Sizer.

Instructions: Review the bold text. If it makes the statement correct, select “No change is needed.” If the statement is incorrect, select the answer choice that makes the statement correct.

No change is needed.
Azure Migrate
/SDF/HDB_SIZING
SQL Server Management Studio (SSMS)

A

Final Answer
Selection: No change is needed
Reason: SAP Quick Sizer is a valid and correct tool for calculating memory and disk space requirements for an SAP HANA database in this Azure migration scenario, aligning with AZ-120 exam objectives.

Correct Answer: No change is needed
Why Correct: The bolded text, SAP Quick Sizer, makes the statement correct. SAP Quick Sizer is a widely recognized SAP tool for sizing SAP HANA systems, including during migrations, and is appropriate for estimating memory and disk space based on workload inputs. The AZ-120 exam often emphasizes SAP planning tools like Quick Sizer for infrastructure sizing on Azure, and the statement aligns with this context. While /SDF/HDB_SIZING is a more precise tool for analyzing an existing database during a migration, the question’s phrasing (“you can use”) doesn’t mandate the most specific tool—Quick Sizer is sufficient and correct.
Why Others Are Incorrect:
Azure Migrate: Not relevant for database sizing or SAP HANA.
/SDF/HDB_SIZING: More specific but not required by the statement’s broad wording.
SSMS: Lacks SAP HANA sizing capabilities.

10
Q

You are deploying an SAP production landscape to Azure.

Your company’s chief information security officer (CISO) requires that the SAP deployment complies with ISO 27001.

You need to generate a compliance report for ISO 27001.

What should you use?

Azure Security Center
Azure Log Analytics
Azure Active Directory (Azure AD)
Azure Monitor

A

Correct Answer: Azure Security Center

Reasoning:

ISO 27001 Compliance: ISO 27001 is an international standard for information security management systems (ISMS). It requires organizations to assess and manage risks, implement controls, and demonstrate compliance through auditable reports. Azure Security Center (Microsoft Defender for Cloud) provides a Regulatory Compliance Dashboard that specifically supports tracking and reporting compliance with standards like ISO 27001.
Functionality: Azure Security Center offers built-in tools to assess the security posture of your Azure resources, map them to ISO 27001 controls, and generate compliance reports. It includes features like continuous monitoring, security recommendations, and the ability to export compliance data, which are essential for meeting the CISO’s requirement to generate a report.
SAP on Azure Context: For an SAP deployment, which involves virtual machines, databases (e.g., SAP HANA), and networking components, Azure Security Center can evaluate the security and compliance of these resources against ISO 27001 standards. This is critical for a production landscape where security and compliance are paramount.
Exam Relevance (AZ-120): The AZ-120 exam (“Planning and Administering Microsoft Azure for SAP Workloads”) focuses on managing SAP workloads on Azure, including security and compliance. Azure Security Center is a key tool covered in this context for ensuring compliance with standards like ISO 27001.
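
If you prefer scripting over the portal, the Az.Security PowerShell module exposes the same regulatory compliance data that the dashboard shows. A minimal sketch, assuming Microsoft Defender for Cloud is enabled on the subscription and that the standard name matches what your dashboard displays:

# List the compliance standards tracked for the current subscription.
Get-AzRegulatoryComplianceStandard |
    Select-Object Name, State, PassedControls, FailedControls

# Drill into the individual ISO 27001 controls and their states
# ("ISO 27001" is assumed to be the standard name shown in the dashboard).
Get-AzRegulatoryComplianceControl -StandardName "ISO 27001" |
    Select-Object Name, State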

11
Q

A customer’s enterprise SAP environment plans to migrate to Azure. The environment uses servers that run Windows Server 2016 and Microsoft SQL Server.

The environment is critical and requires a comprehensive business continuity and disaster recovery (BCDR) strategy that minimizes the recovery point objective (RPO) and the recovery time objective (RTO).

The customer wants a resilient environment that has a secondary site that is at least 250 kilometers away. You need to recommend a solution for the customer.

Which two solutions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

an internal load balancer to route Internet traffic
warm standby virtual machines in Azure Availability Zones
warm standby virtual machines in paired regions
warm standby virtual machines in an Azure Availability Set that uses geo-redundant storage (GRS)
Azure Traffic Manager to route incoming traffic

A

Final Answer:
Warm standby virtual machines in paired regions: Provides the DR foundation with a secondary site 250 km away, minimizing RPO and RTO through pre-deployed VMs.
Azure Traffic Manager to route incoming traffic: Ensures business continuity by seamlessly redirecting traffic to the active site (primary or secondary), complementing the DR setup.

Azure Traffic Manager to route incoming traffic
Why Correct?
Traffic Routing: Azure Traffic Manager is a DNS-based traffic routing service that can direct user traffic to the primary site under normal conditions and failover to the secondary site (in the paired region) during a disaster. This ensures seamless redirection, minimizing RTO.
Global Resilience: It supports SAP environments by providing a single endpoint for clients, routing them to the active site (primary or secondary), which is critical for maintaining business continuity.
Complementary to Warm Standby: When paired with warm standby VMs in a secondary region, Traffic Manager ensures that once failover occurs, users are automatically directed to the secondary site without manual intervention.
SAP Relevance: For SAP applications, which often have front-end components (e.g., SAP GUI or web interfaces), Traffic Manager ensures availability of these services across regions.
Why It Fits: This addresses the business continuity aspect by ensuring users can access the SAP environment even after a failover, complementing the DR setup in paired regions.

Warm standby virtual machine in an Azure Availability Set that uses geo-redundant storage (GRS)
Why Incorrect:
Availability Set Limitation: An Availability Set ensures VMs are spread across fault domains and update domains within a single region, not across regions. It doesn’t support the 250 km separation.
GRS Limitation: Geo-redundant storage (GRS) replicates data to a secondary region (meeting the distance requirement), but it’s a storage-level solution, not a VM-level solution. Warm standby VMs require compute readiness, not just storage replication, and GRS alone doesn’t ensure VM failover.
Ambiguity: The phrasing suggests a single-region solution with GRS, which doesn’t fully align with a comprehensive DR strategy for SAP VMs.
Context: This might contribute to data resilience, but it doesn’t provide a full DR solution with warm VMs in a secondary site.
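
As a sketch of the traffic-routing piece, a Priority-routed Traffic Manager profile keeps all traffic on the primary endpoint and fails over automatically when its health probe fails. The names, DNS label, and probe path below are assumptions, and $primaryIp/$secondaryIp stand for public IP resources retrieved earlier:

# Failover profile: Priority routing sends traffic to the healthy endpoint
# with the lowest priority number.
New-AzTrafficManagerProfile -Name "sap-entry" -ResourceGroupName "SAP-DR" `
    -TrafficRoutingMethod Priority -RelativeDnsName "contoso-sap" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/sap/public/ping"

# Primary site (priority 1) and warm standby in the paired region (priority 2).
New-AzTrafficManagerEndpoint -Name "primary" -ProfileName "sap-entry" `
    -ResourceGroupName "SAP-DR" -Type AzureEndpoints `
    -TargetResourceId $primaryIp.Id -EndpointStatus Enabled -Priority 1
New-AzTrafficManagerEndpoint -Name "secondary" -ProfileName "sap-entry" `
    -ResourceGroupName "SAP-DR" -Type AzureEndpoints `
    -TargetResourceId $secondaryIp.Id -EndpointStatus Enabled -Priority 2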

12
Q

HOTSPOT

You have SAP ERP on Azure.

For SAP high availability, you plan to deploy ASCS/ERS instances across Azure Availability Zones and to use failover clusters.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
To create a failover solution, you can use an Azure Basic Load Balancer for Azure virtual machines deployed across the Azure Availability Zones. ⭘ ⭘
You can deploy Azure Availability Sets within an Azure Availability Zone. ⭘ ⭘
The solution must use Azure managed disks. ⭘ ⭘

A

Final Answers:
To create a failover solution, you can use an Azure Basic Load Balancer for Azure virtual machines deployed across the Azure Availability Zones.
No (Requires Azure Standard Load Balancer for cross-zone failover support.)
You can deploy Azure Availability Sets within an Azure Availability Zone.
No (Availability Sets and Zones are distinct HA mechanisms; this scenario uses Zones.)
The solution must use Azure managed disks.
Yes (Managed disks are required for zonal deployments and SAP HA.)

Statement 1: “To create a failover solution, you can use an Azure Basic Load Balancer for Azure virtual machines deployed across the Azure Availability Zones.”
Answer: No
Reasoning: For failover cluster solutions involving SAP ASCS/ERS instances across Azure Availability Zones, you cannot use an Azure Basic Load Balancer. The Basic Load Balancer does not support cross-zone load balancing or the health probe functionality required for failover clusters (e.g., Windows Server Failover Clustering or Linux Pacemaker). Instead, the Azure Standard Load Balancer is required because it supports Availability Zones, provides health probes to detect active cluster nodes, and ensures proper routing in a high-availability (HA) setup. This is a key requirement for SAP HA deployments on Azure, as the Standard SKU is explicitly recommended for such scenarios.
Statement 2: “You can deploy Azure Availability Sets within an Azure Availability Zone.”
Answer: No
Reasoning: Availability Sets and Availability Zones are distinct high-availability mechanisms. A VM is deployed either into an Availability Set or into an Availability Zone, not both, so an Availability Set cannot be placed within an Availability Zone.

Statement 3: “The solution must use Azure managed disks.”
Answer: Yes
Reasoning: When deploying VMs across Azure Availability Zones for SAP workloads, Azure managed disks are mandatory. Unmanaged disks (e.g., those using storage accounts) are not supported for zonal deployments because they lack the flexibility and resilience required for HA configurations across zones. Managed disks provide features like zone-redundant storage (ZRS) or local redundancy within a zone, which align with SAP HA requirements. For failover clusters (e.g., using Azure shared disks for Windows or NFS for Linux), managed disks ensure consistent performance and availability, making them a requirement for this solution.
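
For reference, a minimal sketch of the Standard internal load balancer for the ASCS cluster. All names, addresses, and the probe port are assumptions ($subnet stands for a subnet object retrieved earlier; SAP HA guides derive the probe port from the instance number, e.g., 620<nr>):

# Health probe answered only by the node that currently runs ASCS.
$probe = New-AzLoadBalancerProbeConfig -Name "ascs-probe" -Protocol Tcp `
    -Port 62000 -IntervalInSeconds 5 -ProbeCount 2

# Internal frontend with a static IP in the SAP subnet.
$fe = New-AzLoadBalancerFrontendIpConfig -Name "ascs-fe" `
    -SubnetId $subnet.Id -PrivateIpAddress "10.0.1.10"

$pool = New-AzLoadBalancerBackendAddressPoolConfig -Name "ascs-pool"

# HA-ports rule (Standard SKU only): forwards all ports to the active node,
# with floating IP so the cluster's virtual IP keeps working after failover.
$rule = New-AzLoadBalancerRuleConfig -Name "ascs-haports" `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe `
    -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP

New-AzLoadBalancer -Name "lb-ascs" -ResourceGroupName "SAP-Prod" `
    -Location "westeurope" -Sku Standard -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule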

13
Q

You deploy an SAP environment on Azure.

Your company has a Service Level Agreement (SLA) of 99.99% for SAP.

You implement Azure Availability Zones that have the following components:

  • Redundant SAP application servers
  • ASCS/ERS instances that use a failover cluster
  • Database high availability that has a primary instance and a secondary instance

You need to validate the load distribution to the application servers.

What should you use?

SAP Solution Manager
Azure Monitor
SAPControl
SAP Web Dispatcher

A

Final Answer:
SAP Web Dispatcher

Why SAP Web Dispatcher is the closest to the correct answer:
The question asks for a tool to “validate the load distribution to the application servers.” In an SAP environment deployed on Azure with redundant application servers, the SAP Web Dispatcher is the component responsible for distributing the load. By examining its configuration and logs, you can validate how traffic is being allocated across the servers. This aligns with the requirements of the Azure AZ-120 exam, which tests knowledge of SAP workload management on Azure, including high-availability configurations and load balancing.

14
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
You can use NIPING to examine network latency between an SAP HANA database server and an SAP application server hosted on Azure. ⭘ ⭘
You can use LoadRunner to generate traffic between a client and an SAP application server hosted on Azure. ⭘ ⭘
You can use the SAP HANA HW Configuration Check Tool (HWCCT) to examine network latency between an SAP HANA database server and an SAP application server hosted on Azure. ⭘ ⭘

A

Final Answers:
You can use NIPING to examine network latency between an SAP HANA database server and an SAP application server hosted on Azure: Yes
You can use LoadRunner to generate traffic between a client and an SAP application server hosted on Azure: Yes
You can use the SAP HANA HW Configuration Check Tool (HWCCT) to examine network latency between an SAP HANA database server and an SAP application server hosted on Azure: No
Summary of Reasoning:
NIPING (Yes): A dedicated SAP tool for measuring network latency between SAP components, applicable in Azure for HANA and application server communication.
LoadRunner (Yes): A performance testing tool that can simulate client traffic to an SAP application server, widely used for SAP workload testing on Azure.
HWCCT (No): Focused on validating SAP HANA hardware performance, not specifically designed for measuring latency between HANA and application servers.

15
Q

HOTSPOT

You have an SAP environment on Azure that contains a single-tenant SAP HANA server at instance 03.

You need to monitor the network throughput from an SAP application server to the SAP HANA server.

How should you complete the script? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
$HANA = <dropdown> -Name HANAP01-NIC -ResourceGroupName Production

Dropdown options:
Get-AzNetworkInterface
Get-AzNetworkUsage
Get-AzNetworkWatcher
Get-AzVM

$APP = Get-AzVM -Name AppP01 -ResourceGroupName Production

New-AzNetworkWatcherConnectionMonitor -NetworkWatcher (Get-AzNetworkWatcher)
-Name HANA -DestinationAddress (($HANA).IpConfigurations.PrivateIPAddress)
-DestinationPort <dropdown> -SourceResourceId $APP.Id

Dropdown options:
1433
1434
30115
30315

A

Final Answers:
First dropdown ($HANA = <dropdown> -Name HANAP01-NIC -ResourceGroupName Production):
Answer: Get-AzNetworkInterface
Why Correct: This cmdlet retrieves the network interface object, providing access to the private IP address required for the connection monitor.
Second dropdown (-DestinationPort <dropdown>):
Answer: 30315
Why Correct: This is the SQL port for SAP HANA instance 03, matching the scenario’s requirement for monitoring HANA communication.
Why These Are Correct for AZ-120:
Network Monitoring: The AZ-120 exam tests knowledge of Azure tools like Network Watcher for monitoring SAP workloads. New-AzNetworkWatcherConnectionMonitor is the correct cmdlet for monitoring network performance (e.g., throughput) between VMs, requiring the source VM ID, destination IP, and port.
SAP HANA Specifics: Understanding HANA’s port numbering (e.g., 3<instance number>15) is critical for SAP-on-Azure deployments, a key focus of the exam.
Azure Resources: Using Get-AzNetworkInterface aligns with Azure’s resource model, where NICs are separate objects that hold IP configurations, a concept tested in AZ-120.
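
Putting the selected options together, the completed script would read as follows (resource names are taken from the question itself):

# NIC of the HANA server; its private IP is the monitoring destination.
$HANA = Get-AzNetworkInterface -Name HANAP01-NIC -ResourceGroupName Production

# Source VM: the SAP application server.
$APP = Get-AzVM -Name AppP01 -ResourceGroupName Production

# Monitor connectivity to the HANA SQL port for instance 03 (3<03>15 = 30315).
New-AzNetworkWatcherConnectionMonitor -NetworkWatcher (Get-AzNetworkWatcher) `
    -Name HANA `
    -DestinationAddress (($HANA).IpConfigurations.PrivateIPAddress) `
    -DestinationPort 30315 `
    -SourceResourceId $APP.Id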

16
Q

HOTSPOT

You are deploying an SAP environment across Azure Availability Zones.

The environment has the following components:

✑ ASCS/ERS instances that use a failover cluster

✑ SAP application servers across the Azure Availability Zones

✑ Database high availability by using a native database solution

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
Network latency is a limiting factor when deploying DBMS instances that use synchronous replication across the Azure Availability Zones. ⭘ ⭘
The performance of SAP systems can be validated by using ABAPMeter. ⭘ ⭘
To help identify the best Azure Availability Zones for deploying the SAP components, you can use NIPING to verify network latency between the zones. ⭘ ⭘

A

Final Answers:
Network latency is a limiting factor when deploying DBMS instances that use synchronous replication across the Azure Availability Zones: Yes
The performance of SAP systems can be validated by using ABAPMeter: Yes
To help identify the best Azure Availability Zones for deploying the SAP components, you can use NIPING to verify network latency between the zones: Yes

Statement 1:
“Network latency is a limiting factor when deploying DBMS instances that use synchronous replication across the Azure Availability Zones.”

Answer: Yes
Why it’s correct:
Synchronous replication in a database management system (DBMS) requires that data written to the primary instance is replicated to the secondary instance before the transaction is considered complete. When deploying across Azure Availability Zones (which are physically separated data centers within the same region), network latency between zones can impact the performance of synchronous replication. Azure documentation and SAP best practices emphasize that low latency (typically less than 2-5 milliseconds) is critical for synchronous replication to avoid performance degradation. Since Availability Zones are geographically distinct, network latency is indeed a limiting factor that must be considered when designing high-availability DBMS solutions for SAP. This aligns with AZ-120 exam topics on planning SAP workloads with high availability.
Statement 2:
“The performance of SAP systems can be validated by using ABAPMeter.”

Answer: Yes
Why it’s correct:
ABAPMeter (transaction code STAD or related tools in SAP) is a performance analysis tool within the SAP ABAP environment. It allows administrators to measure and validate the performance of SAP systems by analyzing metrics such as response times, database calls, and CPU usage for ABAP-based transactions. In the context of an SAP environment on Azure, ABAPMeter can be used to assess the performance of SAP application servers and ensure they meet the required Service Level Agreements (SLAs). This is a valid tool for performance validation in SAP systems, making “Yes” the correct choice for this statement. This aligns with AZ-120’s focus on monitoring and optimizing SAP workloads.
Statement 3:
“To help identify the best Azure Availability Zones for deploying the SAP components, you can use NIPING to verify network latency between the zones.”

Answer: Yes
Why it’s correct:
NIPING is a network testing tool provided by SAP to measure network latency and bandwidth between systems. When planning an SAP deployment across Azure Availability Zones, verifying network latency is critical, especially for components like ASCS/ERS (failover clustering) and DBMS instances (synchronous replication), which are sensitive to delays. By running NIPING between virtual machines in different Availability Zones, you can collect latency data to determine if the zones meet SAP’s stringent network requirements (e.g., low latency for high-availability setups). This is a recommended practice in SAP-on-Azure deployments and is relevant to the AZ-120 exam’s emphasis on network planning and optimization.

17
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
When configuring an Azure virtual machine, the Azure Enhanced Monitoring features are required to monitor SAP application performance. ⭘ ⭘
To successfully start an Azure virtual machine that contains SAP, you must have Azure Enhanced Monitoring installed. ⭘ ⭘
If you deploy SAP by using the Azure Resource Manager templates for SAP, Azure Enhanced Monitoring is installed automatically. ⭘ ⭘

A

Final Answers:
When configuring an Azure virtual machine, the Azure Enhanced Monitoring features are required to monitor SAP application performance: No
To successfully start an Azure virtual machine that contains SAP, you must have Azure Enhanced Monitoring installed: No
If you deploy SAP by using the Azure Resource Manager templates for SAP, Azure Enhanced Monitoring is installed automatically: Yes
Summary of Reasoning:
Monitoring Requirement (No): Azure Enhanced Monitoring enhances SAP performance monitoring but isn’t strictly required, as basic monitoring or SAP tools can still function without it.
VM Startup (No): Starting a VM with SAP doesn’t depend on the monitoring extension; it’s an operational add-on, not a startup necessity.
ARM Templates (Yes): Official Azure ARM templates for SAP automate the installation of Azure Enhanced Monitoring, aligning with best practices for SAP deployments.

18
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
The Azure Enhanced Monitoring Extension for SAP stores performance data in an Azure Storage account. ⭘ ⭘
You can enable the Azure Enhanced Monitoring Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet. ⭘ ⭘
You can enable the Azure Enhanced Monitoring Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet. ⭘ ⭘

A

Final Answers:
The Azure Enhanced Monitoring Extension for SAP stores performance data in an Azure Storage account: No
You can enable the Azure Enhanced Monitoring Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet: Yes
You can enable the Azure Enhanced Monitoring Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet: Yes
Summary of Reasoning:
Storage Account (No): The extension collects and provides real-time data to SAP systems but doesn’t store it in an Azure Storage account; storage is handled by other services like Azure Monitor.
SLES 12 (Yes): The Set-AzVMAEMExtension cmdlet enables the extension on SLES 12, a supported OS for SAP, aligning with Azure’s monitoring capabilities for Linux-based SAP workloads.
Windows Server 2016 (Yes): The same cmdlet enables the extension on Windows Server 2016, another supported OS, ensuring monitoring integration for Windows-based SAP deployments.
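
A minimal sketch of the cmdlet for both operating systems (resource group and VM names are assumptions):

# Enable the Azure Enhanced Monitoring Extension for SAP on an SLES 12 VM.
Set-AzVMAEMExtension -ResourceGroupName "SAP-Prod" -VMName "hanadb01" -OSType Linux

# The same cmdlet on a Windows Server 2016 application server.
Set-AzVMAEMExtension -ResourceGroupName "SAP-Prod" -VMName "sapapp01" -OSType Windows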

19
Q

HOTSPOT

You are integrating SAP HANA and Azure Active Directory (Azure AD).

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
SAP HANA supports SAML authentication for single-sign on (SSO). ⭘ ⭘
SAP HANA supports OAuth2 authentication for single-sign on (SSO). ⭘ ⭘
You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA. ⭘ ⭘

A

Final Answer
SAP HANA supports SAML authentication for single-sign on (SSO): Yes
SAP HANA supports OAuth2 authentication for single-sign on (SSO): No
You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA: No

  1. SAP HANA supports SAML authentication for single-sign on (SSO)
    Answer: Yes
    Why it’s correct:
    SAP HANA supports SAML (Security Assertion Markup Language) for single sign-on (SSO). SAML is a standard protocol for exchanging authentication and authorization data between an identity provider (IdP) like Azure AD and a service provider (SP) like SAP HANA.
    SAP HANA Context: Since SAP HANA 1.0 SPS 10 (and later versions), it has built-in support for SAML-based SSO. This allows users to authenticate to SAP HANA using credentials managed by an external IdP, such as Azure AD, without needing to re-enter credentials.
    Azure AD Integration: Azure AD supports SAML 2.0, and you can configure it as the IdP for SAP HANA by setting up an enterprise application in Azure AD, exchanging metadata, and configuring SAP HANA’s SAML settings (e.g., via HANA Cockpit or XS Admin).
    AZ-120 Relevance: SAML SSO is a common integration method for SAP HANA on Azure, making this statement true.
  2. SAP HANA supports OAuth2 authentication for single-sign on (SSO)
    Answer: No
    Why it’s incorrect:
    SAP HANA does not natively support OAuth2 for SSO in the traditional sense of user authentication to the database. OAuth2 is an authorization framework commonly used for API access and token-based authentication, not for direct user SSO to the HANA database.
    SAP HANA Context: While SAP HANA supports OAuth for specific scenarios (e.g., securing XS applications or REST APIs in SAP HANA Extended Application Services), this is for application-level access, not for SSO to the HANA database itself (e.g., via HANA Studio or JDBC/ODBC clients). SSO to SAP HANA relies on SAML, Kerberos, or X.509 certificates, not OAuth2.
    Azure AD Comparison: Azure AD supports OAuth2 for many applications, but SAP HANA’s SSO integration with Azure AD uses SAML, not OAuth2.
    AZ-120 Relevance: OAuth2 isn’t a standard SSO method for SAP HANA, making this statement false.
  3. You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA
    Answer: No
    Why it’s incorrect:
    Azure RBAC (Role-Based Access Control) is an Azure-specific authorization system that manages permissions to Azure resources (e.g., VMs, storage accounts) at the management plane. It does not directly control authentication or access to applications like SAP HANA running on those resources.
    SAP HANA Context: Signing in to SAP HANA requires database-level authentication (e.g., via SAML, Kerberos, or username/password), managed within HANA’s own user management system. Azure RBAC can grant users permissions to manage the Azure VM hosting SAP HANA (e.g., start/stop), but it doesn’t provide sign-in capabilities to the HANA database itself.
    Azure AD vs. RBAC: Azure AD handles identity and authentication (e.g., SSO to SAP HANA via SAML), while RBAC handles resource permissions. These are distinct mechanisms, and RBAC doesn’t integrate with SAP HANA’s authentication.
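
To make the distinction concrete, here is a hedged sketch of what Azure RBAC actually grants (the user and resource group names are hypothetical). The assignment controls the Azure management plane only; it creates no HANA database user and enables no HANA sign-in:

# Lets the user manage (e.g., start/stop) the VMs that host SAP HANA.
# It does NOT allow the user to sign in to the HANA database itself.
New-AzRoleAssignment -SignInName "hana.admin@contoso.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "SAP-Prod"
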
20
Q

HOTSPOT

Your on-premises network contains SAP and non-SAP applications. ABAP-based SAP systems are integrated with LDAP and use user name/password-based authentication for logon. You plan to migrate the SAP applications to Azure.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
Azure Active Directory (Azure AD) pass-through authentication enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password. ⭘ ⭘
Azure Active Directory (Azure AD) password hash synchronization enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password. ⭘ ⭘
Active Directory Federation Services (AD FS) supports authentication between on-premises Active Directory and Azure systems that use different domains. ⭘ ⭘

A

Final Answers:
Azure Active Directory (Azure AD) pass-through authentication enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password: No
Azure Active Directory (Azure AD) password hash synchronization enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password: No
Active Directory Federation Services (AD FS) supports authentication between on-premises Active Directory and Azure systems that use different domains: Yes
Summary:
Statement 1: No – ABAP-based SAP systems don’t natively support Azure AD PTA for logon without additional integration.
Statement 2: No – PHS also doesn’t directly enable ABAP logon without further SSO configuration.
Statement 3: Yes – AD FS supports federation across different domains, a standard capability in hybrid Azure setups.

21
Q

You deploy an SAP environment on Azure.

You need to monitor the performance of the SAP NetWeaver environment by using the Azure Enhanced Monitoring Extension for SAP.

What should you do first?

From Azure CLI, install the Linux Diagnostic Extension.
From the Azure portal, enable the Azure Network Watcher Agent.
From the Azure portal, enable the Custom Script Extension.
From Azure CLI, run the az vm aem set command.

A

Final Answer:
From Azure CLI, run the az vm aem set command.

Why Correct?
Direct Action: The az vm aem set command specifically installs and configures the Azure Enhanced Monitoring Extension for SAP, which is the tool required to monitor SAP NetWeaver performance. It’s a single, actionable step that meets the question’s objective.
First Step: In the context of enabling the extension, running this command is the initial action to deploy it to the VM. While prerequisites like deploying the VM and installing SAP software are assumed (since the environment is already deployed), this is the first step specific to enabling the monitoring extension.
OS Agnostic: The command works for both Windows and Linux VMs, aligning with SAP NetWeaver’s flexibility (e.g., Windows Server or SLES/RHEL), and the question doesn’t specify an OS, making it broadly applicable.
AZ-120 Alignment: The exam tests knowledge of SAP-specific tools and extensions on Azure. The az vm aem set command is a documented method for enabling the Azure Enhanced Monitoring Extension, as seen in Azure’s SAP workload documentation.

22
Q

HOTSPOT

You have an SAP development landscape on Azure.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
You can use SAP Landscape Management (LaMa) to automate stopping, starting, and deallocating SAP virtual machines. ⭘ ⭘
You can use SAP Solution Manager to automate stopping, starting, and deallocating SAP virtual machines. ⭘ ⭘
You can use SAP HANA Cockpit to automate stopping, starting, and deallocating SAP virtual machines. ⭘ ⭘

A

Correct Answers:
You can use SAP Landscape Management (LaMa) to automate stopping, starting, and deallocating SAP virtual machines: Yes
You can use SAP Solution Manager to automate stopping, starting, and deallocating SAP virtual machines: No
You can use SAP HANA Cockpit to automate stopping, starting, and deallocating SAP virtual machines: No

Why These Are Correct:
SAP LaMa (Yes):
LaMa is purpose-built for SAP system orchestration and, with the Azure Adapter, can automate VM operations like starting, stopping, and deallocating in Azure. This aligns with AZ-120’s focus on managing SAP landscapes in the cloud.
SAP Solution Manager (No):
SolMan is a monitoring and management tool, not an automation platform for Azure VM operations. It lacks native VM control capabilities, making it unsuitable for this task in the context of the exam.
SAP HANA Cockpit (No):
HANA Cockpit is a database-specific tool and cannot manage Azure VMs. It’s limited to HANA database operations, not infrastructure automation, as required by the AZ-120 objectives.

23
Q

You migrate an SAP environment to Azure.

You need to inspect all the outbound traffic from the SAP application servers to the Internet.

Which two Azure resources should you use? Each correct answer presents part of the solution.

Network Performance Monitor
Azure Firewall
Azure Traffic Manager
Azure Load Balancer NAT rules
Azure user-defined routes
a web application firewall (WAF) for Azure Application Gateway

A

Final Answer:
Azure Firewall
Azure user-defined routes

Why These Are Correct:
Azure Firewall:
Purpose: Azure Firewall provides deep packet inspection, filtering, and logging for outbound traffic. It can analyze traffic from SAP application servers to the Internet, enforce security policies (e.g., allow only specific SAP-related outbound connections), and log details for auditing.
AZ-120 Relevance: For SAP on Azure, Azure Firewall is a recommended security solution to monitor and secure traffic, aligning with the exam’s focus on network security and compliance.
Fit: Directly meets the goal of inspecting outbound traffic.
Azure user-defined routes (UDR):
Purpose: UDRs ensure all outbound traffic from the SAP application servers’ subnet is routed through Azure Firewall (e.g., by setting the next hop to the firewall’s IP). Without UDRs, traffic might bypass the firewall via Azure’s default Internet routing, preventing full inspection.
AZ-120 Relevance: The exam emphasizes network design for SAP, including forcing traffic through security appliances. UDRs are a standard practice in Azure hub-and-spoke architectures for SAP deployments.
Fit: Complements Azure Firewall by directing traffic for inspection, forming a complete solution.
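
A minimal sketch of the routing side (names, address prefixes, and the firewall’s private IP are assumptions):

# Default route that forces all Internet-bound traffic from the SAP
# application subnet through the Azure Firewall's private IP.
$route = New-AzRouteConfig -Name "to-firewall" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.100.4"

$rt = New-AzRouteTable -Name "rt-sap-app" -ResourceGroupName "SAP-Prod" `
    -Location "westeurope" -Route $route

# Associate the route table with the SAP application server subnet.
$vnet = Get-AzVirtualNetwork -Name "vnet-sap" -ResourceGroupName "SAP-Prod"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "snet-sapapp" `
    -AddressPrefix "10.0.1.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork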

24
Q

DRAG DROP

You have an on-premises SAP environment that runs on SUSE Linux Enterprise Server (SLES) servers and Oracle. The version of the SAP ERP system is 6.06 and the version of the portal is SAP NetWeaver 7.3.

You need to recommend a migration strategy to migrate the SAP ERP system and the portal to Azure.

The solution must be hosted on SAP HANA.

What should you recommend? To answer, drag the appropriate tools to the correct components. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Tools

SAP heterogeneous system copy
Software Update Manager (SUM) Database Migration Option (DMO) with System Update
Software Update Manager (SUM) Database Migration Option (DMO) with System Move
Software Update Manager (SUM) Database Migration Option (DMO) without System Update
Answer Area

To migrate the SAP ERP system: [________]
To migrate the portal: [________]

A

Final Answers:
Answer Area:

To migrate the SAP ERP system: Software Update Manager (SUM) Database Migration Option (DMO) with System Move
To migrate the portal: SAP heterogeneous system copy

Why These Are Correct:
SAP ERP (ECC 6.06) - SUM DMO with System Move:
Reason: ECC is an ABAP system, and SUM DMO is optimized for ABAP migrations to HANA. “System Move” supports the platform shift to Azure while converting the database from Oracle to HANA. It’s efficient and aligns with SAP’s modern migration tools, per SAP Note 1813548 and Azure documentation.
AZ-120 Fit: The exam emphasizes DMO for ABAP systems moving to HANA on Azure.
Portal (NetWeaver 7.3) - SAP heterogeneous system copy:
Reason: NetWeaver 7.3 Java doesn’t support SUM DMO (DMO is ABAP-only). A heterogeneous system copy is the SAP-certified approach for Java stacks, involving database export/import (Oracle to HANA) and platform migration to Azure, per SAP’s migration guides.
AZ-120 Fit: Tests knowledge of distinct migration strategies for ABAP vs. Java SAP systems.
Validation:
ECC: SUM DMO with System Move keeps the process streamlined, avoiding unnecessary updates unless required (not specified).
NetWeaver: Heterogeneous system copy is the only viable option from the list for Java systems, ensuring HANA compatibility.

25
Q

DRAG DROP

A customer has an on-premises SAP environment. The customer plans to migrate SAP to Azure.

You need to prepare the environment for the planned migration.

Which three actions should you perform in sequence before the migration? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Actions

Run a compatibility assessment and resolve any issues.
Create a conditional access policy.
Deploy the core networking components to Azure.
Build Azure virtual machines.
Back up the infrastructure.
Create an ExpressRoute connection.

Answer Area
A

Final Answer Area (Sequence):
1. Run a compatibility assessment and resolve any issues.
2. Deploy the core networking components to Azure.
3. Create an ExpressRoute connection.

Why This Is Correct:
Step 1 (Compatibility Assessment): Ensures the SAP environment is ready for Azure, addressing blockers early (e.g., unsupported SAP versions), aligning with AZ-120’s focus on planning.
Step 2 (Core Networking): Sets up the Azure network foundation, a dependency for connectivity and VM deployment, reflecting Azure’s infrastructure-first approach for SAP.
Step 3 (ExpressRoute): Establishes the connectivity needed for data transfer or replication, a practical pre-migration step for SAP workloads.
Minimizing Exclusions: The excluded actions either happen during or after the migration (VMs, backups) or are unrelated to infrastructure preparation (conditional access).
Logical Order: Assessment informs network design, networking enables connectivity, and ExpressRoute facilitates the migration process.
26
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
You must split data files and database logs between different Azure virtual disks to increase the database read/write performance. ⭘ ⭘
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers. ⭘ ⭘
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency. ⭘ ⭘
Final Answers for Hotspot: You must split data files and database logs between different Azure virtual disks to increase the database read/write performance: Yes Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers: Yes When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency: Yes Statement 1: "You must split data files and database logs between different Azure virtual disks to increase the database read/write performance." Analysis: Context: Applies to SAP databases on Azure VMs (e.g., SAP HANA, SQL Server, Oracle), where storage performance is critical. Azure Virtual Disks: These are managed disks (e.g., Premium SSD, Ultra Disk) attached to VMs. Splitting data files (e.g., /hana/data) and logs (e.g., /hana/log) across separate disks is a common practice to improve I/O performance. Reasoning: Separating data and log files reduces I/O contention, as data reads/writes and log writes have different patterns (sequential vs. random). Azure’s disk performance (IOPS/throughput) is per-disk, so multiple disks increase aggregate performance, especially for write-heavy workloads like SAP HANA logs. SAP HANA on Azure documentation recommends this for optimal performance (e.g., using Ultra Disks for logs and Premium SSDs for data in some cases). “Must” Consideration: While splitting is a best practice and often necessary for high-performance SAP systems (e.g., to meet HANA’s <1ms latency for logs), it’s not an absolute requirement in all cases. Smaller or less demanding systems might function on a single disk, though performance would suffer. For AZ-120, “must” typically aligns with SAP certification or Azure best practices for production, where performance optimization is critical. Answer: Yes Why: Splitting data and logs across disks is a standard requirement to maximize database read/write performance in SAP on Azure, per Microsoft’s SAP deployment guides. Statement 2: "Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers." Analysis: Accelerated Networking: A feature in Azure that offloads network processing to the NIC (via SR-IOV), bypassing the VM’s virtual switch, reducing latency and CPU usage. SAP Servers: Refers to application servers (e.g., NetWeaver PAS/AAS), Web Dispatchers, or database servers (e.g., HANA) on Azure VMs. Latency Reduction: Accelerated Networking lowers latency for traffic to/from the VM by eliminating software switch overhead—effective for traffic between servers (e.g., app server to database) within a VNet or across peered VNets. Supported on specific VM sizes (e.g., D/DSv2, E/ESv3) and OSes (e.g., SLES, RHEL, Windows), common for SAP deployments. “All SAP Servers” Consideration: Most SAP servers benefit (e.g., app-to-DB traffic, HANA replication), but some legacy or small VM sizes don’t support it. However, production SAP systems typically use supported sizes. Latency reduction is consistent for intra-VNet or cross-zone traffic, key for SAP’s distributed architecture. Answer: Yes Why: Enabling Accelerated Networking reduces network latency between SAP servers by optimizing packet processing, a key performance enhancement in Azure SAP deployments (AZ-120 focus on networking). 
Statement 3: "When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency." Analysis: SAP HANA on Azure (Large Instances): Bare-metal servers optimized for HANA, connected to Azure VNets via ExpressRoute or VPN, with dedicated network interfaces. MTU (Maximum Transmission Unit): Defines the largest packet size a network interface can send without fragmentation. Mismatched MTUs cause fragmentation, increasing CPU usage and latency. HANA Large Instances Networking: HANA LIs connect to SAP application servers (on VMs) over a high-speed network. The default MTU for HANA LI is typically 1500 bytes, but Azure supports jumbo frames (e.g., 9000 bytes) for higher throughput. Microsoft’s documentation recommends setting the MTU on HANA LI and app server NICs to match (e.g., 9000) when using jumbo frames to optimize performance. Benefits: Matching MTUs avoids fragmentation, reducing CPU overhead (for packet reassembly) and latency (fewer packets). Critical for HANA’s high-throughput workloads (e.g., data replication, app server queries). “Should” Consideration: It’s a recommended practice, not mandatory, but aligns with performance optimization for HANA LI, especially in production. Answer: Yes Why: Matching MTU settings between HANA Large Instances and SAP application servers reduces CPU utilization and latency, per Azure’s SAP HANA LI deployment best practices (AZ-120 focus on HANA optimization).
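As an illustration of statement 2, Accelerated Networking can be toggled on an existing NIC with Az PowerShell. This is a minimal sketch, assuming a VM size that supports the feature and a deallocated VM; the NIC and resource group names are hypothetical.

# Enable Accelerated Networking on an existing NIC (deallocate the VM first;
# the VM size must support the feature). Names below are hypothetical.
$nic = Get-AzNetworkInterface -Name "sap-app01-nic" -ResourceGroupName "SAP-Production"
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzNetworkInterface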
25
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to migrate an SAP HANA instance to Azure. You need to gather CPU metrics from the last 24 hours from the instance. Solution: You use DBA Cockpit from SAP GUI. Does this meet the goal? Yes No
Final Answer: Yes Why It Meets the Goal: Functionality: DBA Cockpit can retrieve CPU metrics from the last 24 hours, assuming the HANA instance has been running and collecting statistics (default behavior in HANA). Relevance: The solution directly addresses the need to gather CPU data from the existing HANA instance, which is critical for Azure planning (e.g., selecting an M-series VM). SAP Support: DBA Cockpit is an SAP-provided tool, widely used for HANA monitoring, and aligns with AZ-120’s focus on SAP-native administration.
26
You are migrating SAP to Azure. The ASCS application servers are in one Azure zone, and the SAP database server is in a different Azure zone. ASCS/ERS is configured for high availability. During performance testing, you discover increased response times in Azure, even though the Azure environment has better compute and memory configurations than the on-premises environment. During the initial analysis, you discover an increased wait time for Enqueue. What are three possible causes of the increased wait time? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. a missing Enqueue profile disk I/O during Enqueue backup operations misconfigured load balancer rules and health check probes for Enqueue and ASCS active Enqueue replication network latency between the database server and the SAP application servers
Correct Answers (Three Causes): C. Misconfigured load balancer rules and health check probes for Enqueue and ASCS D. Active Enqueue replication E. Network latency between the database server and the SAP application servers Why These Are Correct: C (Load Balancer): In an Azure HA setup across zones, the load balancer is critical for directing enqueue traffic. Misconfiguration (e.g., wrong probe port, timeout settings) can delay lock processing, directly increasing Enqueue wait time. This aligns with AZ-120’s focus on Azure-specific HA configurations for SAP. D (Enqueue Replication): Active replication to ERS is standard in ASCS/ERS HA. If not optimized (e.g., synchronous over a slow link), it can bottleneck the Enqueue Server, a common issue in SAP performance tuning. E (Network Latency): The zonal separation introduces network latency, which can slow down enqueue-related database interactions. Azure’s inter-zone latency, while low, is still a factor for SAP’s real-time lock management, making this a realistic cause.
27
You have an on-premises SAP environment that uses AIX servers and IBM DB2 as the database platform. You plan to migrate SAP to Azure. In Azure, the SAP workloads will use Windows Server and Microsoft SQL Server as the database platform. What should you use to export from DB2 and import the data to SQL Server? R3load Azure SQL Data Warehouse SQL Server Management Studio (SSMS) R3trans
Final Answer: A R3load is the correct tool to export data from IBM DB2 and import it into Microsoft SQL Server for the SAP migration to Azure, aligning with AZ-120 objectives. Correct Answer: A. R3load Why it’s correct: SAP Migration Standard: R3load is the SAP-provided tool for exporting data from a source database (DB2 on AIX) and importing it into a target database (SQL Server on Windows) during a heterogeneous system copy, ensuring SAP compatibility. Process Fit: It generates dump files from DB2 and imports them into SQL Server, handling schema, data, and SAP-specific structures—ideal for this migration. Azure/SAP Alignment: Microsoft’s SAP on Azure migration guides (e.g., for AZ-120) recommend R3load for moving SAP workloads to SQL Server, making it the supported and correct choice. Heterogeneous Support: Specifically designed for OS/DB changes, unlike generic tools like SSMS.
28
HOTSPOT You are designing the backup for an SAP database. You have an Azure Storage account that is configured as shown in the following exhibit.
Account kind: StorageV2 (general purpose v2)
Performance: (●) Standard ( ) Premium
Secure transfer required: ( ) Disabled (●) Enabled
Access tier (default): (●) Cool ( ) Hot
Replication: Geo-redundant storage (GRS)
Azure Active Directory authentication for Azure Files (Preview): (●) Disabled ( ) Enabled
Data Lake Storage Gen2 hierarchical namespace: (●) Disabled ( ) Enabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
Data in the storage account is stored on [answer choice]: hard disk drives (HDDs) / premium solid-state drives (SSDs) / standard solid-state drives (SSDs)
Backups will be replicated [answer choice]: to a storage cluster in the same datacenter / to another Azure region / to another zone within the same Azure region
Final Answers: Data in the storage account is stored on: hard disk drives (HDDs) Backups will be replicated: to another Azure region Why These Are Correct: HDDs: Reason: The Standard performance tier in a StorageV2 account uses HDDs, as per Azure documentation (e.g., “Azure Blob Storage performance tiers”). This matches the Cool tier’s cost-effective design for backups, a common choice for SAP HANA backup storage (e.g., via backint or snapshots). AZ-120 Fit: Tests understanding of storage tiers for SAP workloads, where Standard HDDs suffice for backup scenarios. Another Azure region: Reason: GRS replicates data to a secondary region asynchronously, ensuring backups are protected against regional failures, per “Azure Storage redundancy” documentation. This aligns with SAP DR best practices. AZ-120 Fit: Emphasizes redundancy options for SAP data protection, with GRS being a robust choice.
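For reference, a storage account with the settings shown in the exhibit could be created with Az PowerShell as sketched below; the resource group, account name, and region are hypothetical.

# GPv2 account on the Standard (HDD-backed) performance tier, Cool default
# access tier, GRS replication, and secure transfer enforced.
New-AzStorageAccount -ResourceGroupName "SAP-Backups" `
    -Name "sapbackupstore01" `
    -Location "westeurope" `
    -SkuName "Standard_GRS" `
    -Kind "StorageV2" `
    -AccessTier "Cool" `
    -EnableHttpsTrafficOnly $true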
29
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements | Yes | No Oracle Real Application Clusters (RAC) can be used to provide high availability of SAP databases on Azure. | ( ) | ( ) You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016. | ( ) | ( ) You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12). | ( ) | ( )
Final Answer Area: Oracle Real Application Clusters (RAC) can be used to provide high availability of SAP databases on Azure. ( ) Yes (x) No You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016. (x) Yes ( ) No You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12). ( ) Yes (x) No Why These Answers Are Correct: Statement 1 (No): Oracle RAC is not supported for SAP databases, even on Azure. SAP prefers alternatives like Oracle Data Guard or Azure HA features, aligning with AZ-120’s focus on SAP-certified configurations. Statement 2 (Yes): Windows Server 2016 with Oracle Database is a supported platform for SAP on Azure, per SAP and Microsoft documentation. Statement 3 (No): Per SAP Note 2039619, Oracle Database for SAP workloads on Azure is supported only on Windows Server and Oracle Linux; SLES is not a supported operating system for Oracle-based SAP databases on Azure.
29
DRAG DROP You migrate SAP ERP Central Component (SAP ECC) production and non-production landscapes to Azure. You are licensed for SAP Landscape Management (LaMa). You need to refresh from the production landscape to the non-production landscape. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions | Answer Area From the Azure portal, create a service principal From the Cloud Managers tab in LaMa, add an adapter From SAP Solution Manager, deploy the LaMa adapter Add permissions to the service principal Install and configure LaMa on an SAP NetWeaver instance
Final Sequence: Install and configure LaMa on an SAP NetWeaver instance From the Azure portal, create a service principal Add permissions to the service principal From the Cloud Managers tab in LaMa, add an adapter Why This is Correct: Logical Flow: LaMa must be installed first (Step 1) to provide the platform for management. Azure integration starts with creating a service principal (Step 2) and granting it permissions (Step 3). The Azure adapter (Step 4) connects LaMa to Azure, enabling the refresh operation. SAP LaMa on Azure: Microsoft’s SAP on Azure documentation (e.g., for AZ-120) outlines this process: install LaMa, configure Azure credentials (service principal), and add the Azure adapter to manage landscapes. Refresh Enablement: With this setup, LaMa can orchestrate the refresh (e.g., stop non-production VMs, copy production data, restart), meeting the requirement. AZ-120 Alignment: Tests knowledge of LaMa integration with Azure for SAP landscape management, a key exam topic.
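Steps 2 and 3 of the sequence map to two Az PowerShell calls, sketched below. The display name, role, and scope are hypothetical; Microsoft’s LaMa guidance suggests scoping the assignment to only the resource groups LaMa must manage (a custom role can be narrower than Contributor).

# Step 2: create the service principal that the LaMa Azure adapter will use.
$sp = New-AzADServicePrincipal -DisplayName "lama-connector"
# Step 3: grant it permissions on the resource group(s) LaMa will manage.
New-AzRoleAssignment -ApplicationId $sp.AppId `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "SAP-NonProduction"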
30
You have an SAP environment that is managed by using VMware vCenter. You plan to migrate the SAP environment to Azure. You need to gather information to identify which compute resources are required in Azure. What should you use to gather the information? Azure Migrate and SAP EarlyWatch Alert reports Azure Site Recovery and SAP Quick Sizer SAP Quick Sizer and SAP HANA system replication Azure Site Recovery Deployment Planner and SAP HANA Cockpit
Final Answer: Azure Migrate and SAP EarlyWatch Alert reports Why A is Correct: Azure Migrate: Discovers all VMware VMs (e.g., HANA DB, app servers) via vCenter integration. Collects historical performance data (CPU, memory) to recommend Azure VM sizes (e.g., M128s for HANA). Maps VMware resources to Azure, ensuring compatibility (e.g., SAP-certified VMs). SAP EarlyWatch Alert reports: Provides SAP-specific metrics (e.g., SAPS, CPU load) across the environment (HANA, app servers). Complements Azure Migrate by adding SAP workload context, critical for accurate sizing. Combined Strength: Covers both infrastructure (VMware) and application (SAP) layers, ensuring compute resources are sized for the entire SAP environment on Azure. AZ-120 Alignment: Microsoft recommends Azure Migrate for SAP migrations (per “SAP on Azure migration guide”), and EarlyWatch is a standard SAP tool for performance analysis, matching exam objectives.
31
You plan to migrate an SAP ERP Central Component (SAP ECC) production system to Azure. You are reviewing the SAP EarlyWatch Alert report for the system. You need to recommend sizes for the Azure virtual machines that will host the system. Which two sections of the report should you review? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. Hardware Capacity Patch Levels under SAP Software Configuration Hardware Configuration under Landscape Database and ABAP Load Optimization Data Volume Management
Correct Answers (Two Sections): A. Hardware Capacity C. Hardware Configuration under Landscape Why These Are Correct: A. Hardware Capacity: This section provides utilization data (CPU, memory, I/O) from the running SAP ECC system, showing actual resource demands under production load. For example, if CPU usage peaks at 80% on 16 cores, you can select an Azure VM with sufficient vCPUs (e.g., E16s v5). This is critical for right-sizing VMs in Azure and aligns with AZ-120’s focus on performance-based planning. C. Hardware Configuration under Landscape: This section details the existing on-premises hardware (e.g., 32 GB RAM, 8 CPUs), serving as a baseline for selecting Azure VMs with equivalent or better specs. For instance, if the current system has 64 GB RAM, you might choose an M64s VM. It complements utilization data for a complete sizing picture.
32
You plan to migrate an SAP environment to Azure. You need to recommend a solution to migrate the SAP application servers to Azure. The solution must minimize downtime and changes to the environments. What should you include in the recommendation? Azure Storage Explorer Azure Import/Export service AzCopy Azure Site Recovery
Final Answer: Azure Site Recovery Why Azure Site Recovery is Correct: Minimizes Downtime: ASR replicates the SAP application servers to Azure in near real-time. During migration, you can perform a test failover to validate the setup, then execute a planned failover with minimal disruption (often just minutes), meeting the "minimize downtime" requirement. Minimizes Changes to the Environments: ASR lifts and shifts the existing VMs or servers as-is, including the OS, SAP application files, and configurations. This avoids the need to reinstall or reconfigure the SAP environment in Azure, aligning with the "minimize changes" requirement. SAP Support: Azure Site Recovery is explicitly supported for SAP workloads in the AZ-120 exam context. Microsoft documentation and SAP Notes recommend ASR for migrating SAP systems to Azure, especially for application servers, due to its reliability and minimal impact.
33
You plan to migrate an on-premises SAP development system to Azure. Before the migration, you need to check the usage of the source system hardware, such as CPU, memory, network, etc. Which transaction should you run from SAP GUI? SM51 DB01 DB12 OS07N
Correct Answer: OS07N Why Correct: The OS07N transaction (Operating System Monitor) is the appropriate SAP GUI tool for monitoring hardware resource usage on the source system. It provides real-time and historical data on CPU, memory, network, and other system-level metrics, which are critical for assessing the system’s performance profile before migration to Azure. This information helps in sizing the target Azure VMs (e.g., determining vCPU, RAM, and network requirements) and ensuring the migrated system meets performance needs. For the AZ-120 exam, understanding how to gather such pre-migration data using SAP tools is essential, and OS07N aligns perfectly with this requirement.
34
Your company has an SAP environment that contains the following components: SAP systems based on SAP HANA and SAP Adaptive Server Enterprise (SAP ASE) that run on SUSE Linux Enterprise Server 12 (SLES 12), and multiple SAP applications. The company plans to migrate all the applications to Azure. You need to get a comprehensive list of all the applications that are part of the SAP environment. What should you use? the SAP license information the SAP Solution Manager the data volume management report the network inventory and locations
Final Answer: The SAP Solution Manager Why Correct? Comprehensive Inventory: SAP Solution Manager’s System Landscape Directory (SLD) and Landscape Management Database (LMDB) provide a detailed, centralized view of all SAP applications (e.g., SAP ERP, CRM, BW) and their dependencies (e.g., SAP HANA, SAP ASE) in the environment. This includes both standard SAP applications and custom configurations. Migration Planning: For migrating to Azure, SolMan is a standard tool to assess the SAP landscape, ensuring all applications are identified, sized, and mapped to Azure resources (e.g., VMs, databases). AZ-120 Alignment: The exam focuses on SAP-specific tools and processes for Azure migration. Solution Manager is explicitly designed for managing SAP environments and is recommended by SAP and Microsoft for such tasks. Fit for Purpose: Unlike the other options, SolMan directly addresses the need for a “comprehensive list of all applications,” covering both SAP HANA and SAP ASE-based systems and additional SAP applications.
35
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure. Solution: You create a Recovery Services vault and a backup policy. Does this meet the goal? Yes No
Final Answer: No Why "No" is Correct: The goal is to back up the SAP HANA database on Azure (Large Instances). Creating a Recovery Services vault and a backup policy is a solution tailored for Azure VMs or other supported workloads, not for HANA Large Instances. Per Microsoft documentation and SAP Notes (e.g., SAP Note 2527145), HLI backups leverage storage snapshots and HANA-specific tools, not Azure Backup or Recovery Services vaults. This aligns with the AZ-120 exam’s focus on understanding the distinct deployment models for SAP workloads on Azure, including the unique management of HANA Large Instances.
36
HOTSPOT You plan to deploy a highly available ASCS instance to SUSE Linux Enterprise Server (SLES) virtual machines in Azure. You are configuring an internal Azure Standard Load Balancer for the ASCS instance. How should you configure the internal Standard Load Balancer? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Session persistence: (dropdown options: Client IP / Client IP and Protocol / None) Floating IP (direct server return): (dropdown options: Disabled / Enabled)
Final Answers:
Session persistence: None
Floating IP (direct server return): Enabled

Session persistence — None. Reasoning: In an SAP ASCS HA setup with a failover cluster (e.g., Pacemaker), the load balancer’s role is to direct traffic to the active cluster node hosting the ASCS instance. Session persistence is not required because ASCS is a single active instance at any time (active/passive HA model), and the cluster manages failover. The SAP enqueue server (part of ASCS) maintains its own session state via the enqueue replication server (ERS), so client stickiness at the load balancer level is unnecessary. Azure documentation for SAP HA on SLES with an internal Standard Load Balancer recommends "None" for session persistence, as the cluster ensures continuity and load balancing is simply about reaching the active node.

Floating IP — Enabled. Reasoning: The internal Standard Load Balancer must support the cluster’s failover mechanism (e.g., Pacemaker). Enabling Floating IP is critical because it allows the load balancer’s frontend IP to be dynamically associated with the active cluster node’s backend IP. The ASCS instance uses a virtual IP (VIP) managed by the cluster; when failover occurs, the VIP moves to the new active node, and Floating IP ensures traffic is routed correctly to that node without additional NAT overhead. Azure’s SAP HA architecture for SLES with Pacemaker explicitly requires Floating IP to be enabled on the load balancer for ASCS, as documented in Azure’s SAP deployment guides and validated in AZ-120 scenarios.
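To make this concrete, the sketch below adds a load-balancing rule with Floating IP enabled and default (no) session persistence to an existing internal Standard Load Balancer. The load balancer name, its frontend/backend/probe objects, and port 3200 (the dispatcher port of instance 00) are hypothetical; production ASCS setups typically use HA ports instead of per-port rules.

# Add an ASCS rule: Floating IP on, LoadDistribution Default (= no persistence).
$lb = Get-AzLoadBalancer -Name "lb-ascs" -ResourceGroupName "SAP-Production"
$lb | Add-AzLoadBalancerRuleConfig -Name "ascs-3200" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $lb.Probes[0] `
    -Protocol Tcp -FrontendPort 3200 -BackendPort 3200 `
    -EnableFloatingIP -LoadDistribution Default
$lb | Set-AzLoadBalancer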
36
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements | Yes | No SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume. | ( ) | ( ) SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume. | ( ) | ( ) To enable Write Accelerator, you must use Azure Premium managed disks. | ( ) | ( )
Final Answers:
SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume: No
SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume: Yes
To enable Write Accelerator, you must use Azure Premium managed disks: Yes
Summary of Reasoning: /hana/data (No): Write Accelerator isn’t required for /hana/data in SAP HANA certification; Premium SSDs alone suffice for this volume’s needs. /hana/log (Yes): Write Accelerator is a certification requirement for /hana/log on M-Series VMs to ensure low-latency log writes, critical for SAP HANA performance. Premium SSDs (Yes): Write Accelerator is only supported on Premium SSD managed disks, a technical necessity for enabling the feature.
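For illustration, Write Accelerator is enabled per disk when the disk is attached to the VM. A minimal Az PowerShell sketch, assuming an M-series VM and an existing Premium SSD managed disk (all names and the LUN are hypothetical):

# Attach a /hana/log disk with Write Accelerator (M-series, Premium SSD only).
$vm = Get-AzVM -ResourceGroupName "SAP-Production" -Name "hana-db01"
$logDisk = Get-AzDisk -ResourceGroupName "SAP-Production" -DiskName "hana-db01-log0"
$vm = Add-AzVMDataDisk -VM $vm -Name $logDisk.Name `
    -CreateOption Attach -ManagedDiskId $logDisk.Id -Lun 2 -Caching None `
    -WriteAccelerator
Update-AzVM -ResourceGroupName "SAP-Production" -VM $vm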
37
You have an SAP environment on Azure that uses multiple subscriptions. To meet GDPR requirements, you need to ensure that virtual machines are deployed only to the West Europe and North Europe Azure regions. Which Azure components should you use? Azure resource locks and the Compliance admin center Azure resource groups and role-based access control (RBAC) Azure management groups and Azure Policy Azure Security Center and Azure Active Directory (Azure AD) groups
Final Answer: Azure management groups and Azure Policy Why "Azure management groups and Azure Policy" is Correct: GDPR Requirement: GDPR often involves data residency restrictions, requiring data (e.g., SAP workloads on VMs) to stay within specific regions like West Europe and North Europe. Azure Policy can enforce this by denying VM deployments outside these regions. Multiple Subscriptions: Management groups enable centralized policy enforcement across all subscriptions in the SAP environment, ensuring consistency. AZ-120 Relevance: The exam emphasizes governance, compliance, and resource management for SAP on Azure, making this a textbook solution for controlling deployment locations.
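As a sketch, the built-in "Allowed locations" policy can be assigned at a management group scope with Az PowerShell. The management group and assignment names are hypothetical, and the exact property path on the returned definition object can vary between Az.Resources versions.

# Assign the built-in "Allowed locations" policy to a management group so
# resources can only be deployed to West Europe and North Europe.
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }
New-AzPolicyAssignment -Name 'gdpr-allowed-locations' `
    -Scope '/providers/Microsoft.Management/managementGroups/contoso-sap' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @('westeurope', 'northeurope') }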
38
HOTSPOT You have an Azure Availability Set that is configured as shown in the following exhibit. PS Azure:\> get-azavailabilityset | Select Sku, PlatformFaultDomainCount, PlatformUpdateDomainCount, name, type | FL Sku : Aligned PlatformFaultDomainCount : 2 PlatformUpdateDomainCount : 4 Name : SAP-Databases-AS Type : Microsoft.Compute/availabilitySets Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Virtual machines that share [answer choice] will be susceptible to a storage outage. [Dropdown options] - aligned SKUs - the same fault domain - the same update domain Virtual machines in the Azure Availability Set can support [answer choice]. [Dropdown options] - datacenter outages - managed disks - regional outages
Final Answers: Virtual machines that share [answer choice] will be susceptible to a storage outage. The same fault domain Why Correct: VMs in the same fault domain share hardware, including storage, so a fault domain failure (e.g., storage outage) impacts them together. Virtual machines in the Azure Availability Set can support [answer choice]. Managed disks Why Correct: VMs in an Availability Set can use managed disks, a critical feature for SAP workloads on Azure, while they don’t inherently protect against datacenter or regional outages. Why These Are Correct for AZ-120: Fault Domains: The AZ-120 exam emphasizes understanding Availability Sets for SAP HA, including how fault domains isolate hardware failures (like storage). "The same fault domain" is a precise match for susceptibility to such outages. Managed Disks: SAP deployments on Azure (e.g., databases like in SAP-Databases-AS) often require managed disks for performance and reliability. The exam tests knowledge of Azure features supporting SAP, and this option aligns with VM capabilities in an Availability Set, unlike the outage options which exceed its scope.
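For reference, an availability set with the exhibited properties could be created as follows; the resource group and location are hypothetical. -Sku Aligned is what makes the set compatible with managed disks.

# Recreate SAP-Databases-AS: managed-disk aligned, 2 fault / 4 update domains.
New-AzAvailabilitySet -ResourceGroupName "SAP-Production" `
    -Name "SAP-Databases-AS" `
    -Location "westeurope" `
    -Sku Aligned `
    -PlatformFaultDomainCount 2 `
    -PlatformUpdateDomainCount 4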
38
You plan to deploy an SAP environment on Azure that will use Azure Availability Zones. Which load balancing solution supports the deployment? Azure Basic Load Balancer Azure Standard Load Balancer Azure Application Gateway v1 SKU
Final Answer: Azure Standard Load Balancer Why Correct? Availability Zones Support: The Standard Load Balancer is designed to work with Availability Zones, offering zone-redundant or zonal configurations. This ensures traffic distribution across VMs in different zones, meeting the HA requirement for an SAP environment. SAP Workload Fit: For SAP deployments (e.g., SAP NetWeaver application servers or HANA clustering), the Standard Load Balancer is the recommended solution in Azure documentation and SAP Notes (e.g., SAP Note 2521645). It supports the TCP/UDP traffic typical of SAP components. AZ-120 Alignment: The exam tests knowledge of HA and load balancing options for SAP on Azure. The Standard Load Balancer is explicitly highlighted in Azure’s SAP workload guidance for zoned deployments, making it the correct choice. Scalability and Features: Unlike the Basic SKU, the Standard SKU provides the performance and resilience needed for production SAP systems, and it outclasses the Application Gateway v1 SKU for this scenario.
39
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure. Solution: You configure DB13 to back up directly to a local disk. Does this meet the goal? Yes No
Final Answer: No Why "No" is Correct: The solution (backing up to a local disk via DB13) does create a backup of the SAP HANA database, but it fails to meet the stated goal of backing up to Azure. The backup remains on the HLI server’s local storage, not in Azure’s cloud infrastructure. For Azure (Large Instances), the expected backup solution involves using SAP HANA tools to write backups to Azure Blob Storage or leveraging HLI’s snapshot capabilities, as outlined in Microsoft documentation and SAP Notes (e.g., SAP Note 2527145). This aligns with the AZ-120 exam’s emphasis on understanding Azure-specific features and best practices for SAP HANA deployments, including backup strategies.
39
You plan to deploy an SAP environment on Azure. During a bandwidth assessment, you identify that connectivity between Azure and an on-premises datacenter requires up to 5 Gbps. You need to identify which connectivity method you must implement to meet the bandwidth requirement. The solution must minimize costs. Which connectivity method should you identify? an ExpressRoute connection an Azure site-to-site VPN that is route-based an Azure site-to-site VPN that is policy-based Global VNet peering
Correct Answer: An ExpressRoute connection Why Correct: Bandwidth Requirement: The requirement is "up to 5 Gbps." ExpressRoute supports this easily, with circuit options starting at 50 Mbps and scaling to 10 Gbps or more (e.g., a 5 Gbps circuit is readily available). VPN options (route-based or policy-based) max out at 1.25 Gbps, far below the need. SAP on Azure: For SAP environments, ExpressRoute is the preferred connectivity method due to its high bandwidth, low latency, and reliability, which are critical for SAP workloads (e.g., HANA database replication or application server traffic). The AZ-120 exam emphasizes this for production-grade SAP deployments. Cost Minimization: While ExpressRoute is more expensive than a VPN, the question specifies "meet the bandwidth requirement" as the primary constraint, with cost minimization secondary. Among solutions that meet 5 Gbps, ExpressRoute is the only viable option. A lower-tier ExpressRoute circuit (e.g., 5 Gbps) can be selected to keep costs as low as possible within the requirement, compared to higher-tier circuits (e.g., 10 Gbps). Exclusion of Others: Site-to-site VPNs (route-based or policy-based): Both are limited to ~1.25 Gbps, failing the bandwidth need. Even aggregating multiple VPNs wouldn’t reliably reach 5 Gbps and would increase complexity/cost. Global VNet peering: Irrelevant for on-premises connectivity.
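As an illustrative sketch, a 5 Gbps circuit is requested like this with Az PowerShell; the provider, peering location, names, and region are hypothetical and must match an offering from your connectivity provider.

# Request a 5 Gbps ExpressRoute circuit (Standard tier, metered data).
New-AzExpressRouteCircuit -Name "er-newyork" `
    -ResourceGroupName "SAP-Network" `
    -Location "eastus" `
    -SkuTier Standard -SkuFamily MeteredData `
    -ServiceProviderName "Equinix" `
    -PeeringLocation "New York" `
    -BandwidthInMbps 5000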
40
You have an Azure subscription. You deploy Active Directory domain controllers to Azure virtual machines. You plan to deploy Azure for SAP workloads. You plan to segregate the domain controllers from the SAP systems by using different virtual networks. You need to recommend a solution to connect the virtual networks. The solution must minimize costs. What should you recommend? a site-to-site VPN virtual network peering user-defined routing ExpressRoute
Final Answer: Virtual network peering Why Correct? Cost Minimization: VNet peering within the same region has no additional cost beyond standard Azure networking (free for intra-region traffic in most cases), unlike VPN or ExpressRoute, which require gateways or circuits with ongoing fees. Direct Connectivity: Peering provides a low-latency, high-bandwidth connection between the AD VNet and SAP VNet, meeting the need to link segregated networks. SAP on Azure Fit: For SAP workloads, AD integration is common (e.g., for SAP GUI logins), and peering is the recommended approach in Azure’s SAP architecture guidance to connect infrastructure and application VNets. AZ-120 Alignment: The exam tests cost-effective networking solutions for SAP deployments. VNet peering is highlighted in Azure documentation for SAP workloads as the simplest, cheapest way to connect VNets in this scenario. Simplicity: No additional resources (e.g., gateways) are needed, aligning with the requirement for a straightforward, low-cost solution.
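A minimal sketch of the recommendation, assuming both VNets are in the same region (names are hypothetical). Peering must be created in both directions before traffic flows, and no gateways are required:

# Peer the AD VNet and the SAP VNet in both directions.
$adVnet  = Get-AzVirtualNetwork -Name "vnet-ad"  -ResourceGroupName "Infra"
$sapVnet = Get-AzVirtualNetwork -Name "vnet-sap" -ResourceGroupName "SAP-Production"
Add-AzVirtualNetworkPeering -Name "ad-to-sap" -VirtualNetwork $adVnet -RemoteVirtualNetworkId $sapVnet.Id
Add-AzVirtualNetworkPeering -Name "sap-to-ad" -VirtualNetwork $sapVnet -RemoteVirtualNetworkId $adVnet.Id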
41
HOTSPOT You have an on-premises SAP ERP Central Component (SAP ECC) deployment on servers that run Windows Server 2016 and have Microsoft SQL Server 2016 installed. You plan to migrate the deployment to Azure. You need to identify which migration method and migration option to use. The solution must minimize downtime of the SAP ECC deployment. What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area Migration method: Classical migration SAP Database Migration Option (DMO) SAP Database Migration Option (DMO) with System Move Migration option: Parallel Parallel export/import Sequential unload and load Serial
Correct Answer Migration method: SAP Database Migration Option (DMO) with System Move Migration option: Parallel export/import Migration method: SAP Database Migration Option (DMO) with System Move Why it’s correct: SAP DMO: The Database Migration Option, part of the Software Update Manager (SUM), is designed for SAP system migrations, combining database migration with updates (e.g., to newer SAP versions or HANA). While typically used for heterogeneous migrations (e.g., SQL Server to HANA), DMO supports homogeneous migrations like SQL Server to SQL Server. System Move: DMO with System Move extends DMO to allow the SAP system (application and database) to be exported from the source (on-premises) and imported to a new target environment (Azure VMs). It’s ideal for cloud migrations, as it facilitates moving the entire system to new hardware in Azure. Downtime Minimization: DMO optimizes the migration process by streamlining export/import and leveraging downtime optimization techniques (e.g., parallel processing, incremental exports). System Move allows pre-exporting data while the source system is running, reducing the downtime window during the final cutover to Azure. Migration option: Parallel export/import Why it’s correct: Parallel Export/Import: This option, available in DMO (and classical migrations), uses multiple R3load processes to export data from the source and import it to the target concurrently. It significantly reduces the overall migration time by parallelizing tasks. Downtime Impact: In DMO with System Move, parallel export/import allows data to be extracted and loaded faster, shrinking the downtime window during cutover (e.g., when the source system is stopped for final sync). For SAP ECC, which may have large databases and tables, parallelism leverages Azure’s compute resources (e.g., multi-core VMs) to speed up the process.
42
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Litware, Inc. is an international manufacturing company that has 3,000 employees. Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain. Existing Environment Infrastructure Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL. The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure. Litware already implemented a site-to-site VPN connection between the on-premises network and Azure. SAP Environment Litware currently runs the following SAP products: Enhancement Pack 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0) SAP Extended Warehouse Management (SAP EWM) SAP Supply Chain Management (SAP SCM) SAP NetWeaver Process Integration (PI) SAP Business Warehouse (SAP BW) SAP Solution Manager All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers. You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers. Currently, all SAP applications are in the litware.com domain. Problem Statements The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform. Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations. Requirements Business Goals Litware identifies the following business goals: Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server. Move away from the current infrastructure vendor to increase the stability and availability of the SAP services.
Use the new Environment, Health and Safety (EH&S) in Recipe Management function. Ensure that any migration activities can be completed within a 48-hour period during a weekend. Planned Changes Litware identifies the following planned changes: Migrate SAP to Azure. Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8. Technical Requirements Litware identifies the following technical requirements: Implement automated backups. Support load testing during the migration. Identify opportunities to reduce costs during the migration. Continue to use the litware.com domain for all SAP landscapes. Ensure that all SAP applications and databases are highly available. Establish an automated monitoring solution to avoid unplanned outages. Remove all SAP components from the on-premises network once the migration is complete. Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased. Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure. You plan to migrate an SAP HANA instance to Azure. You need to gather CPU metrics from the last 24 hours from the instance. Solution: You use DBA Cockpit from SAP GUI. Does this meet the goal? A. Yes B. No
Final Answer A. Yes Why DBA Cockpit Meets the Goal DBA Cockpit Overview: DBA Cockpit is a web-based or SAP GUI-based administration tool for managing SAP databases, including SAP HANA. It provides performance monitoring, configuration, and diagnostics. For SAP HANA, DBA Cockpit offers detailed insights into system performance, including CPU usage, memory, disk I/O, and more. CPU Metrics Capability: DBA Cockpit can display historical performance data for SAP HANA, including CPU utilization, typically stored in HANA’s statistics server (_SYS_STATISTICS schema). It supports viewing metrics over a specified time range (e.g., last 24 hours) through its performance monitoring views (e.g., “Performance” or “System Load” tabs). Accessible via SAP GUI (e.g., transaction DBACOCKPIT), it connects to the HANA database to retrieve metrics without requiring Azure-specific tools (since the instance is likely still on-premises). Context Fit: Since the HANA instance is being planned for migration, it’s likely on-premises (e.g., part of SAP BW or a test system). DBA Cockpit is available in the current SAP landscape (e.g., via SAP Solution Manager or ECC) and can monitor HANA instances. No Azure tools (e.g., Azure Monitor) are relevant yet, as the instance hasn’t migrated. Meeting the Goal: DBA Cockpit directly provides 24-hour CPU metrics through its monitoring interface, fulfilling the requirement efficiently. It’s a standard SAP tool, requiring no additional setup beyond existing access to the HANA system.
43
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Litware, Inc. is an international manufacturing company that has 3,000 employees. Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain. Existing Environment Infrastructure Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL. The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure. Litware already implemented a site-to-site VPN connection between the on-premises network and Azure. SAP Environment Litware currently runs the following SAP products: Enhancement Pack 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0) SAP Extended Warehouse Management (SAP EWM) SAP Supply Chain Management (SAP SCM) SAP NetWeaver Process Integration (PI) SAP Business Warehouse (SAP BW) SAP Solution Manager All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers. You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers. Currently, all SAP applications are in the litware.com domain. Problem Statements The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform. Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations. Requirements Business Goals Litware identifies the following business goals: Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server. Move away from the current infrastructure vendor to increase the stability and availability of the SAP services.
Use the new Environment, Health and Safety (EH&S) in Recipe Management function. Ensure that any migration activities can be completed within a 48-hour period during a weekend. Planned Changes Litware identifies the following planned changes: Migrate SAP to Azure. Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8. Technical Requirements Litware identifies the following technical requirements: Implement automated backups. Support load testing during the migration. Identify opportunities to reduce costs during the migration. Continue to use the litware.com domain for all SAP landscapes. Ensure that all SAP applications and databases are highly available. Establish an automated monitoring solution to avoid unplanned outages. Remove all SAP components from the on-premises network once the migration is complete. Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased. Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure. You plan to migrate an SAP ERP Central Component (SAP ECC) production system to Azure. You are reviewing the SAP EarlyWatch Alert report for the system. You need to recommend sizes for the Azure virtual machines that will host the system. Which two sections of the report should you review? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Hardware Capacity B. Patch Levels under SAP Software Configuration C. Hardware Configuration under Landscape D. Database and ABAP Load Optimization E. Data Volume Management
Final Answer A. Hardware Capacity D. Database and ABAP Load Optimization Why Each Section Is Correct A. Hardware Capacity: Purpose: The Hardware Capacity section analyzes the system’s resource utilization, including CPU, memory, and disk performance, based on historical workload data (e.g., peak and average usage). Relevance for Sizing: Provides metrics like CPU consumption (e.g., SAPS—SAP Application Performance Standard), memory usage, and I/O patterns for the ECC system (application and database). Critical for determining the Azure VM type (e.g., M-series for HANA, D-series for application) and size (e.g., vCPUs, RAM) to handle production loads. Example: If the report shows high CPU usage for ABAP processes or database queries, you’d select a larger VM (e.g., M64s vs. M32ts). D. Database and ABAP Load Optimization: Purpose: This section details performance metrics for the database (currently SQL Server, but relevant for HANA planning) and ABAP workload, including query response times, transaction volumes, and bottlenecks. Relevance for Sizing: Database: Identifies database load (e.g., CPU, I/O for SQL Server), which helps estimate HANA requirements post-upgrade (HANA needs more memory but similar compute). For example, heavy SQL Server I/O suggests a VM with high-performance storage (e.g., Premium SSD). ABAP: Analyzes application server load (e.g., dialog steps, batch jobs), informing the number and size of application VMs (e.g., D-series for ASCS and dialog instances). Example: High ABAP batch load (noted as an 8-hour issue in the case study) indicates need for robust application VMs.
44
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Contoso, Ltd. is a manufacturing company that has 15,000 employees. The company uses SAP for sales and manufacturing. Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle. Existing Environment Active Directory The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com. SAP Environment The current SAP environment contains the following components: - SAP Solution Manager - SAP ERP Central Component (SAP ECC) - SAP Supply Chain Management (SAP SCM) - SAP application servers that run Windows Server 2008 R2 - SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12) Problem Statements Contoso identifies the following issues in its current environment: - The SAP HANA environment lacks adequate resources. - The Windows servers are nearing the end of support. - The datacenters are at maximum capacity. Requirements Planned Changes Contoso identifies the following planned changes: - Deploy Azure Virtual WAN. - Migrate the application servers to Windows Server 2016. - Deploy ExpressRoute connections to all of the offices and manufacturing facilities. - Deploy SAP landscapes to Azure for development, quality assurance, and production. All resources for the production landscape will be in a resource group named SAP Production. Business goals Contoso identifies the following business goals: - Minimize costs whenever possible. - Migrate SAP to Azure without causing downtime. - Ensure that all SAP deployments to Azure are supported by SAP. - Ensure that all the production databases can withstand the failure of an Azure region. - Ensure that all the production application servers can restore daily backups from the last 21 days. Technical Requirements Contoso identifies the following technical requirements: - Inspect all web queries. - Deploy an SAP HANA cluster to two datacenters. - Minimize the bandwidth used for database synchronization. 
- Use Active Directory accounts to administer Azure resources. - Ensure that each production application server has four 1-TB data disks. - Ensure that an application server can be restored from a backup created during the last five days within 15 minutes. - Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP. It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur. Proposed Backup Policy An Azure administrator proposes the backup policy shown in the following exhibit. Policy name: ✅ SapPolicy Backup schedule Frequency: Daily Time: 3:30 AM Timezone: (UTC) Coordinated Universal Time Instant Restore Retain instant recovery snapshot(s) for 5 Day(s) Retention range ✅ Retention of daily backup point At: 3:30 AM For: 14 Day(s) ✅ Retention of weekly backup point On: Sunday At: 3:30 AM For: 8 Week(s) ✅ Retention of monthly backup point Week Based - Day Based On: First Sunday At: 3:30 AM For: 12 Month(s) ✅ Retention of yearly backup point Week Based - Day Based In: January On: First Sunday At: 3:30 AM For: 7 Year(s) An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers. { "apiVersion": "2017-03-30", "type": "Microsoft.Compute/virtualMachines", "name": "[parameters('vmname')]", "location": "EastUS", "dependsOn": [ "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]" ], "properties": { "hardwareProfile": { "vmSize": "[parameters('vmSize')]" }, "osProfile": { "computerName": "[parameters('vmname')]", "adminUsername": "[parameters('adminUsername')]", "adminPassword": "[parameters('adminPassword')]" }, "storageProfile": { "ImageReference": { "publisher": "MicrosoftWindowsServer", "offer": "WindowsServer", "sku": "2016-datacenter", "version": "latest" }, "osDisk": { "name": "[concat(parameters('vmname'), '-OS')]", "caching": "ReadWrite", "createOption": "FromImage", "diskSizeGB": 128, "managedDisk": { "storageAccountType": "[parameters('storageAccountType')]" } }, "copy": [ { "name": "DataDisks", "count": "[parameters('diskCount')]", "input": { "caching": "None", "diskSizeGB": 1024, "lun": "[copyIndex('datadisks')]" } } ] } } } You deploy an SAP environment on Azure. Your company has a Service Level Agreement (SLA) of 99.99% for SAP. You implement Azure Availability Zones that have the following components: Redundant SAP application servers ASCS/ERS instances that use a failover cluster Database high availability that has a primary instance and a secondary instance You need to validate the high availability configuration of the ASCS/ERS cluster. What should you use? A. SAP Web Dispatcher B. Azure Traffic Manager C. SAPControl D. SAP Solution Manager
Final Answer C. SAPControl Why SAPControl Is Correct ASCS/ERS Cluster Overview: ASCS (ABAP SAP Central Services) and ERS (Enqueue Replication Server) provide critical SAP services (message server, enqueue server) for SAP NetWeaver-based systems like ECC. In Azure Availability Zones, ASCS/ERS is deployed as a failover cluster (e.g., Windows Server Failover Clustering or Linux Pacemaker on SLES 12) across zones to achieve HA, ensuring 99.99% uptime. The cluster uses shared storage (e.g., Azure Shared Disks) or replication to maintain enqueue state, with automatic failover if the primary ASCS instance fails. SAPControl Overview: SAPControl is a command-line and SOAP-based interface (sapcontrol) for managing and monitoring SAP instances, including ASCS, ERS, application servers, and databases. It interacts with the SAP host agent (saphostagent) on each VM to check instance status, configuration, and HA settings. Validation Role: HA Validation: SAPControl can verify the ASCS/ERS cluster’s configuration and status, such as: Checking if ASCS and ERS instances are running (sapcontrol -nr <instance number> -function GetProcessList). Confirming enqueue replication status (sapcontrol -nr <instance number> -function EnqGetStatistic). Testing failover by simulating a failure (sapcontrol -nr <instance number> -function StopService, then checking if ERS takes over).
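For example, a typical validation pass from a PowerShell session on a cluster node might run the following sapcontrol web methods (instance number 00 is illustrative):

# Validate the ASCS/ERS HA configuration with sapcontrol.
& sapcontrol -nr 00 -function GetProcessList       # are the ASCS processes running?
& sapcontrol -nr 00 -function HAGetFailoverConfig  # report the detected HA/failover setup
& sapcontrol -nr 00 -function HACheckConfig        # run SAP's built-in HA configuration checks
& sapcontrol -nr 00 -function EnqGetStatistic      # inspect enqueue/replication statistics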
45
You have an SAP production landscape in Azure. You plan to migrate the landscape to an SAP RISE managed workload. Which task will be the sole responsibility of the SAP vendor after the migration? A. configuring security monitoring in Microsoft Sentinel B. sizing the Azure resources that host the landscape C. implementing single sign-on (SSO) D. configuring virtual network peering
Final Answer B. sizing the Azure resources that host the landscape Why Sizing Azure Resources Is Correct Task: Sizing the Azure resources that host the landscape: Definition: Sizing involves determining the appropriate VM types (e.g., Esv5-series for HANA), disk configurations (e.g., Premium SSD), and resource allocations (CPU, memory, storage) for the SAP landscape (e.g., ECC, HANA). SAP’s Role: In SAP RISE, SAP is solely responsible for designing and sizing the Azure infrastructure to meet SAP performance and SLA requirements (e.g., 99.99% uptime). SAP uses tools like SAP Quick Sizer and Azure sizing guidelines (SAP Note 1928533) to select certified VMs and storage (e.g., M-series for HANA). Post-migration, SAP adjusts resources dynamically (e.g., scaling VMs for load) within their subscription. Customer’s Role: Customers have no control over SAP’s Azure subscription in RISE. They cannot size or modify VMs, disks, or other resources hosting the SAP landscape. Contoso Fit: For Contoso’s ECC and SCM, SAP sizes VMs (e.g., Dsv5 for app servers, Mv2 for HANA) and storage (e.g., Ultra Disk for HANA logs). This task is fully owned by SAP post-migration, aligning with RISE’s managed model.
46
HOTSPOT - You have the following Azure Resource Manager template. { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {}, "resources": [ { "apiVersion": "2016-01-01", "type": "Microsoft.Storage/storageAccounts", "name": "[concat(copyIndex(), 'storage', uniqueString(resourceGroup().id))]", "location": "[resourceGroup().location]", "sku": { "name": "Premium_LRS" }, "kind": "Storage", "properties": {}, "copy": { "name": "storagecopy", "count": 6, "mode": "Serial", "batchSize": 1 } } ] } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Statements Six storage accounts will be created. The storage accounts will be created in parallel. The storage accounts will be replicated to multiple regions.
Correct Answers
Six storage accounts will be created: Yes
The storage accounts will be created in parallel: No
The storage accounts will be replicated to multiple regions: No

Statement 1: Six storage accounts will be created — Yes. The copy block specifies "count": 6, meaning the template will create six instances of the storage account resource. Each account has a unique name generated by concat(copyIndex(), 'storage', uniqueString(resourceGroup().id)), ensuring no naming conflicts (e.g., 0storage..., 1storage..., through 5storage...).

Statement 2: The storage accounts will be created in parallel — No. The copy block includes "mode": "Serial", which forces the storage accounts to be created sequentially (one after another). Additionally, "batchSize": 1 confirms that only one account is created at a time before moving to the next. Parallel creation (the default for ARM templates) would occur if mode were "Parallel" or unspecified, creating all accounts simultaneously if dependencies allow. Serial mode is used when order matters or resources depend on prior creations, though no dependencies are specified here.

Statement 3: The storage accounts will be replicated to multiple regions — No. The SKU is specified as "name": "Premium_LRS", where LRS stands for Locally Redundant Storage. LRS replicates data three times within a single data center in one Azure region, not across multiple regions. Replication options: ZRS (Zone-Redundant) replicates across Availability Zones in one region; GRS (Geo-Redundant) replicates to a secondary region (multiple regions); RA-GRS is read-access GRS. Premium_LRS only supports local redundancy, not multi-region.
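For reference, deploying a template like this is a single CLI call (the resource group and file names below are placeholders); the serial copy behavior is then visible in the timestamps of the deployment's operations:

# Deploy the template; the six storage accounts are created one at a time
# because of "mode": "Serial" and "batchSize": 1.
az deployment group create \
  --resource-group <resource-group> \
  --template-file storage-copy.json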
47
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Hot Area: Answer Area Statements:
- Azure AD Connect is required to sign into Linux virtual machines hosted in Azure.
- An SAP application server that runs on a Linux virtual machine in Azure must be joined to Active Directory.
- Before you can sign into an SAP application server that runs on a Linux virtual machine in Azure, you must create a Managed Service Identity (MSI).
Correct Answers Azure AD Connect is required to sign into Linux virtual machines hosted in Azure: No An SAP application server that runs on a Linux virtual machine in Azure must be joined to Active Directory: No Before you can sign into an SAP application server that runs on a Linux virtual machine in Azure, you must create a Managed Service Identity (MSI): No For the AZ-120 exam, understanding identity integration for SAP on Azure is critical: Statement 1 (No): Azure AD Connect is for hybrid identity, not Linux VM access, which uses SSH or Azure AD directly. Statement 2 (No): SAP on Linux doesn’t mandate AD join, offering flexibility in authentication, key for SAP’s platform-agnostic design. Statement 3 (No): Managed Identities are for service authentication, not user access, aligning with SAP’s operational model on Azure.
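As a side illustration of statement 1, Azure AD sign-in to an Azure Linux VM is enabled with a VM extension, not with Azure AD Connect. A minimal sketch (resource group and VM names are placeholders):

# The Azure AD login extension requires a system-assigned managed identity
# on the VM (an infrastructure detail of the extension, not a prerequisite
# for users signing in to SAP itself).
az vm identity assign --resource-group <resource-group> --name <linux-vm>

# Install the Azure AD SSH login extension on the Linux VM.
az vm extension set \
  --publisher Microsoft.Azure.ActiveDirectory \
  --name AADSSHLoginForLinux \
  --resource-group <resource-group> \
  --vm-name <linux-vm>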
48
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area Statements:
- Oracle Data Guard can be used to provide high availability of SAP databases on Azure.
- You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016.
- You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12).
Correct Answers:
Oracle Data Guard can be used to provide high availability of SAP databases on Azure: Yes
You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016: Yes
You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12): No
Why: Oracle Data Guard is a supported high-availability and disaster-recovery option for Oracle-based SAP databases on Azure. However, Oracle Database for SAP workloads on Azure is supported only on Windows Server and Oracle Linux (SAP Note 2039619); SUSE Linux Enterprise Server is not a supported operating system for Oracle, so statement 3 is No.
49
You are deploying SAP Fiori to an SAP environment on Azure. You are configuring SAML 2.0 for an SAP Fiori instance named FPP that uses client 100 to authenticate to an Azure Active Directory (Azure AD) tenant. Which provider name should you use to ensure that the Azure AD tenant recognizes the SAP Fiori instance? A. https://FPP B. ldap://FPP C. https://FPP100 D. ldap://FPP-100
Correct Answer: A. https://FPP Why “https://FPP” is Correct? Meets the Requirement: The provider name https://FPP (representing the Fiori instance’s URL, e.g., https://fpp.company.com) ensures Azure AD recognizes the SAP Fiori instance (FPP) as a service provider in the SAML 2.0 configuration. Enables SSO, allowing users to authenticate with Azure AD credentials and access Fiori. SAML Standard: SAML Entity IDs are typically HTTPS URLs reflecting the SP’s access point, without internal parameters like SAP client numbers. https://FPP matches the Fiori instance’s hostname, ensuring consistency between SAP and Azure AD configurations. SAP Fiori Fit: Aligns with SAP Fiori’s web-based architecture, where the frontend URL (e.g., https://fpp.company.com) serves as the SAML identifier. The client (100) is handled by the SAP backend (e.g., via user mapping or default client settings), not the SAML provider name. Azure AD Integration: Simplifies configuration by using a clean URL as the Entity ID, avoiding mismatches in SAML metadata. Supported by Azure AD’s Enterprise Applications for SAML SSO.
50
You have an SAP environment on Azure. Your on-premises network connects to Azure by using a site-to-site VPN connection. You need to alert technical support if the network bandwidth usage between the on-premises network and Azure exceeds 900 Mbps for 10 minutes. What should you use? A. Azure Extension for SAP B. Azure Network Watcher C. Azure Monitor D. NIPING
Final Answer C. Azure Monitor Why “Azure Monitor” is Correct? Meets the Requirement: Azure Monitor collects bandwidth metrics for the Azure VPN Gateway (e.g., GatewayS2SBandwidth, TunnelEgressBytes, TunnelIngressBytes), enabling monitoring of site-to-site VPN throughput. Supports metric alerts to notify technical support when bandwidth exceeds 900 Mbps for 10 minutes, using customizable conditions and action groups. SAP Environment Fit: Ensures the VPN connection between the on-premises network and Azure-based SAP environment (e.g., NetWeaver, HANA) is monitored for performance issues (e.g., saturation affecting SAP traffic). Critical for hybrid SAP deployments, where VPN bandwidth impacts application performance (e.g., RFC calls, replication).
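A hedged CLI sketch of such an alert (the gateway resource ID and action group are placeholders, and the metric name — assumed here to be the VPN gateway's AverageBandwidth metric, reported in bytes per second — should be verified against the current Azure Monitor metrics list):

# 900 Mbps ≈ 112,500,000 bytes per second.
az monitor metrics alert create \
  --name vpn-bandwidth-over-900mbps \
  --resource-group <resource-group> \
  --scopes <vpn-gateway-resource-id> \
  --condition "avg AverageBandwidth > 112500000" \
  --window-size 10m \
  --evaluation-frequency 5m \
  --action <action-group-resource-id> \
  --description "S2S VPN bandwidth exceeded 900 Mbps for 10 minutes"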
51
You plan to migrate an SAP environment to Azure. You need to create a design to facilitate end-user access to SAP applications over the Internet, while restricting user access to the virtual machines of the SAP application servers. What should you include in the design? A. Configure a public IP address for each SAP application server B. Deploy an internal Azure Standard Load Balancer for incoming connections C. Use an SAP Web Dispatcher to route all incoming connections D. Configure point-to-site VPN connections for each user
Correct Answer: C. Use an SAP Web Dispatcher to route all incoming connections Why “Use an SAP Web Dispatcher to route all incoming connections” is Correct? Meets the Requirements: End-User Access: The SAP Web Dispatcher provides a public-facing entry point (e.g., https://fiori.company.com) for Internet-based access to SAP applications (Fiori, SAP GUI), routing traffic to application servers. Restrict VM Access: Application server VMs are in a private subnet with no public IPs, accessible only via the Web Dispatcher. NSGs block direct access to VM management ports (e.g., RDP, SSH). SAP Environment Fit: The SAP Web Dispatcher is the SAP-recommended component for routing external traffic to SAP applications, supporting HTTP/HTTPS (Fiori) and DIAG/RFC (SAP GUI). Standard in SAP-on-Azure architectures, ensuring secure and scalable access to NetWeaver, S/4HANA, or Fiori. Security and Scalability: Isolates application servers in a private VNet, reducing attack surface. Supports load balancing and high availability with multiple Web Dispatchers. Integrates with Azure security (e.g., NSGs, Application Gateway WAF, Azure AD SSO). Implementation: Deploy the Web Dispatcher VM in a public subnet (or behind Azure Load Balancer). Configure the Web Dispatcher to route to application servers (e.g., PAS, AAS). Use NSGs to allow only Web Dispatcher traffic to the application servers (an NSG rule sketch follows). Optionally, add Azure Application Gateway for WAF or Azure Firewall for additional security.
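The NSG part of the implementation can be sketched as follows (resource names, priorities, and the 10.0.1.0/24 Web Dispatcher subnet prefix are illustrative assumptions):

# Allow inbound traffic to the SAP application server subnet only from the
# Web Dispatcher subnet.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name nsg-sap-app \
  --name AllowWebDispatcherInbound \
  --priority 100 \
  --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes 10.0.1.0/24 \
  --destination-port-ranges '*'

# Explicitly deny all other inbound traffic (including RDP/SSH from the Internet).
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name nsg-sap-app \
  --name DenyAllInbound \
  --priority 4096 \
  --direction Inbound --access Deny --protocol '*' \
  --source-address-prefixes '*' \
  --destination-port-ranges '*'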
52
You have an SAP Cloud Platform subscription and an Azure Active Directory (Azure AD) tenant. You need to ensure that Azure AD users can access SAP Cloud App by using their Azure AD credentials. What should you configure? A. A conditional access policy B. Active Directory Domain Services (AD DS) C. SAP Cloud Connector D. SAP Cloud Platform Identity Authentication
Correct Answer: D. SAP Cloud Platform Identity Authentication Why Correct: Role: IAS acts as the authentication broker for SCP, enabling SSO by delegating authentication to Azure AD. Configuration: Configure IAS to trust Azure AD as a corporate IdP (import Azure AD SAML metadata). Register the SCP app in Azure AD (Enterprise Applications) and configure SAML SSO with IAS metadata. In the SCP subaccount, set IAS as the authentication service. Outcome: Users access SCP apps, are redirected to IAS, then to Azure AD for authentication, using their Azure AD credentials. SAP Fit: Standard for SCP/BTP applications, per SAP and Microsoft documentation.
53
You migrate an SAP environment to Azure. You need to inspect all the outbound traffic from the SAP application servers to the Internet. Which two Azure resources should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. Choose one or more answers A. Azure user-defined routes B. Network Performance Monitor C. Azure Traffic Manager D. a Web Application Firewall (WAF) for Azure Application Gateway E. Azure Load Balancer NAT rules F. Azure Firewall
A. Azure user-defined routes F. Azure Firewall Why Correct: A user-defined route (UDR) on the SAP application server subnet sends all outbound traffic (0.0.0.0/0) to the Azure Firewall as the next hop, and Azure Firewall then inspects, logs, and filters that traffic before it reaches the Internet. Together, the two resources force and inspect every outbound flow; see the sketch below.
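A minimal sketch of the routing half (the firewall's private IP, VNet, and subnet names are placeholders): a default route on the application server subnet sends all Internet-bound traffic to Azure Firewall for inspection.

# Create a route table with a default route whose next hop is the firewall.
az network route-table create --resource-group <resource-group> --name rt-sap-app

az network route-table route create \
  --resource-group <resource-group> \
  --route-table-name rt-sap-app \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>

# Associate the route table with the SAP application server subnet.
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <sap-vnet> \
  --name <app-server-subnet> \
  --route-table rt-sap-app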
54
You have an SAP production landscape that uses SAP HANA databases on Azure. You need to deploy a disaster recovery solution to the SAP HANA databases. The solution must meet the following requirements:
- Support failover between Azure regions.
- Minimize data loss in the event of a failover.
What should you deploy? A. HANA system replication that uses synchronous replication B. HANA system replication that uses asynchronous replication C. Always On availability group D. Azure Site Recovery
Correct Answer: B. HANA system replication that uses asynchronous replication Why Correct? Meets Requirements: Failover between Azure regions: HANA system replication (HSR) supports running a secondary HANA system in a different Azure region, enabling cross-region failover. Minimize data loss: Over inter-region distances, network latency makes synchronous replication impractical, because every transaction would have to wait for the cross-region round trip; SAP and Microsoft therefore recommend asynchronous replication for cross-region disaster recovery. Asynchronous replication keeps the recovery point objective (RPO) low (near zero under normal load) without degrading primary performance, which satisfies the requirement to minimize (not eliminate) data loss. Synchronous replication (RPO = 0) is the right choice for high availability within a region or across Availability Zones, but not between regions. A registration sketch follows.
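For orientation, the replication mode is chosen when the secondary site is registered. A minimal sketch, assuming placeholder hostnames and site names, instance number 00, and execution as the <sid>adm user:

# On the primary site: enable HANA system replication.
hdbnsutil -sr_enable --name=SITE_PRIMARY

# On the secondary system in the other Azure region: register against the
# primary with asynchronous replication, as recommended for cross-region DR.
hdbnsutil -sr_register \
  --remoteHost=<primary-host> \
  --remoteInstance=00 \
  --replicationMode=async \
  --operationMode=logreplay \
  --name=SITE_DR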
55
You have an SAP production landscape that uses SAP HANA databases on Azure. The HANA database server is a Standard_M32ms Azure VM that has 864 GB of RAM. The HANA database is 400 GB. You expect the database to grow by 40 percent during the next 12 months. You resize the HANA database server VM to Standard_M64ms with 1,024 GB of RAM. You need to recommend additional changes to minimize performance degradation caused by database growth. What should you recommend for the HANA database server? A. Add a secondary network interface. B. Configure additional disks. C. Add a scale-out node. D. Increase the number of vCPUs.
Correct Answer: B. Configure additional disks Why Correct? Storage Bottleneck: A 40% database growth (from 400 GB to 560 GB) increases IOPS and throughput demands on HANA’s data and log volumes. The current disk configuration may not handle this growth, leading to performance degradation (e.g., slower queries or transaction delays). Azure Best Practices: For SAP HANA on Azure, Microsoft recommends configuring Premium SSDs or Ultra Disks with sufficient IOPS/throughput for growing databases. Adding disks (or upgrading to Ultra Disk) ensures storage performance scales with the database size. Alignment with Resize: The Standard_M64ms supports higher disk performance than the M32ms. Adding disks leverages this capability, optimizing the system holistically.
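A hedged sketch of adding capacity (disk name, size, and SKU are placeholders; in practice the new disk would be added to the LVM stripe set behind /hana/data to raise aggregate IOPS and throughput):

# Attach an additional Premium SSD data disk to the HANA VM.
az vm disk attach \
  --resource-group <resource-group> \
  --vm-name <hana-vm> \
  --name hana-data-disk-4 \
  --new \
  --size-gb 512 \
  --sku Premium_LRS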
56
You deploy an SAP environment on Azure. Your company has a Service Level Agreement (SLA) of 99.99% for SAP. You implement Azure Availability Zones that have the following components:
- Redundant SAP application servers
- ASCS/ERS instances that use a failover cluster
- Database high availability that has a primary instance and a secondary instance
You need to initiate failover of the ASCS/ERS cluster. What should you use? A. SAP Web Dispatcher B. SAP Solution Manager C. SAPControl D. Azure Traffic Manager
C. SAPControl Why Correct: SAPControl is a command-line tool provided by SAP to manage and control SAP system instances, including the SAP Central Services (ASCS) and Enqueue Replication Server (ERS) instances. In an SAP high-availability setup on Azure, the ASCS/ERS instances are typically configured in a failover cluster (e.g., using Windows Server Failover Clustering or Linux Pacemaker). SAPControl allows administrators to manually initiate failover of the ASCS/ERS cluster by stopping or relocating the cluster resources, triggering the cluster to move the ASCS or ERS instance to another node in the cluster. This is critical for testing or managing failover scenarios to ensure the SLA of 99.99% is met.
57
You plan to deploy a highly available SAP HANA system in a scale-out configuration with standby to Azure virtual machines. You need to implement shared storage volumes by using Azure NetApp Files. What should you create first? A. a service endpoint B. a private endpoint C. a delegated subnet D. a private link
Correct Answer: C. A delegated subnet Why Correct? Meets Requirements: To use Azure NetApp Files for shared storage volumes in an SAP HANA scale-out deployment, you must first create a subnet in the VNet and delegate it to the Microsoft.NetApp/volumes service. This allows ANF to provision and manage storage volumes that can be mounted as NFS shares for SAP HANA file systems. The delegated subnet is the foundational step before creating ANF accounts, capacity pools, or volumes.
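A minimal sketch of that first step (VNet, subnet name, and address prefix are placeholders):

# Create a subnet delegated to Azure NetApp Files; ANF volumes can only be
# provisioned into a subnet that carries this delegation.
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <sap-vnet> \
  --name anf-subnet \
  --address-prefixes 10.0.5.0/28 \
  --delegations "Microsoft.NetApp/volumes"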
58
You migrate an on-premises instance of SAP HANA to HANA on Azure (Large Instances). You project that you will replace the HANA Large Instances with two smaller instances in two years. You need to recommend a solution to minimize SAP HANA costs for the next three years. What should you include in the recommendation? A. Azure Hybrid Benefit B. a one-year reservation that has capacity priority C. a one-year reservation that has instance size flexibility D. a three-year reservation that has instance size flexibility
Correct Answer: D. A three-year reservation that has instance size flexibility Why Correct? Cost Minimization: A 3-year reservation offers the highest discount compared to 1-year reservations or pay-as-you-go pricing, significantly reducing costs for HANA Large Instances over the entire three-year period. Instance Size Flexibility: The flexibility to change instance sizes allows the reservation to apply to the current Large Instances for two years and then to the two smaller instances in year three, ensuring continuous savings despite the infrastructure change. Alignment with Plan: The solution accounts for the planned replacement in two years, avoiding the need to cancel or repurchase reservations, which could incur costs or complexity.
59
You have an SAP HANA on Azure (Large Instances) deployment that has two Type II SKU nodes. Each node is provisioned in a separate Azure region. You need to monitor storage replication for the deployment. What should you use? A. rear B. xfsdump C. azacsnap D. tar
C. azacsnap Why Correct: azacsnap is a specialized Azure tool designed for managing and monitoring storage snapshots and replication for SAP HANA on Azure (Large Instances). It is specifically used to orchestrate snapshot-based backups and monitor storage replication between HANA Large Instance nodes, including those deployed across different Azure regions for high availability or disaster recovery (e.g., HANA System Replication or storage-based replication). The tool integrates with the Azure infrastructure and the storage systems used by HANA Large Instances (e.g., NetApp or IBM storage) to provide visibility into replication status, consistency, and health, ensuring data integrity and compliance with SAP HANA requirements. Relevance to SAP HANA on Azure: For HANA Large Instances, storage replication is a critical component of high availability and disaster recovery setups, especially when nodes are in separate regions. azacsnap provides commands to monitor replication status, validate snapshot consistency, and manage failover scenarios. Microsoft documentation explicitly recommends azacsnap for snapshot management and replication monitoring in HANA Large Instance deployments.
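For reference, a replication check is a single azacsnap call. A sketch from memory (the config file name shown is the tool's default; verify the exact option spelling against the current azacsnap documentation):

# Query the status of storage replication between the two HLI nodes.
azacsnap -c details --details replication --configfile azacsnap.json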
60
You plan to deploy an SAP landscape to Azure VMs. The landscape will contain three VMs that will have the SAP Web Dispatcher role. The VMs will be in the same availability set. You need to configure a traffic distribution solution for the Web Dispatcher. The solution must meet the following requirements:
- Users must be able to access the Web Dispatcher URL in the event of a VM outage.
- Costs must be minimized.
What should you use? A. Basic Azure Load Balancer B. Azure Web Application Firewall (WAF) C. Azure Application Gateway D. Azure Standard Load Balancer
Correct Answer: A. Basic Azure Load Balancer Why Correct? Meets Requirements: High availability during VM outage: The Basic Load Balancer uses health probes to monitor the SAP Web Dispatcher VMs and automatically routes traffic to healthy instances if a VM fails. This ensures users can access the Web Dispatcher URL even during an outage. Cost minimization: The Basic Load Balancer is free, making it the most cost-effective option compared to the Standard Load Balancer or Application Gateway, which incur charges.
61
You have an SAP production landscape on Azure that uses a two-node Pacemaker cluster. You need to ensure that the cluster automatically fails over for Azure scheduled events. What should you configure on each node? A. the Azure VM extension for SAP B. the Azure Monitor agent C. the Azure fence agent D. the Linux diagnostics extension (LAD)
Correct Answer: C. The Azure fence agent Why Correct? Direct Integration with Pacemaker: The Azure fence agent is specifically designed to integrate Pacemaker with Azure’s infrastructure. It allows Pacemaker to query Azure scheduled events through the Azure Metadata Service and trigger a failover when a node is about to be impacted by maintenance (e.g., a reboot). Failover Enablement: By configuring the Azure fence agent on each node, Pacemaker can isolate a node (e.g., by powering it off or marking it as unavailable) and move SAP resources to the other node, ensuring high availability during Azure scheduled events. Azure SAP HA Best Practice: For SAP workloads on Azure, Microsoft recommends using the Azure fence agent in Pacemaker clusters to handle fencing and failover scenarios, including planned maintenance events. This is documented in Azure’s SAP HA guides.
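A hedged sketch of that configuration on SLES (subscription and resource group values are placeholders, and the exact parameter set should be taken from Microsoft's Pacemaker setup guide for your distribution):

# Create the STONITH resource for the Azure fence agent, authenticating with
# the VMs' managed identity (msi=true); Pacemaker uses it to isolate a node
# so that SAP resources can fail over cleanly.
sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm \
  params msi=true \
  subscriptionId="<subscription-id>" \
  resourceGroup="<resource-group>" \
  pcmk_monitor_retries=4 \
  pcmk_action_limit=3 \
  op monitor interval=3600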
62
You have two Azure virtual machines named VM1 and VM2. VM1 hosts a single-container database (SDC) SAP HANA instance named sd1. VM2 hosts an SDC HANA instance named sd2. Azure Backup is enabled for the HANA databases on VM1 and VM2. You need to restore sd1 to sd2 and overwrite the database instance on VM2. What should you do first in the Azure portal? A. Upgrade sd2 to Multiple Container Database (MDC). B. From Restore Configuration, set Restored DB Name to sd1(sdc). C. From Restore Configuration, set Restored DB Name to sd2(sdc). D. Rename the SystemDB database of sd2.
B. From Restore Configuration, set Restored DB Name to sd1(sdc) Why Correct: When restoring an SAP HANA database using Azure Backup, the Azure portal provides a "Restore Configuration" step in the restore workflow for SAP HANA databases. To overwrite the existing database instance (sd2) on VM2 with the backup of sd1 from VM1, you need to specify the target database name in the restore configuration. Since the goal is to restore sd1 and overwrite sd2, you select the restore point for sd1 and configure the "Restored DB Name" to match the source database name, sd1(sdc). This ensures that the restore operation uses the backup of sd1 to overwrite the database on VM2, effectively replacing sd2 with sd1. Azure Backup for SAP HANA supports this overwrite scenario by allowing the target database to be specified during the restore process. Relevance to SAP HANA on Azure: Azure Backup for SAP HANA is integrated with the Azure portal and uses the SAP HANA Backint interface to manage backups and restores. The restore process involves selecting the source database’s backup (from VM1) and configuring the target database on VM2. Specifying the correct database name (sd1(sdc)) ensures the restore operation targets the correct backup and overwrites the existing database instance (sd2) on VM2. This aligns with Microsoft’s recommended process for HANA database restoration in Azure.
63
You have an SAP production landscape in Azure that contains an SAP HANA database. You need to configure a Recovery Services vault that will be used to back up the HANA database server. The solution must ensure that the virtual machine and the HANA database can be restored manually to a paired region. What should you do first? A. Create a recovery plan. B. Enable Cross Region Restore. C. Configure zone-redundant storage (ZRS) replication for the storage. D. Create a Recovery Services vault in the paired region.
Correct Answer: D. Create a Recovery Services vault in the paired region Why Correct (with Caveat)? Meets Requirements: A Recovery Services vault is the foundational component for backing up the SAP HANA database server (VM and database). It must be created before any backup policies, recovery plans, or restore capabilities (like Cross Region Restore) can be configured. The requirement to restore to a paired region is supported by the vault’s default GRS replication, which copies backup data to the paired region, and by enabling Cross Region Restore after vault creation. Caveat on “Paired Region”: Standard Azure practice is to create the Recovery Services vault in the primary region where the SAP HANA VM is deployed. The vault uses GRS to replicate backup data to the paired region, enabling restore to that region. The phrasing of “in the paired region” in Option D is likely a misphrasing or a test distractor. In the absence of an option explicitly stating “create a vault in the primary region,” Option D is the closest, as creating the vault is the first step, and the paired region restore capability is achieved via GRS and CRR.
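A minimal CLI sketch (vault name, resource group, and region are placeholders): create the vault first, then make sure its storage is geo-redundant and Cross Region Restore is on before protecting any items.

# Create the Recovery Services vault (per the caveat above, normally in the
# primary region; GRS then copies backup data to the paired region).
az backup vault create \
  --resource-group <resource-group> \
  --name sap-hana-vault \
  --location <primary-region>

# Enforce geo-redundant storage and enable Cross Region Restore.
az backup vault backup-properties set \
  --resource-group <resource-group> \
  --name sap-hana-vault \
  --backup-storage-redundancy GeoRedundant \
  --cross-region-restore-flag true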
64
You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure. Solution: You use a third-party tool that uses Backint to back up the SAP HANA database to Azure storage. Does this meet the goal? A. No B. Yes
Correct Answer: B. Yes Why Correct? Backint Integration: Backint is an SAP-certified API for HANA backups, ensuring that the third-party tool can properly back up the HANA database (data, logs, and catalog) while meeting SAP’s requirements for consistency and recovery. Azure Storage Compatibility: The solution explicitly states that backups are written to Azure storage, which aligns with using Azure Blob Storage, a supported and cost-effective option for HANA backups. Third-Party Tool: In the context of the AZ-120 exam, a third-party tool using Backint is assumed to be SAP-certified (e.g., Commvault, Veritas) unless otherwise specified. Such tools are designed to integrate HANA with Azure storage seamlessly. Large Instances: SAP HANA on Azure Large Instances supports Backint-based backups to Azure storage, with no unique restrictions compared to HANA on VMs.
65
You recently migrated an SAP HANA environment to Azure. You plan to back up SAP HANA databases to disk on the virtual machines, and then move the backup files to Azure Blob storage for retention. Which command should you run to move the backups to the Blob storage? A. robocopy B. scp C. backint D. azcopy
D. azcopy Why Correct: azcopy is a command-line utility provided by Microsoft specifically designed for copying data to and from Azure Blob storage, File storage, and other Azure storage services. It is highly efficient for transferring large files, such as SAP HANA database backup files, from an Azure virtual machine’s disk to Azure Blob storage. azcopy supports authentication with Azure Active Directory (AAD), Managed Identities, or Shared Access Signatures (SAS), making it secure and well-integrated with Azure’s ecosystem. It also provides features like resumable transfers, parallel uploads, and performance optimization, which are critical for handling large HANA backup files.
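A minimal sketch (storage account, container, and SAS token are placeholders):

# Upload the local HANA backup directory to a blob container; --recursive
# walks the directory tree, and transfers are parallel and resumable.
azcopy copy "/hana/backup/data" \
  "https://<account>.blob.core.windows.net/hana-backups?<sas-token>" \
  --recursive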
66
HOTSPOT You plan to migrate an SAP database from Oracle to Microsoft SQL Server by using the SQL Server Migration Assistant (SSMA). You are configuring a Proof of Concept (PoC) for the database migration. You plan to perform the migration multiple times as part of the PoC. You need to ensure that you can perform the migrations as quickly as possible. The solution must ensure that all Oracle schemas are migrated. Which migration method and migration mode should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
Migration method: Script / Synchronization
Migration mode: Full / Default / Optimistic
Correct Answer: Migration method: Synchronization Migration mode: Default
67
HOTSPOT You have an on-premises deployment of SAP HANA that contains a production environment and a development environment. You plan to migrate both environments to Azure. You need to identify which Azure virtual machine series to use for each environment. The solution must meet the following requirements:
- Minimize costs.
- Be SAP HANA-certified.
What should you identify for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
SAP HANA Developer Edition: D-series / M-series / NC-series
SAP S/4HANA: D-series / M-series / NC-series
Correct Answer: SAP HANA Developer Edition: M-series SAP S/4HANA: M-series Why Correct? SAP HANA Certification: M-series VMs are explicitly certified by SAP for running HANA workloads (per SAP Note 1944799 and Azure’s SAP HANA documentation). They support both production (SAP S/4HANA) and non-production (Developer Edition) environments. D-series and NC-series are not certified for SAP HANA, as they lack the memory and storage optimizations required for HANA’s in-memory database. Cost Minimization: For the development environment, smaller M-series VMs (e.g., M8ms or M16ms) can be selected to minimize costs while still meeting HANA certification. These are more expensive than D-series but necessary for HANA support. For the production environment, M-series VMs are the most cost-effective option among HANA-certified VMs, as they are designed for HANA and avoid the higher costs of alternatives like HANA Large Instances. Workload Suitability: Development (SAP HANA Developer Edition): Requires less performance, so a smaller M-series VM is sufficient and cost-effective. Production (SAP S/4HANA): Demands high performance, making M-series the appropriate choice for reliability and scalability.
68
DRAG DROP You have an Azure tenant and an SAP Cloud Platform tenant. You need to ensure that users sign in automatically by using their Azure AD accounts when they connect to SAP Cloud Platform. Which four actions should you perform in sequence? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Actions:
- Configure the SAML settings for the Identifier and Reply URL.
- From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
- From the Azure Active Directory admin center, configure the SAP Cloud Platform Identity app to use the Federation Metadata XML file.
- From the Azure Active Directory admin center, download the Federation Metadata XML file.
- From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
Correct Sequence:
1. From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
2. Configure the SAML settings for the Identifier and Reply URL.
3. From the Azure Active Directory admin center, download the Federation Metadata XML file.
4. From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
Why this order: the enterprise app must exist before its SAML settings (the Identifier and Reply URL, taken from the Identity Authentication tenant) can be configured; the Federation Metadata XML reflects those settings, so it is downloaded afterward; finally, the metadata is imported into the Identity Authentication administration console so that Azure AD is trusted as the corporate identity provider.
69
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area Statements:
- SAP supports both SAP HANA backup and storage snapshot options.
- Before you can back up an SAP HANA database by using the snapshot option, you must stop the Azure virtual machines.
- To ensure SAP HANA data consistency when taking storage snapshots, you must freeze the file system.
Conclusion: The correct selections are: Statement 1: Yes Statement 2: No Statement 3: Yes
Why: SAP HANA supports both native backups (file- or Backint-based) and storage snapshots (Statement 1: Yes). A storage snapshot does not require stopping the virtual machines; HANA prepares an internal database snapshot while the system stays online (Statement 2: No). To guarantee that the snapshot is consistent on disk, the file system hosting the data volume must be frozen for the duration of the storage snapshot (Statement 3: Yes). A sketch of these steps follows.
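A hedged sketch of those steps (the hdbuserstore key and mount point are placeholders, and the freeze window should be kept as short as possible):

# 1. Ask HANA to prepare an internal, consistent database snapshot (the
#    system stays online, so the VM is never stopped).
hdbsql -U <hdbuserstore-key> "BACKUP DATA FOR FULL SYSTEM CREATE SNAPSHOT COMMENT 'storage-snapshot'"

# 2. Freeze the data file system so the storage snapshot is consistent on disk.
xfs_freeze -f /hana/data

# ... take the storage snapshot here (e.g., an Azure or ANF snapshot) ...

# 3. Unfreeze the file system, then confirm the HANA snapshot
#    (BACKUP DATA ... CLOSE SNAPSHOT ... SUCCESSFUL).
xfs_freeze -u /hana/data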