test2 Flashcards

(78 cards)

1
Q

This question requires that you evaluate the underlined BOLD text to determine if it is correct.

You have an Azure resource group that contains the virtual machines for an SAP environment.

You must be assigned the Contributor role to grant permissions to the resource group.

Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.

No change is needed
User Access Administrator
Managed Identity Contributor
Security Admin

A

The correct answer is: User Access Administrator

Explanation:
The underlined bold text states: “You must be assigned the Contributor role to grant permissions to the resource group.” This is incorrect. The Contributor role in Azure allows a user to manage resources (e.g., create, delete, or modify virtual machines, storage, etc.) within a resource group, but it does not grant the ability to assign permissions or roles to others. In Azure Role-Based Access Control (RBAC), the ability to grant permissions (i.e., assign roles) requires a role that includes permissions for managing access, such as the Microsoft.Authorization/roleAssignments/* action.

User Access Administrator: This role allows a user to manage access to Azure resources by assigning roles to others. It includes the necessary permissions (e.g., Microsoft.Authorization/*) to grant permissions to a resource group. This makes it the correct choice for the statement to be accurate in the context of the Azure AZ-120 exam, which focuses on planning and administering Azure for SAP workloads. Managing access to resources like virtual machines in an SAP environment often involves assigning roles, and this role fits that requirement.
Contributor: As mentioned, this role can manage resources but cannot assign roles or grant permissions to others. Thus, the original statement is incorrect.
Managed Identity Contributor: This role is specific to managing user-assigned managed identities and does not provide broad permissions to grant access to a resource group. It’s too narrow for this scenario.
Security Admin: This role is related to managing security policies and configurations in Microsoft Defender for Cloud, not for granting permissions to resource groups in the context of RBAC.
Why “User Access Administrator” is correct:
In the context of the AZ-120 exam, which deals with SAP workloads on Azure, you might need to grant permissions to a resource group containing virtual machines to ensure proper management of the SAP environment. The User Access Administrator role aligns with this need because it allows you to delegate access by assigning roles (e.g., Contributor, Reader, or custom roles) to other users, groups, or service principals at the resource group scope. This is a common administrative task in Azure when setting up and securing SAP environments.

Thus, the corrected statement would be: “You must be assigned the User Access Administrator role to grant permissions to the resource group.”
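To make the distinction concrete, here is a minimal PowerShell sketch (user, role, and resource group names are hypothetical) of the role assignment that a User Access Administrator can perform but a Contributor cannot:

```powershell
# Sketch: a User Access Administrator delegating access by assigning the
# Contributor role at resource group scope. All names are hypothetical.
New-AzRoleAssignment `
    -SignInName "sap-admin@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "SAPProduction"
```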

2
Q

You have an SAP environment on Azure.

Your on-premises network connects to Azure by using a site-to-site VPN connection.

You need to alert technical support if the network bandwidth usage between the on-premises network and Azure exceeds 900 Mbps for 10 minutes.

What should you use?

Azure Network Watcher
NIPING
Azure Monitor
Azure Enhanced Monitoring for SAP

A

Correct Answer: Azure Monitor
Why It’s Correct:
Azure Monitor can collect real-time network performance metrics from the site-to-site VPN (via the Virtual Network Gateway), set a custom threshold of 900 Mbps, evaluate it over a 10-minute window, and trigger alerts to technical support. This meets all the requirements of the question.
Azure Network Watcher provides diagnostic tools but lacks native alerting for bandwidth thresholds. NIPING is an SAP-specific latency tool, not a monitoring solution. Azure Enhanced Monitoring for SAP focuses on SAP application metrics, not VPN bandwidth.
For the AZ-120 exam, Azure Monitor is the standard Azure service for monitoring and alerting in SAP-on-Azure deployments, making it the closest and most correct answer.
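As a rough sketch of such an alert rule, assuming the gateway exposes the "AverageBandwidth" metric (reported in bytes per second, so 900 Mbps ≈ 112,500,000 B/s) and using hypothetical resource and action group names:

```powershell
# Sketch: metric alert on a VPN gateway's bandwidth over a 10-minute window.
# Gateway name, resource group, and action group ID are hypothetical.
$gw = Get-AzVirtualNetworkGateway -Name "S2S-GW" -ResourceGroupName "NetworkRG"

$condition = New-AzMetricAlertRuleV2Criteria `
    -MetricName "AverageBandwidth" `
    -TimeAggregation Average `
    -Operator GreaterThan `
    -Threshold 112500000   # roughly 900 Mbps expressed in bytes per second

New-AzMetricAlertRuleV2 -Name "VPN-Bandwidth-Alert" -ResourceGroupName "NetworkRG" `
    -TargetResourceId $gw.Id `
    -WindowSize ([TimeSpan]"00:10:00") -Frequency ([TimeSpan]"00:05:00") `
    -Condition $condition -Severity 2 `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/NetworkRG/providers/microsoft.insights/actionGroups/TechSupport"
```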

3
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an SAP production landscape on-premises and an SAP development landscape on Azure.

You deploy a network virtual appliance to act as a firewall between the Azure subnets and the on-premises network.

You need to ensure that all traffic is routed through the network virtual appliance.

Solution: You create an Azure Traffic Manager profile.

Does this meet the goal?

Yes
No

A

The correct answer is: No

Explanation:
The goal is to ensure that all traffic between the Azure subnets and the on-premises network is routed through the network virtual appliance (NVA), which is acting as a firewall. Let’s evaluate the proposed solution: creating an Azure Traffic Manager profile.

Azure Traffic Manager: This is a DNS-based traffic routing service that distributes traffic across multiple endpoints (e.g., Azure regions, external endpoints) based on policies like performance, geographic location, or priority. It operates at the application layer (Layer 7) and is primarily used for load balancing and failover scenarios across globally distributed endpoints. It does not control or route network traffic at the subnet or infrastructure level between Azure and an on-premises network through an NVA.
Requirement Analysis: To force all traffic through an NVA acting as a firewall, you need a solution that operates at the network layer (Layer 3/4) and can enforce routing rules. In Azure, this is typically achieved using User-Defined Routes (UDRs) in a route table. UDRs allow you to override Azure’s default system routes and direct traffic from one subnet (e.g., Azure subnets hosting the SAP development landscape) to the NVA’s IP address before it reaches the on-premises network via a VPN or ExpressRoute connection. Traffic Manager does not provide this capability.
Why “No” is correct:
Creating an Azure Traffic Manager profile does not address the requirement of routing all traffic through the NVA. Traffic Manager is designed for endpoint management and load balancing, not for enforcing network-level routing or firewall policies between subnets and on-premises networks.
In the context of the AZ-120 exam (focused on SAP on Azure), ensuring secure and controlled traffic flow between an on-premises SAP production landscape and an Azure-based SAP development landscape often involves network appliances like NVAs. The correct approach would involve configuring UDRs in a route table associated with the Azure subnets to point traffic to the NVA, not using Traffic Manager.
Correct Solution (for context):
While the question only asks about the proposed solution, the correct approach would be to:

Deploy the NVA in a dedicated subnet.
Create a route table with UDRs that set the next hop to the NVA’s IP address for traffic destined to the on-premises network or other Azure subnets.
Associate the route table with the relevant Azure subnets hosting the SAP development landscape.
Thus, the solution “Create an Azure Traffic Manager profile” does not meet the goal, making No the correct answer.
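A minimal sketch of that UDR approach (subnet names, address prefixes, and the NVA address are hypothetical):

```powershell
# Sketch: force traffic bound for the on-premises network through the NVA.
# Names, prefixes, and the NVA IP address are hypothetical.
$rt = New-AzRouteTable -Name "SAP-RT" -ResourceGroupName "SAPDev" -Location "westeurope"

$rt | Add-AzRouteConfig -Name "ToOnPrem" -AddressPrefix "10.0.0.0/16" `
        -NextHopType VirtualAppliance -NextHopIpAddress "10.1.0.4" |
    Set-AzRouteTable

# Associate the route table with the subnet hosting the SAP development landscape.
$vnet = Get-AzVirtualNetwork -Name "SAP-VNet" -ResourceGroupName "SAPDev"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "SAPDevSubnet" `
    -AddressPrefix "10.1.1.0/24" -RouteTable $rt | Set-AzVirtualNetwork
```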

4
Q

You plan to migrate an on-premises SAP environment to Azure.

You need to identify whether any SAP application servers host multiple SAP system identifiers (SIDs).

What should you do?

Run the SAP HANA sizing report.
From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names.
Run the SAP report ABAPMeter.
From the SAP EarlyWatch Alert report, compare the services to the reference objects.

A

Why “From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names” is correct:
In the context of the AZ-120 exam (Planning and Administering Microsoft Azure for SAP Workloads), understanding the existing SAP landscape is critical before migration. The EWA report is a commonly used tool in SAP environments to gather system configuration details. Comparing physical and virtual hostnames in the report is a practical method to detect if a single server (physical hostname) is running multiple SAP instances (each potentially with a unique SID, tied to virtual hostnames). This aligns with SAP best practices for landscape discovery and Azure migration planning.
For example, in the EWA report, the “System Configuration” or “Host Overview” section might show a physical server “SAPHOST01” with virtual hostnames “PRD_VHOST” (SID: PRD) and “DEV_VHOST” (SID: DEV), indicating multiple SIDs on one server.
Thus, the correct answer is: From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names.

5
Q

HOTSPOT

You have an SAP environment on Azure that contains a single-tenant SAP HANA server as instance 03.

You need to monitor the network throughput from an SAP application server to the SAP HANA server.

How should you complete the script? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

Answer Area

$HANA = [Dropdown]
Get-AzNetworkInterface
Get-AzNetworkUsage
Get-AzNetworkWatcher
Get-AzVM
-Name HANA01-NIC -ResourceGroupName Production

$APP = Get-AzVM -Name AppP01 -ResourceGroupName Production

New-AzNetworkWatcherConnectionMonitor -NetworkWatcher (Get-AzNetworkWatcher)
-Name HANA -DestinationAddress (($HANA).IpConfigurations.PrivateIPAddress)
-DestinationPort [Dropdown]
1433
1434
30115
30315
-SourceResourceId $APP.Id

A

Correct Answer: Get-AzNetworkInterface

Reasoning:

The variable $HANA is later used in the script to access the HANA server’s private IP address via $HANA.IpConfigurations.PrivateIPAddress. This indicates that $HANA must represent the network interface of the HANA server (HANA01-NIC), as the IpConfigurations property is part of the PSNetworkInterface object returned by Get-AzNetworkInterface.
Get-AzNetworkInterface retrieves the properties of a network interface (NIC) in Azure, including its IP configuration, which is exactly what’s needed here to specify the destination IP address for the connection monitor.
Other options:
Get-AzNetworkUsage: Lists network resource usage and quota information for a subscription and location; it does not return a NIC or its IP configuration.
Get-AzNetworkWatcher: Retrieves a Network Watcher resource, not a NIC or IP address.
Get-AzVM: Retrieves a virtual machine object, which could work indirectly (via its NIC), but the script explicitly uses -Name HANA01-NIC, matching the naming convention of a NIC, not a VM. Using Get-AzVM wouldn’t directly provide the IpConfigurations property in this context without additional steps.
Thus, $HANA = Get-AzNetworkInterface -Name HANA01-NIC -ResourceGroupName Production is correct.

Second Dropdown: -DestinationPort [Dropdown]
Options:

1433
1434
30115
30315
Correct Answer: 30315

Reasoning:

The script is monitoring network throughput to an SAP HANA server running as instance 03 (single-tenant). In SAP HANA, the port used for communication depends on the instance number and the type of connection. The standard port convention for SAP HANA SQL access (via JDBC/ODBC) is 3<instance>15, where <instance> is the two-digit instance number.
For instance 03:
The port is calculated as 3<03>15 = 30315.
This port is used by the SAP HANA database for client connections from the SAP application server (e.g., ABAP or Java stack) to the HANA database’s SQL interface.
Other options:
1433: Default port for Microsoft SQL Server, not SAP HANA.
1434: SQL Server Browser Service port, unrelated to SAP HANA.
30115: This would correspond to HANA instance 01 (3<01>15), not instance 03.
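Putting the two selections together, the completed script from the exhibit reads:

```powershell
# Completed hotspot script: monitor throughput from the application server
# to the HANA server's SQL port for instance 03 (3<nn>15 = 30315).
$HANA = Get-AzNetworkInterface -Name HANA01-NIC -ResourceGroupName Production
$APP  = Get-AzVM -Name AppP01 -ResourceGroupName Production

New-AzNetworkWatcherConnectionMonitor -NetworkWatcher (Get-AzNetworkWatcher) `
    -Name HANA `
    -SourceResourceId $APP.Id `
    -DestinationAddress (($HANA).IpConfigurations.PrivateIPAddress) `
    -DestinationPort 30315
```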

6
Q

DRAG DROP

Your on-premises network contains an Active Directory domain.

You are deploying a new SAP environment on Azure.

You need to configure SAP Single Sign-On to ensure that users can authenticate to SAP GUI and SAP WebGUI.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area

Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory.

Create an Azure Key Vault service endpoint.

Configure secure network communication (SNC) by using SNCWIZARD.

Change and deploy the logon file.

Change the user profiles for secure network communication (SNC).

A

Answer Area:
1. Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory.
2. Configure secure network communication (SNC) by using SNCWIZARD.
3. Change the user profiles for secure network communication (SNC).
4. Change and deploy the logon file.

Correct Four Actions and Sequence:
The question asks for four actions, and we must exclude one. Since Create an Azure Key Vault service endpoint is not directly required for Kerberos-based SSO with AD (it’s more relevant for certificate-based scenarios or additional security), we’ll exclude it. The remaining four actions form a logical sequence:

Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory
Why: This sets up the domain services in Azure, syncing with on-premises AD to provide Kerberos authentication. It’s the prerequisite for SAP SSO.
Order: First, as the identity foundation must be in place.
Configure secure network communication (SNC) by using SNCWIZARD
Why: SNCWIZARD configures the SAP system to use Kerberos via Azure AD DS for SSO. This integrates the SAP environment with the domain.
Order: Second, after Azure AD DS is available.
Change the user profiles for secure network communication (SNC)
Why: User profiles in SAP must be updated with SNC names (e.g., p:CN=username@domain.com) to map AD identities to SAP users. This ensures SSO works for each user.
Order: Third, after SNC is configured on the server side, user-specific settings are applied.
Change and deploy the logon file
Why: The SAP GUI client needs updated configuration (e.g., enabling SNC in the logon settings) to authenticate users without manual credentials. Deploying this ensures end-users can use SSO.
Order: Fourth, as this is the final client-side step after the SAP system and users are configured.

7
Q

You plan to deploy SAP application servers that run Windows Server 2016.

You need to use PowerShell Desired State Configuration (DSC) to configure the SAP application server once the servers are deployed.

Which Azure virtual machine extension should you install on the servers?

the Azure DSC VM Extension
the Azure virtual machine extension
the Azure Chef extension
the Azure Enhanced Monitoring Extension for SAP

A

Correct Answer: The Azure DSC VM Extension
Why It’s Correct:
The Azure DSC VM Extension enables PowerShell DSC on Azure VMs, allowing you to automate the configuration of SAP application servers running Windows Server 2016. It directly supports the requirement to “use PowerShell Desired State Configuration” by executing DSC scripts post-deployment.
The other options either don’t exist (“Azure virtual machine extension”), use a different configuration tool (“Azure Chef extension”), or serve a monitoring purpose (“Azure Enhanced Monitoring Extension for SAP”), making them irrelevant to the task.
For the AZ-120 exam, the Azure DSC VM Extension is the standard solution for DSC-based configuration in Azure, aligning with SAP deployment automation best practices.
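A minimal sketch of wiring this up with Azure PowerShell (configuration, storage account, and VM names are hypothetical):

```powershell
# Sketch: publish a DSC configuration package to blob storage, then apply it
# to the SAP application server via the Azure DSC VM extension.
Publish-AzVMDscConfiguration -ConfigurationPath ".\SapAppServer.ps1" `
    -ResourceGroupName "SAPProduction" -StorageAccountName "sapdscstore"

Set-AzVMDscExtension -ResourceGroupName "SAPProduction" -VMName "SAPAPP01" `
    -ArchiveBlobName "SapAppServer.ps1.zip" `
    -ArchiveStorageAccountName "sapdscstore" `
    -ConfigurationName "SapAppServer" -Version "2.80"
```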

8
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Contoso, Ltd. is a manufacturing company that has 15,000 employees.

The company uses SAP for sales and manufacturing.

Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle.

Existing Environment

Active Directory

The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com.

SAP Environment

The current SAP environment contains the following components:

  • SAP Solution Manager
  • SAP ERP Central Component (SAP ECC)
  • SAP Supply Chain Management (SAP SCM)
  • SAP application servers that run Windows Server 2008 R2
  • SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12)

Problem Statements

Contoso identifies the following issues in its current environment:

  • The SAP HANA environment lacks adequate resources.
  • The Windows servers are nearing the end of support.
  • The datacenters are at maximum capacity.

Requirements

Planned Changes

Contoso identifies the following planned changes:

  • Deploy Azure Virtual WAN.
  • Migrate the application servers to Windows Server 2016.
  • Deploy ExpressRoute connections to all of the offices and manufacturing facilities.
  • Deploy SAP landscapes to Azure for development, quality assurance, and production.

All resources for the production landscape will be in a resource group named SAP Production.

Business goals

Contoso identifies the following business goals:

  • Minimize costs whenever possible.
  • Migrate SAP to Azure without causing downtime.
  • Ensure that all SAP deployments to Azure are supported by SAP.
  • Ensure that all the production databases can withstand the failure of an Azure region.
  • Ensure that all the production application servers can restore daily backups from the last 21 days.

Technical Requirements

Contoso identifies the following technical requirements:

  • Inspect all web queries.
  • Deploy an SAP HANA cluster to two datacenters.
  • Minimize the bandwidth used for database synchronization.
  • Use Active Directory accounts to administer Azure resources.
  • Ensure that each production application server has four 1-TB data disks.
  • Ensure that an application server can be restored from a backup created during the last five days within 15 minutes.
  • Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP.

It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur.

Proposed Backup Policy

An Azure administrator proposes the backup policy shown in the following exhibit.
Policy name:
✅ SapPolicy

Backup schedule
Frequency: Daily
Time: 3:30 AM
Timezone: (UTC) Coordinated Universal Time
Instant Restore
Retain instant recovery snapshot(s) for 5 Day(s)
Retention range
✅ Retention of daily backup point

At: 3:30 AM
For: 14 Day(s)
✅ Retention of weekly backup point

On: Sunday
At: 3:30 AM
For: 8 Week(s)
✅ Retention of monthly backup point

Week Based - Day Based
On: First Sunday
At: 3:30 AM
For: 12 Month(s)
✅ Retention of yearly backup point

Week Based - Day Based
In: January
On: First Sunday
At: 3:30 AM
For: 7 Year(s)

An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers.
{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[parameters('vmname')]",
  "location": "EastUS",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('vmSize')]"
    },
    "osProfile": {
      "computerName": "[parameters('vmname')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2016-datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('vmname'), '-OS')]",
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "diskSizeGB": 128,
        "managedDisk": {
          "storageAccountType": "[parameters('storageAccountType')]"
        }
      },
      "copy": [
        {
          "name": "dataDisks",
          "count": "[parameters('diskCount')]",
          "input": {
            "caching": "None",
            "diskSizeGB": 1024,
            "lun": "[copyIndex('dataDisks')]"
          }
        }
      ]
    }
  }
}

This question requires that you evaluate the underlined BOLD text to determine if it is correct.

You are planning for the administration of resources in Azure.

To meet the technical requirements, you must first implement Active Directory Federation Services (AD FS).

Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.

No change is needed
Azure AD Connect
Azure AD join
Enterprise State Roaming

A

Correct Answer: Azure AD Connect
Why It’s Correct:
The technical requirement “Use Active Directory accounts to administer Azure resources” is best met by synchronizing on-premises AD with Azure AD using Azure AD Connect, allowing AD users to authenticate and manage Azure resources via RBAC.
AD FS (the underlined text) is a valid but unnecessarily complex alternative, requiring additional infrastructure without clear justification in the case study. Azure AD Connect is the standard, efficient solution for this scenario.
For the AZ-120 exam, Azure AD Connect is the expected answer for hybrid identity in SAP-on-Azure environments unless federation-specific needs are specified, making it the correct choice to replace the underlined text.

9
Q

You are planning the Azure network infrastructure to support the disaster recovery requirements.

What is the minimum number of virtual networks required for the SAP deployment?

1
2
3
4

A

Correct Answer: 2
Why It’s Correct:
The minimum number of VNets required is 2: one for the primary region (hosting the production SAP HANA databases and application servers) and one for the DR region (hosting the replicated HANA databases and standby app servers). This satisfies the technical requirement to “ensure that all the production databases can withstand the failure of an Azure region” via SAP HANA system replication across regions.
A single VNet (1) can’t span regions, while 3 or 4 VNets exceed the minimum needed for production DR, especially given the business goal to “minimize costs whenever possible.”
For the AZ-120 exam, a two-VNet setup is the simplest, SAP-supported architecture for region-level DR, making 2 the correct answer.
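As an illustration of the two-VNet layout (names, regions, and address spaces are hypothetical), the primary and DR virtual networks can be created and peered like this:

```powershell
# Sketch: one VNet per region (primary and DR), then peer them so the
# landscapes can reach each other. All names and prefixes are hypothetical.
$primary = New-AzVirtualNetwork -Name "SAP-Primary" -ResourceGroupName "SAPProduction" `
    -Location "eastus" -AddressPrefix "10.10.0.0/16"
$dr = New-AzVirtualNetwork -Name "SAP-DR" -ResourceGroupName "SAPProduction" `
    -Location "westus" -AddressPrefix "10.20.0.0/16"

Add-AzVirtualNetworkPeering -Name "Primary-to-DR" -VirtualNetwork $primary -RemoteVirtualNetworkId $dr.Id
Add-AzVirtualNetworkPeering -Name "DR-to-Primary" -VirtualNetwork $dr -RemoteVirtualNetworkId $primary.Id
```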

10
Q

Which Azure service should you deploy for the approval process to meet the technical requirements?

Just in time (JIT) VM access
Azure Active Directory (Azure AD) Identity Protection
Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Azure Active Directory (Azure AD) conditional access

A

Correct Answer: Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Why It’s Correct:
Azure AD PIM implements an approval process for privileged actions, ensuring an SAP administrator is notified and must approve before another administrator can make changes to Azure VMs hosting SAP. It directly addresses the technical requirement by controlling RBAC permissions, which govern all VM modifications (not just login access).
JIT VM Access is limited to VM port access, not broader changes, while Identity Protection and Conditional Access lack approval workflows. PIM’s scope and notification capabilities make it the most precise match.
For the AZ-120 exam, PIM is the expected solution for privileged access management in SAP-on-Azure deployments, making it the correct choice.

11
Q

HOTSPOT

You are planning replication of the SAP HANA database for the disaster recovery environment in Azure.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area

Statements Yes No

You must use synchronous replication. ( ) ( )

You must use delta data shipping for operation mode. ( ) ( )

You must configure an Azure Active Directory (Azure AD) application to manage the failover. ( ) ( )

A

Correct Answers:
You must use synchronous replication: No
You must use delta data shipping for operation mode: No
You must configure an Azure Active Directory (Azure AD) application to manage the failover: Yes

Statement 1: “You must use synchronous replication.”
Evaluation:
Synchronous replication ensures zero data loss (RPO = 0) by writing data to both primary and secondary sites before committing transactions. However, it’s latency-sensitive and practical only for short distances (e.g., within a region or nearby availability zones).
The requirement “withstand the failure of an Azure region” implies replication across regions (e.g., hundreds or thousands of kilometers apart), where latency exceeds what synchronous replication can tolerate (typically a round-trip time of 1-2 ms or less).
SAP and Azure recommend asynchronous replication for cross-region DR to balance performance and distance, accepting a small RPO. The case study’s “minimize bandwidth” goal further supports asynchronous replication, as it uses less continuous bandwidth than synchronous.
Synchronous replication isn’t mandatory; asynchronous is a valid, supported option.
Answer: No
Why: Synchronous replication isn’t required for cross-region DR; asynchronous replication meets the region-failure requirement and aligns with bandwidth minimization.
Statement 2: “You must use delta data shipping for operation mode.”
Evaluation:
SAP HANA system replication supports multiple operation modes:
Delta data shipping: Periodically sends changed data blocks (e.g., every 10 minutes by default), reducing bandwidth compared to full sync.
Continuous log replay: Replicates transaction logs in near-real-time, offering lower RPO but higher bandwidth usage.
Full sync: Initial sync of all data, not an ongoing mode.
The requirement “minimize the bandwidth used for database synchronization” favors delta data shipping, as it sends only changes periodically rather than continuous log streams.
However, “must use” implies it’s the only option. While delta data shipping is advantageous here, SAP HANA also supports log replay for DR, and the choice depends on RPO/RTO needs (not specified beyond region failure). Log replay could be used if lower RPO is prioritized over bandwidth.
Given the bandwidth minimization requirement, delta data shipping is likely intended, but it’s not strictly mandatory—other modes are technically viable.
Answer: No (with a caveat)
Why: Delta data shipping aligns with bandwidth minimization, but “must” overstates it; continuous log replay is also a supported option. In the AZ-120 exam context, “Yes” could be intended if delta shipping is treated as the expected best practice, but No is the more precise answer because delta shipping is not the only valid mode.
Statement 3: “You must configure an Azure Active Directory (Azure AD) application to manage the failover.”
Evaluation:
Automated failover tooling has to act on Azure resources (for example, re-pointing network resources or calling Azure Resource Manager APIs), and unattended access to Azure Resource Manager is typically granted through an Azure AD application (service principal). On that basis, the statement is true.
Answer: Yes

12
Q

HOTSPOT

You need to provide the Azure administrator with the values to complete the Azure Resource Manager template.

Which values should you provide for diskCount, StorageAccountType, and domainName? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
diskCount:
0
1
2
4

storageAccountType:
Premium_LRS
Standard_GRS
Standard_LRS

domainName:
ad.contoso.com
ad.contoso.onmicrosoft.com
contoso.com
contoso.onmicrosoft.com

A

Box 1: 4
Scenario: The Azure Resource Manager template will be used to provision the production application servers, and each production application server must have four 1-TB data disks.
Box 2: Standard_LRS
Scenario: Minimize costs whenever possible.
Box 3: contoso.onmicrosoft.com
The network contains an on-premises Active Directory domain named ad.contoso.com, but the domainName value here refers to the Azure AD tenant’s initial domain, which is the default onmicrosoft.com domain, for example contoso.onmicrosoft.com.
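For illustration, the selected values could be passed to the template at deployment time (the template file name, VM name, and credentials are hypothetical):

```powershell
# Sketch: deploy the application-server template with the selected values.
New-AzResourceGroupDeployment -ResourceGroupName "SAPProduction" `
    -TemplateFile ".\sap-appserver.json" `
    -TemplateParameterObject @{
        vmname             = "SAPAPP01"
        vmSize             = "Standard_E16s_v3"
        adminUsername      = "sapadmin"
        adminPassword      = "<secure-password>"   # placeholder only
        diskCount          = 4
        storageAccountType = "Standard_LRS"
    }
```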

13
Q

HOTSPOT

Before putting the SAP environment on Azure into production, which command should you run to ensure that the virtual machine disks meet the business requirements? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Get-AzDisk
Get-AzVM
Get-AzVMImage

-ResourceGroupName "SAPProduction" | Where {$_.Sku.Name -ne "

Premium_LRS
Standard_LRS
Standard_RAGRS
StandardSSD_LRS

A

Correct Answers:
Cmdlet: Get-AzDisk
Storage Type: Premium_LRS

Why They’re Correct:
Get-AzDisk:
This cmdlet retrieves all managed disks in the SAPProduction resource group, allowing a direct check of each disk’s Sku.Name (storage type). It’s the most efficient way to verify disk configurations before production, ensuring compliance with SAP performance needs (implicit in “business requirements”).
Get-AzVM could work but is less focused (VM-level), and Get-AzVMImage is irrelevant (image-level).
AZ-120 tests practical Azure administration for SAP, and Get-AzDisk aligns with disk validation tasks.
Premium_LRS:
Premium SSD (Premium_LRS) is the Azure-recommended storage type for SAP production application servers and HANA databases due to its high IOPS and low latency, critical for performance. The four 1-TB data disks and OS disk should use this type to meet SAP standards, despite the “minimize costs” goal (performance is prioritized in production).
Other options (Standard_LRS, Standard_RAGRS, StandardSSD_LRS) don’t meet SAP production requirements.
The -ne “Premium_LRS” filter ensures all disks are Premium_LRS by flagging exceptions.
Command: Get-AzDisk -ResourceGroupName "SAPProduction" | Where {$_.Sku.Name -ne "Premium_LRS"}
Run this before production to confirm no disks deviate from the required type. An empty result means compliance.
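The same check in fully spelled-out form:

```powershell
# Sketch: list any disk in the resource group whose SKU is not Premium_LRS.
# An empty result means every disk meets the requirement.
Get-AzDisk -ResourceGroupName "SAPProduction" |
    Where-Object { $_.Sku.Name -ne "Premium_LRS" } |
    Select-Object Name, @{ Name = "Sku"; Expression = { $_.Sku.Name } }
```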

14
Q

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Litware, Inc. is an international manufacturing company that has 3,000 employees.

Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain.

Existing Environment

Infrastructure

Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL.

The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure.

Litware already implemented a site-to-site VPN connection between the on-premises network and Azure.

SAP Environment

Litware currently runs the following SAP products:

  • Enhancement Pack 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0)
  • SAP Extended Warehouse Management (SAP EWM)
  • SAP Supply Chain Management (SAP SCM)
  • SAP NetWeaver Process Integration (PI)
  • SAP Business Warehouse (SAP BW)
  • SAP Solution Manager

All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers.

You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers.

Currently, all SAP applications are in the litware.com domain.

Problem Statements

The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform.

Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations.

Requirements

Business Goals

Litware identifies the following business goals:

  • Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server.
  • Move away from the current infrastructure vendor to increase the stability and availability of the SAP services.
  • Use the new Environment, Health and Safety (EH&S) in Recipe Management function.
  • Ensure that any migration activities can be completed within a 48-hour period during a weekend.

Planned Changes

Litware identifies the following planned changes:

  • Migrate SAP to Azure.
  • Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8.

Technical Requirements

Litware identifies the following technical requirements:

  • Implement automated backups.
  • Support load testing during the migration.
  • Identify opportunities to reduce costs during the migration.
  • Continue to use the litware.com domain for all SAP landscapes.
  • Ensure that all SAP applications and databases are highly available.
  • Establish an automated monitoring solution to avoid unplanned outages.
  • Remove all SAP components from the on-premises network once the migration is complete.
  • Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased.
  • Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure.

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area

After the migration, you can use Azure Site Recovery to back up the SAP HANA databases. ( ) ( )
After the migration, you can use SAP HANA Cockpit to back up the SAP ECC databases. ( ) ( )
After the migration, you can use SAP HANA Cockpit to back up SAP BW. ( ) ( )

A

Correct Answers:
After the migration, you can use Azure Site Recovery to back up the SAP HANA databases: No
After the migration, you can use SAP HANA Cockpit to back up the SAP ECC databases: Yes
After the migration, you can use SAP HANA Cockpit to back up SAP BW: No
Why They’re Correct:
ASR for SAP HANA (No):
ASR replicates VMs for DR, not database backups. SAP HANA requires database-specific backup tools (e.g., HANA Cockpit), making ASR unsuitable. AZ-120 expects understanding of backup vs. replication distinctions.
HANA Cockpit for ECC (Yes):
Post-migration, ECC’s database is SAP HANA, and HANA Cockpit is a native, supported tool for HANA backups. This aligns with the “automated backups” requirement and AZ-120’s focus on SAP HANA management.
HANA Cockpit for BW (No):
SAP BW stays on SQL Server, and HANA Cockpit is HANA-specific. This tests knowledge of database platforms in mixed SAP environments, a key AZ-120 concept.

15
Q

You are evaluating which migration method Litware can implement based on the current environment and the business goals.

Which migration method will cause the least amount of downtime?

Use the Database Migration Option (DMO) to migrate to SAP HANA and Azure during the same maintenance window.
Use Near-Zero Downtime (NZDT) to migrate to SAP HANA and Azure during the same maintenance window.
Migrate SAP to Azure, and then migrate SAP ECC to SAP Business Suite on HANA.
Migrate SAP ECC to SAP Business Suite on HANA, and then migrate SAP to Azure.

A

Correct Answer: Use Near-Zero Downtime (NZDT) to migrate to SAP HANA and Azure during the same maintenance window
Why It’s Correct:
Least Downtime:
NZDT minimizes downtime to the smallest window (e.g., 1-4 hours) by pre-replicating data to SAP HANA on Azure while the source system (SQL Server on-premises) remains operational. The final cutover is quick, far less than DMO’s single-step downtime (8-24 hours) or the cumulative downtime of two-step approaches (12-36 hours).
The question prioritizes “least amount of downtime,” and NZDT is explicitly designed for this, outperforming DMO and multi-step options.
Case Study Alignment:
Fits the 48-hour weekend window (a constraint NZDT easily meets).
Supports the goal of moving to SAP HANA and Azure, improving performance and stability.
High availability is enhanced by minimizing disruption during migration.
Comparison:
DMO: Single window but longer downtime (e.g., 8-24 hours), not “least.”
Two-Step Options (3 & 4): Double downtime across separate windows, clearly more than NZDT or DMO.
NZDT’s replication approach (e.g., via SAP LT or similar) ensures the least interruption.

16
Q

You need to ensure that you can receive technical support to meet the technical requirements.

What should you deploy to Azure?

SAP Landscape Management (LaMa)
SAP Gateway
SAP Web Dispatcher
SAPRouter

A

Correct Answer: SAPRouter
Why It’s Correct:
SAPRouter is the tool specifically designed to enable SAP technical support by establishing a secure connection between Litware’s Azure-hosted SAP systems and SAP’s support infrastructure. It meets the technical requirement by allowing SAP to access and troubleshoot the landscapes (ECC on HANA, EWM, SCM, PI, BW) remotely.
SAP LaMa, SAP Gateway, and SAP Web Dispatcher serve operational, integration, or performance purposes, respectively, but none facilitate SAP support access.
For the AZ-120 exam, SAPRouter is the expected solution for ensuring SAP supportability in Azure, aligning with real-world SAP-on-Azure deployments.

17
Q

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No

After the migration, all user authentication to the SAP applications must be handled by Azure Active Directory (Azure AD). ( ) ( )
The migration requires that the on-premises Active Directory domain syncs to Azure Active Directory (Azure AD). ( ) ( )
After the migration, users will be able to authenticate to the SAP applications by using their existing credentials in litware.com. ( ) ( )

A

Final Answers
Statement 1: No
Statement 2: No
Statement 3: Yes

Why These Are Correct
Statement 1 (No): The absolute phrasing (“must”) doesn’t reflect the flexibility of SAP authentication options on Azure. AZ-120 emphasizes understanding integration possibilities, not mandates.
Statement 2 (No): Migration to Azure focuses on infrastructure and application lift-and-shift or re-platforming; AD syncing is an optional identity step, not a migration requirement.
Statement 3 (Yes): This reflects a typical hybrid identity outcome in SAP-on-Azure deployments, where existing credentials are preserved via AD sync or domain extension, a key AZ-120 concept.

18
Q

You need to recommend a solution to reduce the cost of the SAP non-production landscapes after the migration.

What should you include in the recommendation?

Deallocate virtual machines when not in use.
Migrate the SQL Server databases to Azure SQL Data Warehouse.
Configure scaling of Azure App Service.
Deploy non-production landscapes to Azure DevTest Labs.

A

Final Recommendation and Reasoning
Deploy non-production landscapes to Azure DevTest Labs

Reason: While deallocating VMs is a valid cost-saving tactic, Azure DevTest Labs encompasses this capability (via auto-shutdown policies) and adds additional cost-saving and management features tailored to non-production environments. It provides a holistic solution for SAP non-production landscapes by enabling efficient resource provisioning, policy enforcement, and cost tracking, which are critical for managing SAP workloads on Azure. This makes it a more comprehensive and strategic recommendation compared to simply deallocating VMs. The other two options (Azure SQL Data Warehouse and Azure App Service) are not applicable to SAP landscapes, as explained.
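For context, the deallocation tactic that DevTest Labs automates through its auto-shutdown policies looks like this when done manually (resource names are hypothetical):

```powershell
# Sketch: deallocating a non-production VM stops its compute billing;
# DevTest Labs can enforce this on a schedule via auto-shutdown policies.
Stop-AzVM -ResourceGroupName "SAPNonProd" -Name "SAPQA01" -Force
```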

19
Q

Litware is evaluating whether to add high availability after the migration.

What should you recommend to meet the technical requirements?

SAP HANA system replication and Azure Availability Sets
Azure virtual machine auto-restart with SAP HANA service auto-restart.
Azure Site Recovery

A

Recommendation:
SAP HANA System Replication and Azure Availability Sets is the correct answer.

Why It’s Correct:
Meets HA Goals: This option provides true high availability by combining SAP HANA’s data replication (ensuring database consistency and failover capability) with Azure Availability Sets (protecting against VM-level hardware failures). It minimizes downtime and ensures service continuity within a region, which aligns with typical HA requirements for SAP workloads post-migration.
Azure AZ-120 Context: The exam emphasizes understanding SAP HANA high availability options on Azure, including system replication and infrastructure features like Availability Sets or Zones. This combination is a standard recommendation for SAP HANA HA in Azure documentation and aligns with best practices for production landscapes.
Comparison to Alternatives:
VM Auto-Restart: Too basic; it doesn’t provide redundancy or data replication, failing to meet robust HA standards.
Azure Site Recovery: Focuses on DR across regions, not HA within a region, making it less suitable unless the question explicitly mentions regional failover needs (which it doesn’t).

20
Q

You are evaluating the migration plan.

Licensing for which SAP product can be affected by changing the size of the virtual machines?

SAP Solution Manager
PI
SAP SCM
SAP ECC

A

The correct answer is SAP ECC.

Explanation:
When evaluating the migration of SAP workloads to Azure, as part of the AZ-120 exam context, the licensing for SAP ERP Central Component (SAP ECC) can be affected by changing the size of virtual machines. SAP ECC is a core SAP product that often relies on SAP HANA or other database systems for performance optimization in modern deployments. Licensing for SAP ECC is typically tied to the system’s performance capacity, which is measured in SAP Application Performance Standard (SAPS). The SAPS rating is directly influenced by the compute resources (e.g., CPU and memory) allocated to the virtual machines. Changing the size of the virtual machines (e.g., increasing or decreasing vCPUs or RAM) alters the SAPS capacity, which can impact the licensing costs or requirements for SAP ECC.

21
Q

Topic 2, Misc. Questions

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area

You must split data files and database logs between different Azure virtual disks to
increase the database read/write performance. ( ) ( )

Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce
network latency between the servers. ( ) ( )

When you use SAP HANA on Azure (Large Instances), you should set the MTU on the
primary network interface to match the MTU on SAP application servers to reduce
CPU utilization and network latency. ( ) ( )

A

Final Answers
Statements Yes No
You must split data files and database logs between different Azure virtual disks to increase the database read/write performance. Yes
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers. Yes
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency. Yes
Reasoning Summary
Statement 1: Splitting data and logs across disks is a standard practice to boost I/O performance for SAP databases, making “Yes” correct.
Statement 2: Accelerated Networking reduces latency by optimizing network traffic, a clear benefit for SAP server communication, so “Yes” is correct.
Statement 3: Matching MTU settings between HLI and application servers minimizes fragmentation and improves efficiency, supporting “Yes” as the correct answer.
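A quick way to check, and if necessary enable, the Accelerated Networking setting from statement 2 (NIC and resource group names are hypothetical; the VM typically must be deallocated before changing the setting):

```powershell
# Sketch: inspect and enable Accelerated Networking on an SAP server's NIC.
$nic = Get-AzNetworkInterface -Name "SAPAPP01-NIC" -ResourceGroupName "SAPProduction"
$nic.EnableAcceleratedNetworking          # $true if already enabled

$nic.EnableAcceleratedNetworking = $true  # requires a supported VM size
$nic | Set-AzNetworkInterface
```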

22
Q

DRAG DROP

You deploy an SAP environment on Azure.

You need to grant an SAP administrator read-only access to the Azure subscription. The SAP administrator must be prevented from viewing network information.

How should you configure the role-based access control (RBAC) role definition? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Values
———————————
"*/read"
"Microsoft.Authorization/*/read"
"Microsoft.Compute/*/read"
"Microsoft.Insights/*/read"
"Microsoft.Management/managementGroups/read"
"Microsoft.Network/*/read"
"Microsoft.Resources/*/read"
"Microsoft.Storage/*/read"

{
  "Name": "CustomRole001",
  "IsCustom": true,
  "Description": "",
  "Actions": [ _______________________ ],
  "NotActions": [ _______________________ ],
  "DataActions": [],
  "AssignableScopes": ["/subscriptions/0eaef253-d1ee-423e-a95a-418939ee14ae"]
}

A

Final Answer:

```json
{
  "Name": "CustomRole001",
  "IsCustom": true,
  "Description": "",
  "Actions": ["*/read"],
  "NotActions": ["Microsoft.Network/*/read"],
  "DataActions": [],
  "AssignableScopes": ["/subscriptions/0eaef253-d1ee-423e-a95a-418939ee14ae"]
}
```
Why This Is Correct:
Read-Only Access: "*/read" in "Actions" provides read-only permissions across all Azure resources in the subscription, aligning with the SAP administrator's need to monitor the SAP environment without modification rights. This is a common approach in AZ-120 scenarios for granting broad visibility.
Network Exclusion: "Microsoft.Network/*/read" in "NotActions" specifically denies access to network-related information, meeting the requirement to restrict this visibility. "NotActions" entries are subtracted from "Actions", ensuring the exclusion is enforced.
AZ-120 Relevance: The exam tests understanding of RBAC customization for SAP workloads on Azure, including how to scope permissions and use "Actions" and "NotActions" to fine-tune access. This solution reflects best practices for creating least-privilege roles tailored to SAP administration.
Alternative Values: Granular permissions (e.g., "Microsoft.Compute/*/read") could work but would require listing all relevant namespaces, which is impractical and error-prone. "*/read" with a "NotActions" exclusion is more efficient and aligns with Azure's RBAC design.
Correct Selections:
Actions: "*/read"
NotActions: "Microsoft.Network/*/read"
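Once the definition is saved to a file, the role can be registered with a single cmdlet (the file name is hypothetical):

```powershell
# Sketch: create the custom role from the JSON definition above.
New-AzRoleDefinition -InputFile ".\CustomRole001.json"
```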

23
Q

You have an Azure virtual machine that runs SUSE Linux Enterprise Server (SLES). The virtual machine hosts a highly available deployment of SAP HANA.

You need to validate whether Accelerated Networking is operational for the virtual machine.

What should you use?

fio
iometer
netsh
ethtool

A

Why ethtool?
ethtool is a Linux command-line utility specifically designed to display and configure network interface settings. On a Linux-based system like SLES, you can use ethtool to check the status of the NIC and verify whether features like Accelerated Networking are active. For example, commands like ethtool -i <interface> or ethtool -k <interface> can provide details about the driver and offload capabilities, which are indicative of Accelerated Networking being operational. Azure documentation for validating Accelerated Networking on Linux VMs often references ethtool to confirm the use of the hv_netvsc driver and SR-IOV support.

24
Q

You deploy an SAP environment on Azure.

You need to monitor the performance of the SAP NetWeaver environment by using Azure Extension for SAP.

What should you do first?

A. From Azure CLI, install the Linux Diagnostic Extension
B. From the Azure portal, enable the Custom Script Extension
C. From Azure CLI, run the az vm aem set command
D. From the Azure portal, enable the Azure Network Watcher Agent

A

Correct Answer
C. From Azure CLI, run the az vm aem set command

Why Correct?
The Azure Extension for SAP (Azure Enhanced Monitoring Extension) is a prerequisite for monitoring SAP NetWeaver environments on Azure. It integrates with the SAP Host Agent to collect detailed performance metrics (e.g., system resources, SAP-specific counters) and exposes them to Azure Monitor or SAP’s own monitoring tools. The az vm aem set command is the official and recommended method to deploy and configure this extension on an Azure VM, as outlined in Microsoft’s documentation for SAP workloads on Azure. This step must be performed first before any SAP-specific monitoring can take place, making it the correct initial action for the scenario described.
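For reference, the PowerShell counterpart of az vm aem set is Set-AzVMAEMExtension (resource names are hypothetical):

```powershell
# Sketch: enable the Azure Enhanced Monitoring (AEM) extension for SAP on a VM.
Set-AzVMAEMExtension -ResourceGroupName "SAPProduction" -VMName "SAPAPP01"
```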

25
Q

DRAG DROP

Your on-premises network contains an Active Directory domain.

You have an SAP environment on Azure that runs on SUSE Linux Enterprise Server (SLES) servers.

You configure the SLES servers to use domain controllers as their NTP servers and their DNS servers.

You need to join the SLES servers to the Active Directory domain.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area

Add realm details to /etc/krb5.conf and /etc/samba/smb.conf

Shut down the following services: smbd, nmbd, and winbindd

Run net ads join -U administrator

Run net rpc join -U administrator

Install the samba-winbind package
A

Final Answer (Sequence):
1. Install the samba-winbind package
2. Add realm details to /etc/krb5.conf and /etc/samba/smb.conf
3. Run net ads join -U administrator

Why This Is Correct:
Logical Order: The sequence follows standard Linux-to-AD integration: 1. Install tools → 2. Configure settings → 3. Join the domain.
AZ-120 Relevance: The exam tests practical knowledge of integrating SAP environments (often on SLES) with Azure and on-premises AD. This process reflects real-world steps documented in Microsoft's Azure SAP guides and SUSE's AD integration tutorials.
Requirement Fit: The actions ensure the SLES server joins the AD domain securely and functionally, leveraging the preconfigured NTP/DNS setup.
Exclusion of Alternatives: Shutting down services isn't required for an initial join and doesn't fit the three-step constraint. net rpc join is outdated for modern AD, making net ads join the correct choice.
26
Q

DRAG DROP

You have a large and complex SAP environment on Azure. You are designing a training landscape that will be used 10 times a year.

You need to recommend a solution to create the training landscape. The solution must meet the following requirements:
✑ Minimize the effort to build the training landscape.
✑ Minimize costs.

In which order should you recommend the actions be performed for the first training session? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area

Build the training landscape

Create a custom image by using the snapshot

Deliver the training

Take a snapshot of the virtual machine disks

Shut down and delete the virtual machines
A

Correct Order of Actions:
1. Build the training landscape
2. Take a snapshot of the virtual machine disks
3. Create a custom image by using the snapshot
4. Deliver the training
5. Shut down and delete the virtual machines

Why This Order is Correct:
Minimize Effort: Building the landscape once and creating a reusable image eliminates the need to manually rebuild it for each of the 10 sessions. Snapshots and custom images are Azure-native features that streamline this process.
Minimize Costs: Deleting the VMs after the session ensures you're not paying for compute resources when the landscape isn't in use. The custom image incurs minimal storage costs compared to running VMs.
AZ-120 Context: The AZ-120 exam focuses on optimizing SAP workloads on Azure, including efficient resource management. Using snapshots and custom images is a best practice for creating temporary or recurring environments like training landscapes.
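A sketch of the snapshot-to-image step with Azure PowerShell (disk, snapshot, and image names are hypothetical; the VM should be generalized before its image is reused):

```powershell
# Sketch: snapshot a training VM's OS disk, then build a reusable custom image.
$rg   = "TrainingRG"
$disk = Get-AzDisk -ResourceGroupName $rg -DiskName "TrainVM01-OS"

$snapCfg = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
$snap    = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "TrainVM01-OS-snap" -Snapshot $snapCfg

$imgCfg = New-AzImageConfig -Location $disk.Location
$imgCfg = Set-AzImageOsDisk -Image $imgCfg -OsType Windows -OsState Generalized -SnapshotId $snap.Id
New-AzImage -ResourceGroupName $rg -ImageName "TrainingImage" -Image $imgCfg
```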
27
Q

You have an SAP environment on Azure. Your on-premises network uses a 1-Gbps ExpressRoute circuit to connect to Azure. Private peering is enabled on the circuit. The default route (0.0.0.0/0) from the on-premises network is advertised.

You need to resolve the issue without modifying the ExpressRoute circuit. The solution must minimize administrative effort.

What should you do?

Create a user-defined route that redirects traffic to the Blob storage.
Create an application security group.
Change the backup solution to use a third-party software that can write to the Blob storage.
Enable virtual network service endpoints.
A

Correct Answer: Enable virtual network service endpoints

Why Correct?
The most likely issue in this scenario is that the default route (0.0.0.0/0) advertised via ExpressRoute is forcing SAP-related traffic (e.g., backups to Blob storage) to route through the on-premises network, leading to inefficiency or connectivity problems. Service endpoints for Azure Blob Storage resolve this by keeping traffic within Azure's private network, bypassing the ExpressRoute path for storage access. This aligns with Microsoft's best practices for SAP on Azure when ExpressRoute is used with a default route advertisement.
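A sketch of enabling the storage service endpoint on the SAP subnet (VNet, subnet, and prefix are hypothetical):

```powershell
# Sketch: add a Microsoft.Storage service endpoint so Blob traffic stays on
# the Azure backbone despite the advertised 0.0.0.0/0 default route.
$vnet = Get-AzVirtualNetwork -Name "SAP-VNet" -ResourceGroupName "SAPProduction"

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "SAPSubnet" `
    -AddressPrefix "10.1.1.0/24" -ServiceEndpoint "Microsoft.Storage" |
    Set-AzVirtualNetwork
```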
28
HOTSPOT - You are implementing a highly available deployment of SAP HANA on Azure virtual machines. You need to ensure that the deployment meets the following requirements: * Supports host auto-failover * Minimizes cost How should you configure the highly available components of the deployment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area HANA database and log volumes: [Dropdown] - NFSv3 volumes - NFSv4 volumes - Premium SSD disks I/O fencing: [Dropdown] - NFSv3 - NFSv4 - An SBD device
Final Answer: HANA database and log volumes: NFSv3 volumes I/O fencing: An SBD device Why This Is Correct: NFSv3 Volumes: Auto-Failover: Enables shared storage for SAP HANA data and logs, allowing a standby node to take over in a Pacemaker cluster, meeting the HA requirement. Cost: More economical than Premium SSDs with replication, as it avoids duplicating resources. NFSv3 via Azure Files or NetApp Files is a supported, cost-optimized option for SAP HANA. AZ-120 Fit: Reflects Azure’s recommended architecture for SAP HANA HA using shared storage, a key topic in the exam. An SBD Device: Auto-Failover: Ensures cluster integrity during failover by fencing failed nodes, a critical component of HA in SAP HANA deployments. Cost: Minimal cost (small disk or Azure agent), making it the cheapest viable fencing option. AZ-120 Fit: Matches the exam’s focus on Linux HA clustering (e.g., SLES with Pacemaker) for SAP HANA, where SBD is a standard choice.
29
You have an Azure subscription. Your company has an SAP environment that runs on SUSE Linux Enterprise Server (SLES) servers and SAP HANA. The environment has a primary site and a disaster recovery site. Disaster recovery is based on SAP HANA system replication. The SAP ERP environment is 4 TB and has a projected growth of 5% per month. The company has an uptime Service Level Agreement (SLA) of 99.99%, a maximum recovery time objective (RTO) of four hours, and a recovery point objective (RPO) of 10 minutes. You plan to migrate to Azure. You need to design an SAP landscape for the company. Which options meet the company’s requirements? A. Azure virtual machines and SLES for SAP application servers SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for high availability and disaster recovery B. ASCS/ERS and SLES clustering that uses the Pacemaker fence agent SAP application servers deployed to an Azure Availability Zone SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery C. SAP application instances deployed to an Azure Availability Set SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery D. ASCS/ERS and SLES clustering that uses the Azure fence agent SAP application servers deployed to an Azure Availability Set SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery
Why B is Correct: 99.99% SLA: Availability Zones provide higher uptime than Availability Sets, and combining this with ASCS/ERS clustering (via Pacemaker) ensures HA for the application layer. SAP HANA Large Instances with system replication covers the database. RTO of 4 hours: Pacemaker clustering and HANA system replication enable fast failover for both application and database tiers. RPO of 10 minutes: SAP HANA system replication in synchronous mode (for HA within a region) achieves near-zero data loss, and asynchronous replication to the DR site fits the 10-minute RPO. Scalability: HANA Large Instances support a 4 TB database with room for growth. AZ-120 Context: Option B reflects Azure’s recommended architecture for SAP HANA workloads, emphasizing Availability Zones and Pacemaker for SLES-based HA.
30
DRAG DROP You have an Azure Active Directory (Azure AD) tenant and an SAP Cloud Platform Identity Authentication Service tenant. You need to ensure that users can use their Azure AD credentials to authenticate to SAP applications and services that trust the SAP Cloud Platform Identity Authentication Service tenant. In which order should you perform the actions? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order. Actions - Download the single sign-on (SSO) metadata from the Azure AD tenant. - Create and configure an enterprise application in the Azure AD tenant. - Upload the SAP Cloud Platform Identity Authentication Service tenant metadata to Azure AD tenant. - Download the SAP Cloud Platform Identity Authentication Service tenant metadata. - Create and configure a corporate identity provider in the SAP Cloud Platform Identity Authentication Service tenant. Answer Area
Answer Area 1. Create and configure an enterprise application in the Azure AD tenant. 2. Download the single sign-on (SSO) metadata from the Azure AD tenant. 3. Download the SAP Cloud Platform Identity Authentication Service tenant metadata. 4. Upload the SAP Cloud Platform Identity Authentication Service tenant metadata to Azure AD tenant. 5. Create and configure a corporate identity provider in the SAP Cloud Platform Identity Authentication Service tenant. Explanation of the Correct Order Create and configure an enterprise application in the Azure AD tenant. Why: This is the first step because you need to set up an application in Azure AD to represent the SAP Cloud Platform Identity Authentication Service. This involves adding the SAP application from the Azure AD gallery (e.g., "SAP Cloud Platform Identity Authentication" or "SAP Cloud Identity Services") and configuring basic SSO settings. Without this step, there’s no entity in Azure AD to associate with the SAP tenant. Details: In the Azure portal, you navigate to "Enterprise Applications," add a new application, and select the appropriate SAP app from the gallery. You then configure it for SAML-based SSO. Download the single sign-on (SSO) metadata from the Azure AD tenant. Why: After creating the enterprise application, you need to obtain its SAML metadata (e.g., Federation Metadata XML) from Azure AD. This metadata contains critical information like the SSO URL, entity ID, and signing certificate, which the SAP tenant needs to trust Azure AD as the IdP. Details: In the Azure AD portal, under the SSO configuration for the enterprise application, you download the metadata file. This step must follow the creation of the application because the metadata is generated based on the app’s configuration. Download the SAP Cloud Platform Identity Authentication Service tenant metadata. Why: You need the SAP tenant’s metadata to configure Azure AD to recognize it as a trusted service provider. This metadata includes the SAP tenant’s entity ID, reply URL (Assertion Consumer Service URL), and other SAML details. This step can technically occur at any point before uploading it to Azure AD, but it makes sense to do it after preparing the Azure AD side to keep the workflow streamlined. Details: In the SAP Cloud Platform Identity Authentication Service admin console, under "SAML 2.0 Configuration" or similar, you download the tenant’s metadata file. Upload the SAP Cloud Platform Identity Authentication Service tenant metadata to Azure AD tenant. Why: Uploading the SAP metadata to Azure AD completes the trust configuration on the Azure AD side. This step allows Azure AD to know where to send SAML assertions (e.g., the reply URL) and ensures the SAML configuration aligns with the SAP tenant’s expectations. Details: In the Azure AD enterprise application’s SAML configuration, you upload the SAP metadata file, which auto-populates fields like the Identifier (Entity ID) and Reply URL. This step requires the SAP metadata, so it must follow step 3. Create and configure a corporate identity provider in the SAP Cloud Platform Identity Authentication Service tenant. Why: Finally, you configure the SAP tenant to trust Azure AD as a corporate identity provider. This involves uploading the Azure AD metadata (from step 2) to the SAP tenant and setting Azure AD as the authenticating IdP. This is the last step because it relies on having the Azure AD metadata available and the Azure AD side fully prepared. 
Details: In the SAP admin console, under "Identity Providers" > "Corporate Identity Providers," you create a new IdP, upload the Azure AD metadata, and configure settings like the subject name identifier (e.g., email or UPN).
31
You have an SAP production landscape on-premises and an SAP development landscape on Azure. You deploy a network virtual appliance to act as a firewall between the Azure subnet and the on-premises network. You need to ensure that all traffic between the Azure subnet and the on-premises network is routed through the network virtual appliance. Solution: You configure route filters for Microsoft peering. Does this meet the goal? Yes No
The correct answer is No. Explanation: The goal is to ensure that all traffic between the Azure subnet (hosting the SAP development landscape) and the on-premises network (hosting the SAP production landscape) is routed through the network virtual appliance (NVA) acting as a firewall. Let’s break this down: Route Filters for Microsoft Peering: Route filters are used in the context of Azure ExpressRoute to control which routes are advertised or accepted over Microsoft peering. Microsoft peering is typically used for accessing public Azure services (like Azure Blob Storage or Microsoft 365) rather than for routing traffic between an Azure virtual network (VNet) and an on-premises network through a custom firewall like an NVA. Route filters do not provide a mechanism to force traffic through an NVA; they are more about filtering BGP route advertisements. What’s Needed: To route all traffic through the NVA, you need to configure user-defined routes (UDRs) in the Azure subnet’s route table. A UDR can specify the NVA as the next hop for traffic destined to the on-premises network (and vice versa, if applicable). This ensures that the NVA, acting as a firewall, inspects and processes all traffic between the two environments. Why "No" is Correct: Configuring route filters for Microsoft peering does not address the requirement of routing traffic through the NVA. It’s unrelated to the task of directing traffic between an Azure subnet and an on-premises network via a custom appliance. Instead, UDRs combined with proper VNet configuration (e.g., VNet peering or ExpressRoute private peering) would be the appropriate solution.
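For contrast, a UDR-based fix (the approach the answer identifies as appropriate) would look roughly like this sketch, with all names, prefixes, and the NVA address hypothetical:

# Route on-premises-bound traffic from the SAP dev subnet through the NVA
$route = New-AzRouteConfig -Name "to-onprem-via-nva" -AddressPrefix "192.168.0.0/16" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.4"
$rt = New-AzRouteTable -ResourceGroupName "rg-sap" -Name "rt-sap-dev" -Location "westeurope" -Route $route
# The route table is then associated with the SAP dev subnet via Set-AzVirtualNetworkSubnetConfig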
32
You plan to deploy a high availability SAP environment that will use a failover clustering solution. You have an Azure Resource Manager template that you will use for the deployment. You have the following relevant portion of the template.
{
  "apiVersion": "2017-08-01",
  "type": "Microsoft.Network/loadBalancers",
  "name": "load_balancer1",
  "location": "region",
  "sku": {
    "name": "Standard"
  },
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "frontend1",
        "zones": ["1"],
        "properties": {
          "subnet": {
            "id": "[variables('subnetRef')]"
          },
          "privateIPAddress": "10.0.0.6",
          "privateIPAllocationMethod": "Static"
        }
      }
    ]
  }
}
What is created by the template? a zonal frontend IP address for the internal Azure Standard Load Balancer a zone-redundant frontend IP address for the internal Azure Basic Load Balancer a zone-redundant public IP address for the internal load balancer a zone-redundant frontend IP address for the internal Azure Standard Load Balancer
Correct Answer a zonal frontend IP address for the internal Azure Standard Load Balancer Why is it Correct? Internal: The presence of a private IP (10.0.0.6) and subnet reference indicates an internal Load Balancer. Standard: The "sku": "Standard" confirms it’s a Standard Load Balancer. Zonal: The "zones": ["1"] limits the frontend IP to Zone 1, making it zonal rather than zone-redundant (which would span multiple zones or omit the zones property for redundancy across all zones).
33
HOTSPOT You have an on-premises deployment of SAP Business Suite on HANA that includes a CPU-intensive application tier and a 20-TB database tier. You plan to migrate to SAP HANA on Azure. You need to recommend a compute option to host the application and database tiers. The solution must minimize cost. What should you recommend for each tier? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Application: ▼ Ev3-series of Azure virtual machines HANA on Azure (Large Instances) M-series of Azure virtual machines Database: ▼ Ev3-series of Azure virtual machines HANA on Azure (Large Instances) M-series of Azure virtual machines
Final Answer Application Tier: M-series of Azure virtual machines Database Tier: HANA on Azure (Large Instances) Application Tier: M-series of Azure virtual machines Why Correct: CPU-Intensive Workload: The application tier is described as CPU-intensive, requiring strong compute performance. The M-series VMs (Memory-optimized) are designed for high-performance workloads, offering a balance of CPU power and memory, which is ideal for SAP application servers (e.g., SAP NetWeaver components like ABAP or Java stacks). Cost Minimization: Compared to HANA on Azure (Large Instances), M-series VMs are significantly cheaper because they are general-purpose VMs rather than dedicated bare-metal servers. For the application tier, which doesn’t require the specialized hardware of SAP HANA in-memory databases, M-series provides sufficient performance at a lower cost. SAP Certification: M-series VMs are SAP-certified for running SAP application workloads, ensuring compatibility with SAP Business Suite on HANA. Scalability: M-series VMs can be scaled up or out as needed, providing flexibility to adjust resources and control costs. Database Tier: HANA on Azure (Large Instances) Why Correct: 20-TB Database Size: SAP HANA is an in-memory database, and a 20-TB database requires significant memory and compute resources. HANA on Azure (Large Instances) provides dedicated bare-metal servers optimized for SAP HANA, with configurations supporting up to 24 TB of memory (e.g., S960m SKU with 20 TB memory capacity). This meets the 20-TB requirement efficiently. SAP HANA Certification: HANA Large Instances are purpose-built and certified by SAP for running SAP HANA databases, ensuring performance and compliance with SAP’s strict requirements (e.g., Tailored Datacenter Integration, or TDI, standards). Cost Minimization within Constraints: While HANA Large Instances are more expensive than VMs, they are the only viable option for a 20-TB SAP HANA database due to memory and performance needs. Azure offers no VM series with sufficient memory to handle a 20-TB HANA database in a single instance, and splitting across multiple VMs isn’t supported for such large HANA deployments. Thus, this is the most cost-effective option that meets the technical requirements. Performance: Large Instances provide low-latency storage and high-speed networking (e.g., proximity to Azure VMs for the application tier), critical for a 20-TB database.
34
HOTSPOT You have an SAP environment that contains the following components: * Enhancement Package 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0) * Servers that run SUSE Linux Enterprise Server 12 (SLES 12) * Databases on IBM DB2 10.5 * SAP Solution Manager 7.1 You plan to migrate the SAP environment to Azure. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements Yes No The version of SAP Solution Manager supports deployment to Azure. ( ) ( ) The version of SAP ECC supports deployment to Azure. ( ) ( ) The DB2 databases must be migrated to a different database platform before migrating to Azure. ( ) ( )
Answer Area Statements Yes No The version of SAP Solution Manager supports deployment to Azure. (Yes) ( ) The version of SAP ECC supports deployment to Azure. (Yes) ( ) The DB2 databases must be migrated to a different database platform before migrating to Azure. ( ) (No) Explanation: The version of SAP Solution Manager supports deployment to Azure. (Yes) SAP Solution Manager 7.1 is an older version (released around 2011), and while SAP has encouraged upgrades to 7.2 for full Azure support, 7.1 can still technically be deployed to Azure on supported operating systems like SUSE Linux Enterprise Server (SLES) 12. Azure supports running SAP workloads on virtual machines (VMs) with compatible OS and database combinations. Microsoft and SAP documentation indicate that Solution Manager 7.1 is not explicitly unsupported, though 7.2 is preferred for modern features and certifications. For the AZ-120 exam, the focus is on feasibility, and since no explicit restriction exists, "Yes" is the closest correct answer. The version of SAP ECC supports deployment to Azure. (Yes) Enhancement Package 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0 EHP6) is a supported SAP application for migration to Azure. Microsoft and SAP have certified various ECC versions, including EHP6, for Azure deployment on compatible infrastructure (e.g., Azure VMs running SLES 12). The AZ-120 exam tests knowledge of SAP workload compatibility, and ECC 6.0 EHP6 is well within the supported scope when paired with a supported OS and database, making "Yes" the correct choice. The DB2 databases must be migrated to a different database platform before migrating to Azure. (No) IBM DB2 10.5 is a supported database for SAP workloads on Azure. Microsoft Azure offers specific VM types (e.g., M-series) certified for running SAP applications with IBM DB2 LUW (Linux, UNIX, Windows), including version 10.5. The SAP Note 1928533 (Azure-supported databases) and Microsoft documentation confirm that DB2 is a valid option, and no mandatory migration to another database platform (e.g., SAP HANA or SQL Server) is required for Azure deployment. Thus, "No" is correct, as migration to a different database is not a prerequisite.
35
You are planning high availability for an SAP environment on Azure. The SAP environment will use datacenters in two different zones. Testing shows that the latency between the two zones supports synchronous DBMS replication. You need to design a solution to ensure that SAP services are available if an Azure datacenter within a zone fails. The solution must meet the following requirements: * Provide automatic failover * Minimize costs Which high availability configuration meets the requirements? Azure Availability Zones with an active/passive deployment Azure Site Recovery Azure Availability Sets with active/passive clustering Azure Availability Sets with active/active clustering
Correct Answer Azure Availability Zones with an active/passive deployment Why is it Correct? Zone Support: The scenario specifies two zones, and Availability Zones allow deployment across them, ensuring HA if a datacenter in one zone fails. Availability Sets, by contrast, are single-datacenter constructs and don’t meet this need. Automatic Failover: An active/passive setup with clustering (e.g., WSFC or Pacemaker) and an Azure Standard Load Balancer provides automatic failover. The Load Balancer’s health probes detect the active node’s failure and redirect traffic to the passive node in the other zone. Minimize Costs: The passive node remains idle until failover, avoiding the cost of running both nodes actively (unlike active/active). Compared to ASR, it avoids cross-region replication costs, focusing only on zone-level HA. Synchronous Replication: Low latency between zones supports real-time DBMS replication, ensuring data consistency during failover, which is critical for SAP HA (e.g., HANA System Replication in synchronous mode).
36
You plan to deploy an SAP environment on Azure. The SAP environment will have landscapes for production, development, and quality assurance. You need to minimize the costs associated with running the development and quality assurance landscapes on Azure. What should you do? Create Azure Automation runbooks to stop, deallocate, and start Azure virtual machines. Create a scheduled task that runs the stopsap command. Configure scaling for Azure App Service. Configure Azure virtual machine scale sets.
Final Answer Create Azure Automation runbooks to stop, deallocate, and start Azure virtual machines. Explanation of Why This is Correct Cost Minimization Strategy: Development and QA landscapes are typically not required to run 24/7, unlike production environments. Stopping and deallocating Azure virtual machines (VMs) when they are not in use (e.g., outside business hours or during weekends) eliminates compute costs during those periods, as Azure only charges for VMs when they are allocated and running. Azure Automation runbooks allow you to automate the process of stopping, deallocating, and starting VMs on a schedule or based on triggers, ensuring consistent cost savings without manual intervention. Relevance to SAP Environment: SAP landscapes (e.g., SAP HANA databases, SAP NetWeaver application servers) often run on Azure VMs (e.g., M-series for application tiers, or HANA Large Instances for databases). Automation runbooks can target these VMs, making this approach directly applicable to the development and QA landscapes. For SAP HANA on Azure Large Instances, while stopping isn’t always an option due to their dedicated nature, the question implies a focus on VM-based landscapes (common for dev/QA), where this solution fits perfectly. Azure Feature Utilization: Azure Automation is a native service designed for automating repetitive tasks like VM management. It supports PowerShell or Python runbooks, which can use Azure CLI or PowerShell cmdlets (e.g., Stop-AzVM, Start-AzVM) to manage VM states. This aligns with AZ-120 objectives, which include leveraging Azure tools for cost optimization and operational efficiency in SAP deployments. Practical Implementation: You can create a runbook to stop and deallocate VMs at, say, 6 PM and start them at 8 AM, using a schedule linked to the Automation account. This ensures dev/QA environments are available when needed while minimizing costs during idle times.
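A minimal runbook sketch (resource group name is hypothetical; assumes the Automation account has a managed identity with rights to stop the VMs):

Connect-AzAccount -Identity
$vms = Get-AzVM -ResourceGroupName "rg-sap-nonprod"
foreach ($vm in $vms) {
    # Stop-AzVM without -StayProvisioned deallocates the VM, so compute billing stops
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}

A paired runbook that calls Start-AzVM on a morning schedule brings the landscapes back online before business hours.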
37
You have an SAP ERP Central Component (SAP ECC) environment on Azure. You need to add an additional SAP application server to meet the following requirements: * Provide the highest availability. * Provide the fastest speed between the new server and the database. What should you do? Place the new server in a different Azure Availability Zone than the database. Place the new server in the same Azure Availability Set as the database and the other application servers. Place the new server in the same Azure Availability Zone as the database and the other application servers.
The correct answer is: Place the new server in the same Azure Availability Zone as the database and the other application servers. Why This is Correct for AZ-120: SAP ECC Performance: SAP systems are latency-sensitive, especially between application servers and the database. Co-locating them in the same zone ensures optimal performance, a key consideration in AZ-120. Availability: The "highest availability" requirement is satisfied within the scope of a single zone, as the question doesn’t mandate cross-zone redundancy (which would conflict with speed). Practicality: Option 2 (Availability Set) is less realistic for SAP tiered architecture, while Option 1 (different zone) sacrifices speed unnecessarily.
38
HOTSPOT You have an SAP development landscape on Azure. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements Yes No You can use SAP Landscape Management (LaMa) to automate stopping, starting, and deallocating SAP virtual machines. ( ) ( ) You can use SAP Solution Manager to automate stopping, starting, and deallocating SAP virtual machines. ( ) ( ) You can use SAP HANA Cockpit to automate stopping, starting, and deallocating SAP virtual machines. ( ) ( )
Final Answers Statements Yes No You can use SAP Landscape Management (LaMa) to automate stopping, starting, and deallocating SAP virtual machines. (Yes ) You can use SAP Solution Manager to automate stopping, starting, and deallocating SAP virtual machines. (No) You can use SAP HANA Cockpit to automate stopping, starting, and deallocating SAP virtual machines. (No) Statement 1: "You can use SAP Landscape Management (LaMa) to automate stopping, starting, and deallocating SAP virtual machines." Analysis: SAP Landscape Management (LaMa) is a tool designed to manage and automate SAP system operations, including starting, stopping, and relocating SAP systems across physical or virtual environments. On Azure, LaMa integrates with Azure APIs to control VMs, allowing automation of tasks like starting and stopping VMs. It can also deallocate VMs (i.e., stop and release resources to reduce costs), especially in development landscapes where cost optimization is key. This is a common use case for LaMa in Azure SAP deployments, as it supports lifecycle management of SAP systems. Answer: Yes Why Correct: LaMa is explicitly built for automating SAP system operations, including VM start/stop/deallocate actions, and is supported in Azure environments. Statement 2: "You can use SAP Solution Manager to automate stopping, starting, and deallocating SAP virtual machines." Analysis: SAP Solution Manager (SolMan) is a centralized management tool for SAP environments, focusing on monitoring, diagnostics, change management, and system administration. While it provides extensive monitoring and alerting capabilities (e.g., for SAP system health), it is not primarily designed to automate infrastructure-level tasks like starting, stopping, or deallocating VMs. These actions are typically handled by tools like LaMa or Azure-native automation (e.g., Azure Automation, PowerShell). SolMan can integrate with other tools or scripts to trigger such actions indirectly, but it doesn’t natively perform VM lifecycle management in the way LaMa does. Answer: No Why Correct: SolMan’s focus is on SAP application-layer management, not direct VM automation, making this statement false in the context of Azure. Statement 3: "You can use SAP HANA Cockpit to automate stopping, starting, and deallocating SAP virtual machines." Analysis: SAP HANA Cockpit is a web-based administration tool for managing SAP HANA databases. It provides capabilities like monitoring performance, managing users, and configuring database settings. It operates at the database level and does not have functionality to control the underlying infrastructure (e.g., Azure VMs). Tasks like stopping, starting, or deallocating VMs are outside its scope, which is limited to HANA-specific administration. VM management would require a separate tool like LaMa or Azure-specific solutions. Answer: No Why Correct: HANA Cockpit is a database management tool, not an infrastructure automation tool, so it cannot automate VM operations.
39
DRAG DROP You have an Azure subscription. You plan to deploy a SAP NetWeaver landscape that will use SQL Server on Azure virtual machines. The solution must meet the following requirements: ✑ The SAP application and database tiers must reside in the same Azure zone. ✑ The application tier in the Azure virtual machines must belong to the same Availability Set. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select. Actions Answer Area Create a host group Create a proximity placement group Create an Availability Set Deploy the application tier in the Azure virtual machines Deploy SQL Server on Azure virtual machines
Final Answer Create a proximity placement group Create an Availability Set Deploy the application tier in the Azure virtual machines Deploy SQL Server on Azure virtual machines Why This Order is Correct Requirement Fulfillment: Same Azure Zone: The PPG ensures both tiers are in the same zone (e.g., Zone 1), satisfying the first requirement. Availability Set for Application Tier: The Availability Set is created and used only for the application tier VMs, satisfying the second requirement. Logical Sequence: You must create the PPG and Availability Set before deploying VMs, as these are prerequisites for VM placement and configuration. Deploying the application tier before the database tier is a common practice in SAP deployments, but the reverse order (database then application) could also work, making either sequence valid as noted in the question.
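A sketch of the first two actions in PowerShell (resource group, names, and fault/update domain counts are hypothetical):

$ppg = New-AzProximityPlacementGroup -ResourceGroupName "rg-sap" -Name "ppg-sap" -Location "westeurope" -ProximityPlacementGroupType Standard
# Only the application-tier VMs reference the availability set; both tiers reference the PPG at deploy time
New-AzAvailabilitySet -ResourceGroupName "rg-sap" -Name "avset-app" -Location "westeurope" -Sku Aligned -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5 -ProximityPlacementGroupId $ppg.Id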
40
HOTSPOT You have an Azure alert rule and action group as shown in the following exhibit. PS Azure:\> Get-AzMetricAlertRuleV2 | Select WindowSize, EvaluationFrequency, Actions -ExpandProperty Criteria WindowSize : 00:05:00 EvaluationFrequency : 00:01:00 Actions : /subscriptions/6dce0667-3096-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/microsoft.insights/actiongroups/admins Name : Metric MetricName : Percentage CPU MetricNamespace : Microsoft.Compute/virtualMachines OperatorProperty : GreaterThan TimeAggregation : Average Threshold : 85 Dimensions : {} AdditionalProperties : {} PS Azure:\> Get-AzActionGroup | Select -ExcludeProperty ResourceGroupName, Tags, Location GroupShortName : admins Enabled : True EmailReceivers : {admins-emailAction} SmsReceivers : {} WebhookReceivers : {} Id : /subscriptions/6dce0667-3096-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/microsoft.insights/actiongroups/admins Name : admins Type : Microsoft.Insights/actiongroups GroupShortName : restartVM Enabled : True EmailReceivers : {} SmsReceivers : {} WebhookReceivers : {} Id : /subscriptions/6dce0667-3096-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/microsoft.insights/actiongroups/restartVM Name : restartVM Type : Microsoft.Insights/actiongroups Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Answer Area The admins action group will be notified if the average CPU usage rises above 85% for One minute Five minutes One second The [answer choice] when the alert is triggered Admins action group will be emailed RestartVM action group will be emailed Virtual machines will restart
Final Answers: The admins action group will be notified if the average CPU usage rises above 85% for: Five minutes The [answer choice] when the alert is triggered: Admins action group will be emailed Correct Answer: Five minutes Explanation: The WindowSize (00:05:00) defines the time period over which the metric (Percentage CPU) is aggregated and evaluated. In this case, it’s 5 minutes. The TimeAggregation is set to "Average," meaning the alert checks the average CPU usage over that 5-minute window. The EvaluationFrequency (00:01:00, or 1 minute) indicates how often the condition is checked (every minute), but the condition itself is based on the 5-minute average exceeding 85%. Therefore, the alert triggers if the average CPU usage over a 5-minute period exceeds 85%, not 1 minute or 1 second. Five minutes is correct. Correct Answer: Admins action group will be emailed Explanation: The alert rule’s Actions property points to the "admins" action group (/subscriptions/.../actiongroups/admins). The "admins" action group configuration shows it has EmailReceivers ("admins-emailAction"), meaning it sends email notifications when triggered. It has no SMS or webhook receivers. The "restartVM" action group exists but is not linked to this alert rule (no reference in the Actions property). Its webhook receivers are also empty, and there’s no explicit indication of an automated VM restart action (e.g., via Azure Automation or a Logic App). Thus, when the alert triggers (CPU > 85% over 5 minutes), only the "admins" action group is notified, and its action is to send an email. Admins action group will be emailed is correct.
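For reference, an equivalent rule could be created with Az PowerShell roughly as follows ($vmId and $adminsGroupId are assumed to hold the target VM and action group resource IDs); note how -WindowSize and -Frequency map to the exhibit’s 00:05:00 and 00:01:00 values:

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" -TimeAggregation Average -Operator GreaterThan -Threshold 85
# Evaluated every minute over a 5-minute average window, notifying the admins action group
Add-AzMetricAlertRuleV2 -Name "cpu-over-85" -ResourceGroupName "resourcegroup1" -WindowSize 00:05:00 -Frequency 00:01:00 -TargetResourceId $vmId -Condition $criteria -ActionGroupId $adminsGroupId -Severity 3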
41
CORRECT TEXT You are planning an SAP NetWeaver deployment on Azure. The database tier will consist of two Azure virtual machines that have Microsoft SQL Server 2017 installed. Each virtual machine will be deployed to a separate availability zone. You need to perform the following: * Minimize network latency between the virtual machines. * Measure network latency between the virtual machines. What should you do? To answer, select the appropriate options in the answer area. Answer Area To minimize latency: [Dropdown] Add a network adapter to each virtual machine. Disable receive side scaling (RSS). Enable Accelerated Networking. To measure latency, use: [Dropdown] Next hop in Azure Network Watcher Niping Ping The Azure reachability report in Azure Network Watcher
Final Answers Answer Area Selection To minimize latency: Enable Accelerated Networking To measure latency, use: Niping Additional Context for AZ-120 Accelerated Networking: Supported on most Azure VM sizes used for SAP (e.g., E-series, M-series), and it’s a key configuration for HA setups like SQL Server Always On across zones. Niping: Frequently used in SAP planning to validate latency for synchronous replication (e.g., SQL Server mirroring or log shipping), ensuring it meets SAP’s stringent requirements (typically <1-2 ms within a region). The combination ensures both optimization and verification, critical for SAP NetWeaver’s database tier performance on Azure.
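A sketch of enabling the feature on an existing NIC (resource group and NIC name hypothetical; the VM must be deallocated and its size must support Accelerated Networking):

$nic = Get-AzNetworkInterface -ResourceGroupName "rg-sap" -Name "sqlvm1-nic"
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzNetworkInterface

Niping itself runs from the SAP kernel binaries: start a server with niping -s on one VM, then run the client (niping -c -H <server>) from the other to measure round-trip latency.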
42
You have an on-premises SAP NetWeaver development landscape that contains the resources shown in the following table. Name Description SAPDB1 Hyper-V virtual machine that runs Microsoft SQL Server 2017 and contains a 30-TB database SAPSRV1 Hyper-V virtual machine that runs Windows Server You have a 500-Mbps ExpressRoute circuit between the on-premises environment and a virtual network. You plan to migrate the landscape to Azure. What should you include in the solution? Azure Site Recovery Microsoft System Center 2019 - Data Protection Manager (DPM 2019) Azure Data Box Azure Backup Server
Correct Answer Azure Data Box Explanation of Why Azure Data Box Is Correct The goal is to migrate the SAP NetWeaver development landscape, including a 30-TB database, from an on-premises Hyper-V environment to Azure. Here’s why Azure Data Box is the most appropriate solution: Large Data Volume (30 TB): The SAPDB1 VM hosts a 30-TB database. Transferring this amount of data over a 500-Mbps ExpressRoute circuit would take a significant amount of time. For reference: 500 Mbps = 62.5 MB/s (megabytes per second). 30 TB = 30,720 GB. Time to transfer = 30,720 GB ÷ 62.5 MB/s ≈ 491,520 seconds ≈ 136.5 hours (over 5.5 days), assuming no interruptions or bandwidth contention. This duration is impractical for a migration, especially considering potential network latency, throttling, or downtime requirements. Azure Data Box, a physical device shipped to your location, allows you to transfer large datasets offline, significantly reducing migration time. Offline Data Transfer: Azure Data Box is designed for scenarios involving large-scale data migration to Azure. You copy the data (e.g., the 30-TB database and VM files) onto the device, ship it to an Azure data center, and Microsoft uploads the data into your Azure storage account. This is ideal for the initial bulk transfer of the SAP database and VM disks. Support for SAP Migration: For SAP workloads, Microsoft recommends using Azure Data Box for the initial data seeding when dealing with large databases, especially in hybrid scenarios with ExpressRoute. Once the bulk data is in Azure, you can use the ExpressRoute circuit for incremental updates or final synchronization, but the initial 30-TB transfer is best handled offline. Hyper-V Compatibility: The on-premises environment uses Hyper-V VMs. Azure Data Box supports exporting VM disk files (e.g., VHDs) from Hyper-V, which can then be uploaded to Azure Blob Storage and converted into Azure managed disks for VM creation in Azure.
43
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements Yes No SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume. ( ) ( ) SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume. ( ) ( ) To enable Write Accelerator, you must use Azure Premium managed disks. ( ) ( )
Final Answers: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume: No SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume: Yes To enable Write Accelerator, you must use Azure Premium managed disks: Yes Statement 1: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume. Answer: No Explanation: SAP HANA certification for M-Series Azure virtual machines does not require Write Accelerator to be enabled on the /hana/data volume. Write Accelerator is specifically designed to improve write latency for the transaction log, which resides in the /hana/log volume. The /hana/data volume, which stores the actual data files, can be configured with Azure Premium managed disks or Ultra disks without requiring Write Accelerator for certification. This is supported by Microsoft documentation, which specifies that Write Accelerator is mandatory only for the /hana/log volume in production scenarios on M-Series VMs to meet SAP HANA’s low-latency requirements for redo log writes. Statement 2: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume. Answer: Yes Explanation: This statement is true. For SAP HANA certification on M-Series Azure virtual machines in production scenarios, Write Accelerator must be enabled on the /hana/log volume when using Azure Premium managed disks. The /hana/log volume contains the transaction logs (redo logs), and SAP HANA certification KPIs require low write latency for these logs. Write Accelerator, exclusive to M-Series and Mv2-Series VMs with Premium managed disks, ensures that write operations meet these performance criteria. Microsoft documentation explicitly states that enabling Write Accelerator for the /hana/log volume is mandatory for certified production deployments on M-Series VMs. Statement 3: To enable Write Accelerator, you must use Azure Premium managed disks. Answer: Yes Explanation: This statement is correct. Write Accelerator is a feature available only for Azure Premium managed disks (not Standard HDD, Standard SSD, or Ultra disks) and is supported exclusively on M-Series and Mv2-Series virtual machines. It enhances write performance by caching write operations, and Microsoft’s technical documentation confirms that enabling Write Accelerator requires the use of Premium managed disks. This aligns with the AZ-120 exam’s focus on understanding storage configurations and performance optimization for SAP workloads on Azure.
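A sketch of attaching a log disk with Write Accelerator enabled (names and size hypothetical; requires an M-series VM, Premium managed disks, and caching set to None or ReadOnly):

$vm = Get-AzVM -ResourceGroupName "rg-hana" -Name "hanavm1"
# Attach a Premium SSD for /hana/log with the Write Accelerator flag set
$vm = Add-AzVMDataDisk -VM $vm -Name "hanalog-disk" -Lun 2 -Caching None -DiskSizeInGB 512 -CreateOption Empty -StorageAccountType Premium_LRS -WriteAccelerator
Update-AzVM -ResourceGroupName "rg-hana" -VM $vm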
44
HOTSPOT You are planning an SAP NetWeaver deployment on Azure. The database tier will consist of two Azure virtual machines that have Microsoft SQL Server 2017 installed. Each virtual machine will be deployed to a separate availability zone. You need to perform the following: * Minimize network latency between the virtual machines. * Measure network latency between the virtual machines. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area To minimize latency: Enable Accelerated Networking. Add a network adapter to each virtual machine. Disable receive side scaling (RSS). To measure latency, use: Niping Next hop in Azure Network Watcher Ping The Azure reachability report in Azure Network Watcher
Final Answer To minimize latency: Enable Accelerated Networking. To measure latency, use: Niping.
45
DRAG DROP You plan to deploy SAP on Azure. The deployment must meet the following requirements: * Support failover to another Azure region in the event of a regional outage. * Minimize data loss during a failover. * Minimize costs. Which fault tolerance technology should you choose for the SAP Web Dispatcher and the Microsoft SQL Server 2017 servers to meet the requirements? To answer, drag the appropriate technologies to the correct targets. Each technology may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Technologies Availability Sets Azure Site Recovery Native replication Rsync Answer Area SAP Web Dispatcher: [__________] SQL Server 2017 servers: [__________]
Final Answer SAP Web Dispatcher: Azure Site Recovery SQL Server 2017 servers: Native replication Why This Is the Closest to Correct for AZ-120 For the AZ-120 exam (Microsoft Azure for SAP Workloads), the focus is on understanding Azure services and SAP-specific architectures. The combination of Azure Site Recovery for the SAP Web Dispatcher and Native replication (via SQL Server Always On AGs) aligns with Microsoft’s recommended practices for SAP on Azure: ASR is a standard DR solution for VM-based SAP components like the Web Dispatcher, providing regional failover with minimal cost until failover is triggered. Native replication (Always On AGs) is the preferred method for SQL Server in SAP landscapes, as it’s optimized for database consistency, low RPO, and cost efficiency, leveraging existing SQL Server licensing. This solution balances the requirements effectively: Regional failover: Both ASR and Always On AGs support cross-region DR. Minimized data loss: Both provide low RPO. Minimized costs: ASR avoids active VMs in the DR region for the Web Dispatcher, and native replication uses SQL Server’s built-in features without additional Azure service costs beyond VM provisioning.
46
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements The Azure Extension for SAP stores performance data in an Azure Storage account. ( ) Yes ( ) No You can enable the Azure Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet. ( ) Yes ( ) No You can enable the Azure Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet. ( ) Yes ( ) No
Final Answers: The Azure Extension for SAP stores performance data in an Azure Storage account: Yes You can enable the Azure Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet: Yes You can enable the Azure Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet: Yes Statement 1: The Azure Extension for SAP stores performance data in an Azure Storage account. Answer: Yes Explanation: The Azure Extension for SAP, specifically the Azure Enhanced Monitoring (AEM) Extension, collects performance data from the virtual machine and makes it available for SAP systems. This extension builds on the Azure Diagnostics Extension, which stores its collected performance metrics and diagnostic data in an Azure Storage account specified by the user. Microsoft documentation confirms that the performance data collected by the AEM Extension is stored in an Azure Storage account, making this statement true. This aligns with the AZ-120 exam’s focus on monitoring SAP workloads on Azure. Statement 2: You can enable the Azure Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet. Answer: Yes Explanation: The Set-AzVMAEMExtension cmdlet is a PowerShell command used to configure and enable the Azure Enhanced Monitoring (AEM) Extension on a virtual machine for SAP workloads. SUSE Linux Enterprise Server 12 (SLES 12) is a supported operating system for SAP deployments on Azure, as noted in SAP Note 1984787 and Microsoft documentation. The cmdlet supports both Windows and Linux operating systems (via the -OSType parameter, which can be set to "Linux"), and SLES 12 is explicitly compatible for SAP solutions. Running Set-AzVMAEMExtension on a VM running SLES 12 will enable the extension, making this statement true. This is a key concept in the AZ-120 exam for configuring monitoring for SAP systems. Statement 3: You can enable the Azure Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet. Answer: Yes Explanation: Similarly, the Set-AzVMAEMExtension cmdlet is used to enable the Azure Enhanced Monitoring (AEM) Extension on virtual machines running supported operating systems, including Windows Server 2016. Windows Server 2016 is a supported OS for SAP deployments on Azure (per SAP Note 1928533), and the AEM Extension is compatible with it. The cmdlet allows specification of the OS type (e.g., -OSType Windows), and Microsoft documentation confirms that it can be used to configure the extension on Windows-based VMs for SAP monitoring. This makes the statement true and is directly relevant to the AZ-120 exam’s coverage of SAP workload administration on Azure.
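A minimal sketch of statements 2 and 3 in practice (resource group and VM names hypothetical):

# Enable the Azure Extension for SAP (AEM) on a SLES 12 VM
Set-AzVMAEMExtension -ResourceGroupName "rg-sap" -VMName "sapsles12" -OSType Linux
# The same cmdlet works for a Windows Server 2016 VM
Set-AzVMAEMExtension -ResourceGroupName "rg-sap" -VMName "sapwin16" -OSType Windows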
47
You have an SAP production landscape on Azure that contains the virtual machines shown in the following table. Name Subnet Network security group (NSG) Route table VM1 Subnet1 VM1-NSG None VM2 Subnet1 VM2-NSG None VM1 cannot connect to an employee self-service application hosted on VM2. You need to identify what is causing the issue. Which two options in Azure Network Watcher should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. Connection troubleshoot Connection monitor IP flow verify Network Performance Monitor
Final Answer Which two options in Azure Network Watcher should you use? Connection troubleshoot IP flow verify Why These Two Together? Complementary Roles: IP Flow Verify identifies if NSG rules are blocking traffic (e.g., “Rule X in VM2-NSG denies TCP 443”), focusing on security configuration. Connection Troubleshoot tests the end-to-end connection and provides a broader diagnostic (e.g., “Connection failed due to NSG block at VM2”), potentially catching additional issues like misconfigured application ports or VM-level firewalls. Complete Solution: Together, they cover both security (NSG) and connectivity diagnostics, ensuring you can pinpoint whether the issue is a blocked port, a misapplied rule, or another network factor—all common culprits in SAP landscapes on Azure.
48
You deploy an SAP environment on Azure by following the SAP workload on Azure planning and deployment checklist. You need to verify whether Azure Diagnostics is enabled. Which cmdlet should you run? Get-AzureVMAvailableExtension Get-AzVmDiagnosticsExtension Test-AzDeployment Test-VMConfigForSAP
Final Answer Get-AzVmDiagnosticsExtension Why Get-AzVmDiagnosticsExtension Is the Closest to Correct for AZ-120 For the AZ-120 exam, you need to demonstrate familiarity with Azure tools and PowerShell cmdlets specific to managing SAP workloads. The SAP workload on Azure planning and deployment checklist (a key reference for the exam) includes steps for enabling and validating monitoring capabilities, such as Azure Diagnostics. The Get-AzVmDiagnosticsExtension cmdlet is the precise tool to: Confirm the presence and configuration of the diagnostics extension on the VMs hosting the SAP environment. Align with modern Azure management practices (ARM and Az module). Address the question’s intent of verification post-deployment. Usage Example (Conceptual) To verify diagnostics for an SAP VM, you’d run something like:
Get-AzVmDiagnosticsExtension -ResourceGroupName "SAPResourceGroup" -VMName "SAPVM"
This returns details about the diagnostics extension, including whether it’s enabled, allowing you to confirm compliance with the checklist.
49
You have an on-premises SAP environment hosted on VMware vSphere that uses Microsoft SQL Server as the database platform. You plan to migrate the environment to Azure. The database platform will remain the same. You need to gather information to size the target Azure environment for the migration. What should you use? Azure Monitor the SAP HANA sizing report the SAP EarlyWatch Alert report Azure Advisor
Correct Answer: The SAP EarlyWatch Alert report Why This Answer Is Correct for AZ-120: For the AZ-120 exam, sizing an Azure environment for an SAP migration involves understanding the current workload and resource utilization of the on-premises system. The SAP EarlyWatch Alert report is a well-established tool in SAP ecosystems for providing this data. It is generated from the on-premises SAP system (running on VMware vSphere with SQL Server in this case) and offers detailed insights into system performance and resource demands, which can be mapped to Azure VM SKUs (e.g., E-series or M-series for SQL Server-based SAP systems), storage options (e.g., Premium SSD), and network configurations. Why not Azure Monitor? It’s an Azure-native tool and cannot assess on-premises systems pre-migration. Why not SAP HANA sizing report? It’s specific to SAP HANA, not Microsoft SQL Server. Why not Azure Advisor? It’s for post-deployment optimization in Azure, not pre-migration sizing.
50
You have an SAP production landscape in Azure that is hosted on virtual machines that run Windows Server and Red Hat Enterprise Linux. You need to monitor the virtual machines. The solution must ensure that you can collect logs from the virtual machines by using data collection rules (DCRs). What should you install on each virtual machine? the Log Analytics agent the Guest Configuration extension the Azure Monitor agent the Azure Diagnostics extension
Final Answer What should you install on each virtual machine? The Azure Monitor agent Why Azure Monitor Agent is the Best Fit DCR Integration: The question explicitly requires log collection using DCRs, and only the Azure Monitor agent supports this feature. SAP on Azure: In SAP landscapes, monitoring VMs (e.g., for SAP HANA on RHEL or SAP app servers on Windows) often involves collecting OS-level logs (syslog, Event Logs) and application-specific data, which the Azure Monitor agent handles efficiently via DCRs. Future-Proof: The Azure Monitor agent is Microsoft’s recommended solution for new deployments, aligning with the AZ-120 exam’s focus on current Azure technologies for SAP.
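A sketch of installing the agent on one of the Windows VMs (resource group, VM name, and location hypothetical; Linux VMs use the AzureMonitorLinuxAgent extension type from the same publisher), after which a DCR can be associated with the VM:

Set-AzVMExtension -ResourceGroupName "rg-sap" -VMName "sapapp1" -Name "AzureMonitorWindowsAgent" -Publisher "Microsoft.Azure.Monitor" -ExtensionType "AzureMonitorWindowsAgent" -TypeHandlerVersion "1.0" -EnableAutomaticUpgrade $true -Location "westeurope"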
51
DRAG DROP You have an Azure subscription that contains an availability set named AS1 and a virtual machine named VM1. VM1 hosts an SAP NetWeaver application. You need to ensure that AS1 includes VM1. Which four PowerShell cmdlets should you run in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Cmdlets Set-AzVMOSDisk Remove-AzVM New-AzVM New-AzVMConfig Update-AzAvailabilitySet Answer Area
Final Answer Remove-AzVM New-AzVMConfig Set-AzVMOSDisk New-AzVM Step-by-Step Reasoning Remove-AzVM Why it’s correct: Since VM1 already exists but isn’t in AS1, and availability set membership can’t be changed post-creation, you must first delete the VM’s compute resource (the VM object) while preserving its disks and other resources (e.g., NIC, IP). The Remove-AzVM cmdlet deletes the VM object without affecting the underlying OS disk or data disks, which is critical for an SAP NetWeaver VM to retain its application and database state. Why first: This is the prerequisite step to recreate VM1 with the correct availability set assignment. New-AzVMConfig Why it’s correct: After deleting the VM, you need to create a new VM configuration object to define the properties of the recreated VM1, including its assignment to AS1. The New-AzVMConfig cmdlet initializes a VM configuration, allowing you to specify parameters like VM size and availability set (via the -AvailabilitySetId parameter). Why second: This step sets up the configuration before attaching resources like the OS disk and creating the VM. Set-AzVMOSDisk Why it’s correct: To recreate VM1 with its original OS disk (preserving the SAP NetWeaver setup), you attach the existing OS disk to the new VM configuration. The Set-AzVMOSDisk cmdlet links the preserved OS disk (from the deleted VM) to the configuration object created by New-AzVMConfig, ensuring continuity of the SAP environment. Why third: This step comes after defining the VM config but before creating the VM, as the disk must be attached to the configuration object. New-AzVM Why it’s correct: Finally, you create the new VM using the configured object (which now includes the availability set and OS disk). The New-AzVM cmdlet provisions the VM in Azure, placing it in AS1 as specified in the config, completing the task of ensuring AS1 includes VM1. Why fourth: This is the final step to deploy the VM after all configurations are set.
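Putting the four cmdlets together as a hedged sketch (resource names hypothetical; assumes a single NIC and a managed Windows OS disk, with supporting lookups added for completeness):

$old = Get-AzVM -ResourceGroupName "rg-sap" -Name "VM1"
Remove-AzVM -ResourceGroupName "rg-sap" -Name "VM1" -Force    # removes the VM object; disks and NIC remain
$as = Get-AzAvailabilitySet -ResourceGroupName "rg-sap" -Name "AS1"
$vm = New-AzVMConfig -VMName "VM1" -VMSize $old.HardwareProfile.VmSize -AvailabilitySetId $as.Id
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $old.StorageProfile.OsDisk.ManagedDisk.Id -CreateOption Attach -Windows
$vm = Add-AzVMNetworkInterface -VM $vm -Id $old.NetworkProfile.NetworkInterfaces[0].Id
New-AzVM -ResourceGroupName "rg-sap" -Location $old.Location -VM $vm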
52
You have a Recovery Services vault backup policy for SAP HANA on an Azure virtual machine as shown in the following exhibit. HANA Backup Policy FULL BACKUP Backup Frequency Daily at 7:00 PM UTC Retention of daily backup point Retain backup taken every day at 7:00 PM for 7 Day(s) Retention of weekly backup point Retain backup taken every week on Sunday at 7:00 PM for 12 Week(s) Retention of monthly backup point Retain backup taken every month on First Sunday at 7:00 PM for 4 Month(s) Retention of yearly backup point Retain backup taken every year in January on First Sunday at 7:00 PM for 7 Year(s) LOG BACKUP Backup schedule Every 1 hour Retained for 7 days Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Answer Area The backup policy will support a recovery point objective (RPO) of [answer choice] for restoring HANA. The HANA logs can be rolled back for up to [answer choice].
Final Answers: The backup policy will support a recovery point objective (RPO) of 1 hour for restoring HANA. The HANA logs can be rolled back for up to 7 days. Why These Answers Are Correct for AZ-120: RPO (1 hour): The AZ-120 exam emphasizes understanding how backup frequencies impact recovery objectives. For SAP HANA on Azure, log backups every 1 hour minimize data loss to a 1-hour window, which is consistent with Microsoft’s documentation on Azure Backup for SAP HANA. Log Retention (7 days): The exam tests knowledge of retention policies and their implications for point-in-time recovery. The 7-day log retention directly defines how far back logs can be used, a critical aspect of SAP HANA disaster recovery planning on Azure. These answers align with the technical details in the exhibit and the exam’s focus on SAP workload administration.
53
HOTSPOT You are evaluating the proposed backup policy. For each of the following statements, select Yes if the statement is true. otherwise, select No. NOTE: Each correct selection is worth one point. Statements Yes No The backup policy meets the technical requirements. ⭘ ⭘ The backup policy meets the business requirements. ⭘ ⭘ If the backup policy is implemented, a deleted file can be restored to the running virtual machine one year after the file was deleted. ⭘ ⭘
Final Answer (Based on Assumptions) Statements: The backup policy meets the technical requirements: Yes The backup policy meets the business requirements: No If the backup policy is implemented, a deleted file can be restored to the running virtual machine one year after the file was deleted: Yes
54
You have an existing SAP production landscape on Azure. The SAP application virtual machines have static IP addresses. You need to replicate the virtual machines to another Azure region by using Azure Site Recovery. The source and target subnets have different address ranges. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. Stop the virtual machines. Create a backup policy. Create a recovery plan. Modify the networking configuration. Replicate the virtual machines.
Final Answer Create a recovery plan Modify the networking configuration Replicate the virtual machines 1. Create a recovery plan Why it’s correct: A recovery plan in ASR defines how VMs are failed over and brought online in the target region. For an SAP landscape, which typically includes multiple interdependent VMs (e.g., application servers, database servers), a recovery plan ensures the correct startup order (e.g., database before application servers) and coordinated failover. This is critical for maintaining SAP application integrity during a DR scenario. SAP relevance: The AZ-120 exam emphasizes recovery planning for SAP workloads, as it’s a best practice to orchestrate failover of complex, multi-tier applications. Why necessary: Without a recovery plan, replication alone doesn’t guarantee a functional SAP landscape post-failover. 2. Modify the networking configuration Why it’s correct: Since the source and target subnets have different address ranges, the static IP addresses assigned to the VMs in the source region won’t work in the target region’s subnet. ASR allows you to modify the networking configuration (e.g., in the "Compute and Network" settings for each replicated VM) to specify new static IPs that align with the target subnet’s range. Process: Before or during replication setup, you can preconfigure the target network settings (e.g., virtual network, subnet, and IP addresses) to ensure the VMs can communicate post-failover. Why necessary: This addresses the specific challenge of differing subnet ranges, ensuring network connectivity in the DR region. 3. Replicate the virtual machines Why it’s correct: This is the core action of enabling ASR. You must configure replication for the SAP VMs by selecting them in the ASR vault, specifying the source region, target region, and replication settings (e.g., target resource group, storage). This initiates the replication process, copying VM disks to the target region continuously. SAP consideration: Replication ensures the SAP application and its data are available in the DR region with minimal data loss (low RPO), a key requirement for production landscapes. Why necessary: Without replication, there’s no DR capability, making this a mandatory step.
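A sketch of the networking step (the $container object, VM name, target network ID, and addresses are all hypothetical), using the ASR cmdlets to assign a target-subnet static IP to a protected item’s NIC:

$rpi = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container -FriendlyName "sapapp1"
# Point the replicated NIC at the DR subnet with a static IP from the target address range
Set-AzRecoveryServicesAsrReplicationProtectedItem -InputObject $rpi -PrimaryNic $rpi.NicDetailsList[0].NicId -RecoveryNetworkId $targetVnetId -RecoveryNicSubnetName "subnet-dr" -RecoveryNicStaticIPAddress "10.1.0.10"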
55
You deploy an SAP environment on Azure. You need to ensure that incoming requests are distributed evenly across the application servers. What should you use? SAP Web Dispatcher SAP Solution Manager SAP Control Azure Monitor
Correct Answer: SAP Web Dispatcher Why This Answer Is Correct for AZ-120: The AZ-120 exam focuses on planning and administering SAP workloads on Azure, including ensuring high availability and performance optimization. Distributing incoming requests evenly across SAP application servers requires a load balancing solution. The SAP Web Dispatcher is the SAP-provided tool explicitly designed for this purpose in SAP environments, handling HTTP/HTTPS traffic and integrating seamlessly with SAP application servers (e.g., dialog instances in NetWeaver or S/4HANA). Why not SAP Solution Manager? It’s a management tool, not a load balancer. Why not SAP Control? It’s an administrative utility, not a request distribution mechanism. Why not Azure Monitor? It’s for monitoring, not load balancing.
56
HOTSPOT - You have an on-premises SAP environment. Backups are performed by using tape backups. There are 50 TB of backups. A Windows file server has BMP images of checks used by SAP Finance. There are 9 TB of images. You need to recommend a method to migrate the images and the tape backups to Azure. The solution must maintain continuous replication of the images. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Tape backups: AzCopy Azure Data Box Edge Azure Data Box Azure Storage Explorer File server: AzCopy Azure Data Box Edge Azure Data Box Azure Storage Explorer
Tape backups: Azure Data Box The Microsoft Azure Data Box cloud solution lets you send terabytes of data into Azure in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device. Each storage device has a maximum usable storage capacity of 80 TB and is transported to your datacenter through a regional carrier. The device has a rugged casing to protect and secure data during transit. File server: Azure Storage Explorer Azure Storage Explorer is an application that helps you easily access an Azure storage account from any device and platform (Windows, macOS, or Linux). You can easily connect to your subscription and manipulate your tables, blobs, queues, and files.
57
You plan to automate a deployment of SAP NetWeaver on Azure virtual machines by using Azure Resource Manager templates. The database tier will consist of two instances of an Azure Marketplace Microsoft SQL Server 2017 virtual machine image that each have 8 TB of RAM. Which task should you include in the templates used to deploy the SQL Server virtual machines? A. Enable buffer pool extensions in SQL Server. B. Enable read caching on the disks used to store the SQL Server database log files. C. Run the SQL Server setup and specify the /ACTION=REBUILDDATABASE and /SQLCOLLATION switches. D. Run the SQL Server setup and specify the /ACTION=INSTALL and /SQLMAXMEMORY switches.
Final Answer D. Run the SQL Server setup and specify the /ACTION=INSTALL and /SQLMAXMEMORY switches. Why Option D Is Correct /ACTION=INSTALL: When deploying a SQL Server VM from an Azure Marketplace image, the base OS and SQL Server binaries are typically pre-installed. However, to fully configure SQL Server (e.g., setting up the instance, features, and initial settings), you may need to run the SQL Server setup. The /ACTION=INSTALL switch ensures the SQL Server instance is installed and configured as part of the deployment, which is a plausible step if the Marketplace image requires additional setup for SAP. In ARM templates, this can be automated using a CustomScriptExtension to execute a setup command. /SQLMAXMEMORY: This switch sets the maximum memory SQL Server can use, which is critical for SAP workloads on Azure. With large RAM (assumed to be significant, e.g., 128 GB or more in an M-series VM), you must limit SQL Server’s memory to leave sufficient RAM for the OS and other processes (e.g., SAP application servers if co-located). Microsoft recommends setting this to 80-90% of total RAM for SAP deployments. Example: For a VM with 128 GB RAM, you might set /SQLMAXMEMORY=115200 (115 GB in MB) to optimize performance and stability. This configuration is a best practice for SAP on Azure and can be scripted in the ARM template.
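To make the memory-cap step concrete, here is a sketch of wiring it into the deployment via the Custom Script Extension, which ARM templates can declare inline. The resource group, VM name, script name, and script URI are hypothetical, and the 115,200-MB cap simply mirrors the 115-GB example above; sp_configure 'max server memory' is the post-install equivalent of the /SQLMAXMEMORY setup switch.

```powershell
# The hypothetical helper script "configure-sql.ps1" would run something like:
#   Invoke-Sqlcmd -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
#                         EXEC sp_configure 'max server memory', 115200; RECONFIGURE;"
Set-AzVMCustomScriptExtension -ResourceGroupName "sap-rg" -VMName "sapsql01" `
    -Location "westeurope" -Name "ConfigureSql" `
    -FileUri "https://example.blob.core.windows.net/scripts/configure-sql.ps1" `
    -Run "configure-sql.ps1"
```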
58
HOTSPOT You plan to deploy two Azure virtual machines that will host an SAP HANA database for an SAP landscape. The virtual machines will be deployed to the same availability set. You need to meet the following requirements: * Ensure that the virtual machines support disk snapshots. * Ensure that the virtual machine disks provide submillisecond latency for writes. * Ensure that each virtual machine can be allocated disks from a different storage cluster. Which type of operating system disk and HANA database disk should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area Operating system disk: Azure NetApp Files Premium storage Ultra disk HANA database disk: Azure NetApp Files Premium storage Ultra disk
Final Answer Operating system disk: Premium storage HANA database disk: Ultra disk Why These Selections Are Correct? Premium storage (OS Disk): Provides reliable performance (IOPS, throughput) for OS operations, supports snapshots, and ensures disks are on separate storage clusters via availability set fault domains. Cost-effective and SAP-supported for HANA VM OS disks. Ultra disk (HANA Database Disk): Delivers submillisecond write latency, critical for HANA’s /hana/log and /hana/data, supports snapshots, and ensures separate storage clusters for HA. Aligns with SAP HANA’s stringent storage requirements for production workloads. SAP HANA Fit: Ensures the HANA database (e.g., 400 GB, as in prior questions) performs optimally with low-latency writes and reliable backups, critical for production.
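Illustrative sketch of provisioning an Ultra disk for a HANA log or data volume: unlike Premium storage, Ultra disks let you set IOPS and throughput independently of capacity. The names, the zone, and the performance figures below are assumptions for illustration, not sizing guidance.

```powershell
# Ultra disks are zonal in most regions, so a zone is specified here.
$cfg = New-AzDiskConfig -Location "westeurope" -CreateOption Empty -Zone 1 `
          -DiskSizeGB 512 -SkuName "UltraSSD_LRS" `
          -DiskIOPSReadWrite 20000 -DiskMBpsReadWrite 500
New-AzDisk -ResourceGroupName "sap-rg" -DiskName "hana-log-ultra" -Disk $cfg
```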
59
Your on-premises network contains the following: * A 1-Gbps internet connection * An SAP HANA 1.0 instance that has a 4-TB database * An SAP landscape that uses SUSE Linux Enterprise (SLES) 12 You have an Azure subscription that contains a virtual machine. The virtual machine is of the M64s SKU and runs SLES 15 and HANA 2.0. You need to migrate the database to the virtual machine and upgrade the database to HANA 2.0. The solution must meet the following requirements: * The migration must be performed during a weekend. * The database can be offline during the migration. Which migration method should you use? Azure Data Box HANA database backup and log shipping Azure Migrate HANA database export and import
Final Answer Which migration method should you use? Azure Data Box Why “Azure Data Box” is Correct? Meets Weekend Requirement: Local copy to Data Box (~12–15 hours) and Azure restore/upgrade (~10–15 hours) total ~22–30 hours, fitting within a 48-hour weekend. Shipping is pre-planned (before the weekend), ensuring data is in Azure by migration start. Supports Offline Migration: Database is offline during export and restore, aligning with the requirement. Handles 4-TB Size: Data Box supports large datasets (up to 80 TB), ideal for 4 TB, bypassing the slow 1 Gbps internet (~22–24 hours). Enables HANA 2.0 Upgrade: Restore/import to HANA 2.0 on the M64s VM includes the upgrade, using standard HANA tools. SAP Fit: Recommended by Microsoft for large HANA migrations (SAP Note 2526052), ensuring reliability and speed for production landscapes.
60
You plan to migrate an SAP environment to Azure. You need to design an Azure network infrastructure to meet the following requirements: * Prevent end users from accessing the database servers. * Isolate the application servers from the database servers. * Ensure that end users can access the SAP systems over the internet. * Minimize the costs associated with the communications between the application servers and database servers. Which two actions should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. Configure Azure Traffic Manager to route incoming connections. Configure an internal Azure Standard Load Balancer for incoming connections. Segregate the SAP application servers and database servers by using different Azure virtual networks. In the same Azure virtual network, segregate the SAP application servers and database servers by using different subnets and network security groups. Create a site-to-site VPN between the on-premises network and Azure.
Final Answer In the same Azure virtual network, segregate the SAP application servers and database servers by using different subnets and network security groups. Configure an internal Azure Standard Load Balancer for incoming connections. Why This Selection Is the Closest to Correct for AZ-120 For the AZ-120 exam, the focus is on designing Azure infrastructure for SAP workloads with security, performance, and cost efficiency in mind. The chosen actions: Single VNet with subnets and NSGs: Aligns with Microsoft’s recommended SAP network architecture (e.g., hub-and-spoke or single-VNet designs), ensuring isolation and cost efficiency. Internal load balancer: Supports SAP application tier accessibility and high availability while keeping the database tier secure, a common pattern in SAP on Azure deployments. These actions collectively meet all four requirements: Prevent database access: NSGs block end-user traffic to the database subnet. Isolate tiers: Subnets and NSGs enforce separation. Internet access: Internal load balancer supports application-tier connectivity (assuming an external endpoint exists). Minimize costs: Same-VNet design avoids inter-VNet charges.
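Illustrative sketch of the NSG half of the answer: a database-subnet NSG that admits only application-subnet traffic and denies everything else inbound. The address ranges, the port (1433 for SQL Server; HANA would use its instance-specific ports), and the resource names are hypothetical.

```powershell
# Allow only the application subnet to reach the database subnet.
$allowApp = New-AzNetworkSecurityRuleConfig -Name "allow-app-to-db" `
    -Access Allow -Direction Inbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix "10.0.2.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.3.0/24" -DestinationPortRange "1433"

# Explicitly deny all other inbound traffic.
$denyAll = New-AzNetworkSecurityRuleConfig -Name "deny-all-inbound" `
    -Access Deny -Direction Inbound -Priority 4096 -Protocol "*" `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*"

New-AzNetworkSecurityGroup -Name "db-nsg" -ResourceGroupName "sap-rg" `
    -Location "westeurope" -SecurityRules $allowApp, $denyAll
```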
61
You have an SAP landscape on Azure that uses SAP HANA. You perform a daily backup of HANA to Azure Blob Storage and retain copies of each backup for one year. You need to reduce the backup storage costs. What should you implement? a stored access policy a Recovery Services Vault backup policy a storage access tier
Correct Answer: A storage access tier Why This Answer Is Correct for AZ-120: Cost Reduction Focus: The AZ-120 exam tests knowledge of optimizing Azure resources for SAP workloads, including cost management. The scenario involves daily SAP HANA backups stored in Azure Blob Storage for one year. By default, Blob Storage uses the Hot tier, which is expensive for long-term retention. Transitioning backups to the Cool tier (for infrequent access) or Archive tier (for rare access, e.g., older backups) directly lowers storage costs, aligning with the goal of reducing backup storage expenses. Practicality: Backups older than a few days or weeks are rarely accessed, making the Cool tier suitable (e.g., $0.015 vs. $0.0228 per GB/month in Hot tier, US East pricing as of 2025). For backups retained longer (e.g., months 6–12), the Archive tier ($0.002 per GB/month) could further cut costs, though retrieval costs apply if restoration is needed. Azure Blob lifecycle management policies can automate tier transitions (e.g., move to Cool after 30 days, Archive after 90 days), a concept covered in AZ-120. Why not Stored Access Policy? It’s about access control, not storage cost, and doesn’t address the requirement. Why not Recovery Services Vault? RSV is a backup management solution, not a cost-saving feature for existing Blob Storage backups. It’s more relevant for setting up backups initially, not optimizing an existing setup.
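Illustrative sketch of the lifecycle policy mentioned above (Cool after 30 days, Archive after 90), using the Az.Storage cmdlets; the storage account name, blob prefix, and day thresholds are hypothetical.

```powershell
# Build the tiering actions: Cool after 30 days, Archive after 90.
$action = Add-AzStorageAccountManagementPolicyAction `
    -BaseBlobAction TierToCool -DaysAfterModificationGreaterThan 30
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action `
    -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 90

# Scope the rule to the backup blobs and apply it to the account.
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "hana-backups/"
$rule   = New-AzStorageAccountManagementPolicyRule -Name "tier-hana-backups" `
    -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "sap-rg" `
    -StorageAccountName "saphanabackups" -Rule $rule
```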
62
DRAG DROP You deploy an SAP environment on Azure. You need to configure SAP NetWeaver to authenticate by using Azure Active Directory (Azure AD). Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions | Answer Area Configure SAML single sign-on (SSO). Add SAP NetWeaver from the Azure AD application gallery. Add SAP Cloud Platform Identity from the Azure AD application gallery. Create and upload the service provider metadata file to Azure AD. Upload the FederationMetadata.xml file to the SAP NetWeaver Trusted Providers. Implement Active Directory Federation Services (AD FS).
Final Answer (Sequence) Add SAP NetWeaver from the Azure AD application gallery. Configure SAML single sign-on (SSO). Create and upload the service provider metadata file to Azure AD. Upload the FederationMetadata.xml file to the SAP NetWeaver Trusted Providers. Correct Sequence Rationale Add SAP NetWeaver from the Azure AD application gallery: Register the app in Azure AD to start the process. Configure SAML single sign-on (SSO): Set up SAML in Azure AD to define the IdP behavior and generate metadata. Create and upload the service provider metadata file to Azure AD: Provide NetWeaver’s metadata to Azure AD to establish trust from the IdP side. Upload the FederationMetadata.xml file to the SAP NetWeaver Trusted Providers: Configure NetWeaver to trust Azure AD, completing the SAML SSO setup.
63
You plan to migrate an on-premises SAP development system to Azure. Before the migration, you need to check the usage of the source system hardware, such as CPU, memory, network, etc. Which transaction should you run from SAP GUI? SM51 DB01 DB12 ST06
Correct Answer: ST06 Why This Answer Is Correct for AZ-120: Requirement Fit: The AZ-120 exam emphasizes pre-migration planning for SAP workloads on Azure, including accurately sizing the target environment (e.g., VM type, storage). ST06 provides detailed OS-level metrics (CPU, memory, network) from the source SAP system, enabling you to map these to appropriate Azure resources (e.g., E-series or M-series VMs). SAP Standard Tool: ST06 is widely used by SAP administrators to monitor hardware performance, as documented in SAP Help and performance tuning guides. It’s a practical choice for assessing the source system’s workload before migration. Why Not the Others? SM51: Shows server status, not resource usage details. DB01: Focuses on database locks, not hardware metrics. DB12: Deals with backups, not system performance.
64
You are deploying SAP Fiori to an SAP environment on Azure. You are configuring SAML 2.0 for an SAP Fiori instance named FPP that uses client 100 to authenticate to an Azure Active Directory (Azure AD) tenant. Which provider name should you use to ensure that the Azure AD tenant recognizes the SAP Fiori instance? ldap://FPP https://FPP ldap://FPP-100 https://FPP100
Final Answer Which provider name should you use? https://FPP100 What it is: An HTTPS URL combining the system name (FPP) and client (100). Why Correct: SAML Compatibility: SAML entity IDs are typically URLs (often HTTPS), and Azure AD expects this format for service providers. SAP Fiori Convention: For SAP Fiori, the entity ID often includes the system name and client (e.g., https://<system name><client>) to ensure uniqueness. Here, https://FPP100 identifies the Fiori instance on client 100 of the FPP system. Azure AD Recognition: When configuring SAML SSO in Azure AD (e.g., via the enterprise application for SAP Fiori), the entity ID must match the identifier defined in Fiori’s SAML 2.0 settings. https://FPP100 aligns with this requirement. AZ-120 Context: The exam emphasizes precise configuration for SAP-Azure AD integration, and this format is consistent with best practices.
65
HOTSPOT Your on-premises network contains SAP and non-SAP applications. ABAP-based SAP systems are integrated with LDAP and use user name/password-based authentication for logon. You plan to migrate the SAP applications to Azure. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements | Yes | No Azure Active Directory (Azure AD) pass-through authentication enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password. ( ) ( ) Azure Active Directory (Azure AD) password hash synchronization enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises user name/password. ( ) ( ) Active Directory Federation Services (AD FS) supports authentication between on-premises Active Directory and Azure systems that use different domains. ( ) ( )
Correct Answer: Statement 1: Azure Active Directory (Azure AD) pass-through authentication enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises username/password. No Statement 2: Azure Active Directory (Azure AD) password hash synchronization enables users to connect to the ABAP-based SAP systems on Azure by using their on-premises username/password. No Statement 3: Active Directory Federation Services (AD FS) supports authentication between on-premises Active Directory and Azure systems that use different domains. Yes Why Correct? Statement 1 (No): Azure AD pass-through authentication is designed for Azure AD-integrated applications, not ABAP-based SAP systems, which rely on LDAP, Kerberos, or SAML. SAP ABAP systems do not natively support Azure AD’s authentication protocols without significant reconfiguration (e.g., SAML setup), making pass-through authentication unsuitable for direct SAP logon. In an Azure migration, SAP systems typically continue to use on-premises AD or a synchronized AD instance in Azure, not Azure AD’s pass-through authentication. Statement 2 (No): Password hash synchronization syncs password hashes to Azure AD for cloud-based authentication, but ABAP-based SAP systems do not integrate with Azure AD natively. They require LDAP, Kerberos, or SAML, none of which leverage password hash synchronization directly. Like pass-through authentication, this method is irrelevant to SAP ABAP’s authentication stack in a standard migration scenario. Statement 3 (Yes): AD FS supports federated authentication across different domains, enabling on-premises AD to authenticate users for Azure systems (including SAP systems configured for SAML). This is a standard approach in hybrid identity scenarios and aligns with Azure’s identity management capabilities for SAP workloads.
66
You have an on-premises SAP landscape and a hybrid Azure AD tenant. You plan to enable Azure AD authentication for SAP NetWeaver. What should you configure first in Azure AD? a conditional access policy an Azure AD Application Proxy a service principal a user flow
Correct Answer: A service principal Why This Answer Is Correct for AZ-120: Initial Step: To enable Azure AD authentication for SAP NetWeaver, the first step in Azure AD is to register SAP NetWeaver as an enterprise application, which creates a service principal. This establishes the identity and trust relationship between Azure AD and SAP NetWeaver, enabling SAML-based SSO (the most common method for SAP NetWeaver authentication with Azure AD). You configure the SAML metadata (e.g., entity ID, reply URL) and certificates as part of this process. Hybrid Context: In a hybrid Azure AD tenant (with Azure AD Connect syncing on-premises AD), the service principal allows Azure AD to authenticate users for the on-premises SAP system via SSO, leveraging the synced identities. AZ-120 Relevance: The exam tests knowledge of integrating SAP workloads with Azure services, including identity management. Microsoft and SAP documentation (e.g., SAP Note 2629510) outline registering SAP NetWeaver in Azure AD as the starting point for SSO, followed by configuring SAML on the SAP side (e.g., via transaction SAML2).
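In practice the gallery app is added in the portal (Enterprise applications > New application > SAP NetWeaver), which creates the service principal for you. For an application that is already registered, the equivalent one-liner is sketched below; the application ID is a placeholder.

```powershell
# Creates the service principal (the "enterprise application" object)
# for an existing app registration; the GUID is a placeholder.
New-AzADServicePrincipal -ApplicationId "00000000-0000-0000-0000-000000000000"
```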
67
You plan to migrate an SAP HANA instance to Azure. You need to gather CPU metrics from the last 24 hours from the instance. Solution: You use Monitoring from the SAP HANA Cockpit. Does this meet the goal? Yes No
Final Answer Does this meet the goal? Yes Why “Yes” is Correct? Capability: SAP HANA Cockpit’s Monitoring feature can display CPU usage metrics for the HANA instance, including historical data for the last 24 hours, meeting the technical requirement. Accessibility: Since the instance is still on-premises (pre-migration), HANA Cockpit is already available as a native SAP tool, requiring no additional Azure setup at this stage. AZ-120 Alignment: The exam often involves using SAP-native tools (like HANA Cockpit) to collect performance data for migration planning, rather than Azure tools (e.g., Azure Monitor), which apply post-migration.
68
HOTSPOT You have an on-premises SAP NetWeaver production landscape and an Azure subscription that contains the resources shown in the following table. Name Description Location SAPDB1 Solaris SPARC server that runs an Oracle database of 10 TB On-premises Vnet1 Azure virtual network Azure SAPSQLVM1 Azure virtual machine that runs Microsoft SQL Server 2017 and connects to VNet1 Azure SAPEXP1 Intel server that runs Windows Server On-premises SAPEXP2 Intel server that runs Windows Server On-premises SAPEXP3 Intel server that runs Windows Server On-premises SAPEXP4 Intel server that runs Windows Server On-premises SAPIMP1 Azure virtual machine that runs Windows Server and connects to VNet1 Azure You have a 10-Gbps ExpressRoute circuit between the on-premises environment and VNet1. You plan to migrate the landscape to Azure. As part of the solution, you need to migrate the on-premises Oracle database to SAPSQLVM1. The solution must minimize how long it will take to complete the data migration. What should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area To export the Oracle database: RMAN R3load Azure Import/Export To transfer the database files to Azure before the import: R3load Robocopy R3ta Azure Import/Export
Final Answer To export the Oracle database: RMAN To transfer the database files to Azure before the import: Azure Import/Export To export the Oracle database: RMAN Why it’s correct: RMAN (Recovery Manager) is Oracle’s native backup and recovery tool, widely used for exporting Oracle databases during SAP migrations. It can create a consistent backup of the database, which is a standard step in heterogeneous SAP migrations (e.g., Oracle to SQL Server). SAP Migration Context: For SAP systems, RMAN is often used with the SAP Database Migration Option (DMO) or classical migration methods to export the database. The backup files can then be converted (e.g., using SAP Software Provisioning Manager or third-party tools) for import into SQL Server. Time Efficiency: RMAN optimizes the export process by leveraging Oracle’s internal mechanisms, ensuring a reliable and fast backup of the 10 TB database on the Solaris server. To transfer the database files to Azure before the import: Azure Import/Export Why it’s correct: Azure Import/Export allows you to transfer large datasets (like a 10 TB database backup) by shipping physical disks to an Azure data center. The data is then uploaded to Azure Blob Storage, from where it can be accessed by SAPSQLVM1 for import into SQL Server. Time Minimization: Transferring 10 TB over a 10-Gbps ExpressRoute circuit would take approximately: 10 Gbps = 1.25 GB/s (gigabytes per second). 10 TB = 10,240 GB. Time = 10,240 GB ÷ 1.25 GB/s ≈ 8,192 seconds ≈ 2.28 hours (assuming full bandwidth utilization). In practice, network latency, contention, and overhead could extend this to 3-5 hours or more, plus additional time for synchronization and validation. With Azure Import/Export, shipping disks (e.g., overnight) and uploading to Azure (typically 1-2 days total) is faster for such a large dataset, especially considering setup and import time constraints.
69
DRAG DROP You have an Azure subscription that contains a highly available SAP NetWeaver deployment. The deployment contains four virtual machines. You need to monitor the NetWeaver deployment by using Azure Monitor for SAP Solutions. During the implementation of Azure Monitor for SAP Solutions, downtime of the deployment must be minimized. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions From the SAP Management Console, restart the sapstartsrv service. From the SAP Management Console, restart the SAP system. From the Azure portal, deploy the Azure Monitor for SAP Solutions managed resource group and configure the NetWeaver provider. On each virtual machine, install node_exporter. From the SAP GUI, connect to the SAP system and modify the instance profiles. Answer Area 1 2 3
Answer Area: From the Azure portal, deploy the Azure Monitor for SAP Solutions managed resource group and configure the NetWeaver provider. On each virtual machine, install node_exporter. From the SAP Management Console, restart the sapstartsrv service. Correct Sequence: From the Azure portal, deploy the Azure Monitor for SAP Solutions managed resource group and configure the NetWeaver provider. Why First: This is the initial step to set up AMS in Azure. It creates the monitoring infrastructure (resource group, AMS resource) and configures the NetWeaver provider to connect to the SAP system. No downtime is incurred, as it’s an Azure-side action. You specify the SAP system details (e.g., hostname, instance number) during configuration. Exam Context: AZ-120 emphasizes deploying Azure monitoring tools as a foundational step. On each virtual machine, install node_exporter. Why Second: AMS requires OS-level metrics (CPU, memory, network) from the VMs hosting SAP NetWeaver. The node_exporter (a Prometheus tool) is commonly installed on each VM to expose these metrics to AMS. Installation itself doesn’t cause downtime, though a service restart might be needed later to activate it. This prepares the VMs for monitoring. Exam Context: AZ-120 covers integrating Azure monitoring with SAP VMs, often via exporters. From the SAP Management Console, restart the sapstartsrv service. Why Third: After installing node_exporter, restarting the sapstartsrv service on each VM ensures the SAP instance recognizes the new monitoring setup (e.g., exposing metrics). This restart is per-instance, not system-wide, minimizing downtime compared to a full SAP system restart. AMS can then collect data seamlessly. Exam Context: Restarting services to apply monitoring changes is a practical step in SAP administration.
70
You have an Azure subscription that contains an SAP HANA on Azure (Large Instances) deployment. The deployment is forecasted to require an additional 256 GB of storage. What is the minimum amount of additional storage you can allocate? 256 GB 512 GB 1 TB 2 TB
Final Answer What is the minimum amount of additional storage you can allocate? 1 TB 1 TB Why Correct: Minimum Increment: Microsoft documentation and SAP HANA on Azure Large Instances guidelines specify that additional storage is allocated in 1-TB increments. This is the smallest amount you can add to an existing HLI deployment. Meets Requirement: 1 TB exceeds the forecasted need of 256 GB, ensuring the deployment has sufficient capacity while adhering to HLI’s provisioning rules. Process: You request the storage increase via a support ticket, and Microsoft adds 1 TB (or more, in 1-TB multiples) to the HLI unit’s storage pool (e.g., /hana/data or /hana/log volumes).
71
You have an on-premises SAP NetWeaver development landscape that contains the resources shown in the following table. Name Description SAPDB1 Hyper-V virtual machine that runs Microsoft SQL Server 2017 and contains a 30-TB database SAPSRV1 Hyper-V virtual machine that runs Windows Server You have a 500-Mbps ExpressRoute circuit between the on-premises environment and a virtual network. You plan to migrate the landscape to Azure. What should you include in the solution? Azure Site Recovery Microsoft System Center 2019 - Data Protection Manager (DPM 2019) Azure Data Box Azure Backup Server
Correct Answer: Azure Data Box Why Correct: Bandwidth Constraint: The 500-Mbps ExpressRoute circuit is a poor fit for transferring 30 TB. At the full 500 Mbps, 30 TB (240,000,000 megabits) takes about 480,000 seconds, or roughly 5.6 days, and real-world throughput is usually well below line rate; once protocol overhead, contention, and the need to keep the circuit free for production traffic are factored in, the transfer could stretch to several weeks. This makes online replication via ASR impractical for the initial migration. Large Database: Azure Data Box is designed for scenarios with large datasets (e.g., 30 TB), allowing offline transfer of the SQL Server database from SAPDB1. After shipping the Data Box to Azure, the data is uploaded to Azure storage, and you can then attach it to a new Azure VM running SQL Server. SAP Migration: For SAP NetWeaver, migrating the database (SAPDB1) is the bottleneck due to its size. The application server (SAPSRV1) is smaller and could use ASR or manual redeployment, but the question focuses on the overall solution, and Data Box addresses the critical 30-TB challenge. Cost and Efficiency: Data Box minimizes migration time and leverages ExpressRoute for smaller post-migration syncs (e.g., delta changes), aligning with cost-effective planning for SAP on Azure. AZ-120 Relevance: The exam tests knowledge of migration strategies for SAP workloads, including handling large databases. Data Box is a recommended tool for offline data transfer in Azure's SAP migration documentation when bandwidth is limited.
72
HOTSPOT You have an Azure alert rule and action group as shown in the following exhibit. PS Azure:\> Get-AzMetricAlertRuleV2 | Select WindowSize, EvaluationFrequency, Actions -ExpandProperty Criteria WindowSize : 00:05:00 EvaluationFrequency : 00:01:00 Actions : {/subscriptions/6dce0667-3896-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/ providers/microsoft.insights/actiongroups/admins} Name : Metric MetricName : Percentage CPU MetricNamespace : Microsoft.Compute/virtualMachines OperatorProperty : GreaterThan TimeAggregation : Average Threshold : 85 Dimensions : {} AdditionalProperties : {} PS Azure:\> Get-AzActionGroup | Select -ExcludeProperty ResourceGroupName, Tags, Location GroupShortName : admins Enabled : True EmailReceivers : {admins-_EmailAction-} SmsReceivers : {} WebhookReceivers : {} Id : /subscriptions/6dce0667-3896-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/ microsoft.insights/actiongroups/admins Name : admins Type : Microsoft.Insights/actiongroups GroupShortName : restartVM Enabled : True EmailReceivers : {} SmsReceivers : {} WebhookReceivers : {} Id : /subscriptions/6dce0667-3896-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/ microsoft.insights/actiongroups/restartVM Name : restartVM Type : Microsoft.Insights/actiongroups Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Question 1: "The admins action group will be notified if the average CPU usage rises above 85% for" One minute Five minutes One second Question 2: "The [answer choice] when the alert is triggered" Admins action group will be emailed RestartVM action group will be emailed Virtual machines will restart
Final Answers: Statement Answer "The admins action group will be notified if the average CPU usage rises above 85% for": Five minutes "The [answer choice] when the alert is triggered": Admins action group will be emailed Why Correct (Summary): Five Minutes: The WindowSize of 5 minutes defines the evaluation period for the average CPU metric. The alert checks this every minute (EvaluationFrequency), but the condition must hold over the full 5-minute window, making “Five minutes” the correct duration. AZ-120 Context: Understanding alert rule parameters like WindowSize is key for SAP monitoring scenarios. Admins Action Group Will Be Emailed: The alert rule links only to the admins action group, which has an email receiver configured. No other actions (e.g., VM restart or notifying restartVM) are indicated. AZ-120 Context: The exam tests configuring alerts and action groups for SAP workloads, and this matches a typical notification setup.
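For reference, a rule matching the exhibit could be created as sketched below; $vmId stands in for the monitored VM's resource ID, and the rule name and severity are illustrative.

```powershell
# Average "Percentage CPU" > 85 over a 5-minute window, checked every minute.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 85

Add-AzMetricAlertRuleV2 -Name "cpu-over-85" -ResourceGroupName "resourcegroup1" `
    -TargetResourceId $vmId -Condition $criteria -Severity 3 `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 1) `
    -ActionGroupId "/subscriptions/6dce0667-3896-4f0b-bcc4-1ea4da2de0dc/resourcegroups/resourcegroup1/providers/microsoft.insights/actiongroups/admins"
```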
73
HOTSPOT You are planning the Azure network infrastructure for an SAP environment. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements Yes No You can segregate the SAP application layer and the DBMS layer into different virtual networks that are peered by using Global VNet peering. ( ) ( ) You can segregate the SAP application layer and the DBMS layer into different subnets in the same virtual network. ( ) ( ) If you segregate the SAP application layer and the DBMS layer into different peered virtual networks, you will incur costs for the data transferred between the virtual networks. ( ) ( )
Final Answers: You can segregate the SAP application layer and the DBMS layer into different virtual networks that are peered by using Global VNet peering. Yes (Global VNet peering enables this segregation across VNets, even if standard peering could also apply.) You can segregate the SAP application layer and the DBMS layer into different subnets in the same virtual network. Yes (A cost-effective and common practice for SAP on Azure within a single VNet.) If you segregate the SAP application layer and the DBMS layer into different peered virtual networks, you will incur costs for the data transferred between the virtual networks. Yes (Data transfer between peered VNets always incurs costs in Azure.) Why These Are Correct for AZ-120: Network Design: The AZ-120 exam tests knowledge of Azure network infrastructure for SAP, including VNet peering (standard and global) and subnet segregation, which are core design patterns. Cost Awareness: Understanding cost implications (e.g., VNet peering charges vs. subnet usage) is critical for planning SAP deployments, a key exam focus. SAP Best Practices: Both approaches (separate VNets or subnets) align with Azure’s SAP reference architectures, but the cost distinction is a practical decision point.
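Illustrative sketch of the peering in statement 1: global VNet peering uses the same cmdlets as regional peering, the only difference being that the two VNets live in different regions. VNet and resource group names are hypothetical, and the peering must be created in both directions.

```powershell
$appVnet = Get-AzVirtualNetwork -Name "sap-app-vnet" -ResourceGroupName "sap-rg"
$dbVnet  = Get-AzVirtualNetwork -Name "sap-db-vnet"  -ResourceGroupName "sap-rg"

# Create both halves of the peering link.
Add-AzVirtualNetworkPeering -Name "app-to-db" -VirtualNetwork $appVnet `
    -RemoteVirtualNetworkId $dbVnet.Id
Add-AzVirtualNetworkPeering -Name "db-to-app" -VirtualNetwork $dbVnet `
    -RemoteVirtualNetworkId $appVnet.Id
```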
74
You have an existing on-premises SAP landscape that is hosted on VMware vSphere. You plan to migrate the landscape to Azure. You configure the Azure Site Recovery replication policy shown in the following exhibit. Default Policy 🖉 Edit settings 📎 Associate 🗑 Delete Replication settings Source type: VMware/Physical machines Target type: Azure RPO threshold: 60 Minutes Recovery point retention: 24 Hours App consistent snapshot frequency: 120 Minutes Associated Configuration Servers Name Association status Config01 🟢 Associated Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Answer Area During the migration, you can fail over to a recovery point taken up to [dropdown] ago. 60 minutes 120 minutes 24 hours 0 minutes After a planned failover, up to the last [dropdown] of SAP data might be lost. 60 minutes 120 minutes 24 hours 0 minutes
Final Answers: Statement Answer "During the migration, you can fail over to a recovery point taken up to [dropdown] ago." 24 hours "After a planned failover, up to the last [dropdown] of SAP data might be lost." 0 minutes Why Correct (Summary): 24 Hours: The Recovery Point Retention of 24 hours defines how far back recovery points are available. During migration, you can choose any point within this 24-hour window, making “24 hours” the maximum and correct answer. AZ-120 Context: Understanding recovery point retention is key for SAP migration planning with ASR. 0 Minutes: A planned failover in ASR ensures all data is replicated before switching to Azure, resulting in no data loss for SAP systems. The 60-minute RPO threshold is for monitoring replication health, not the outcome of a planned process. AZ-120 Context: The exam tests knowledge of ASR for SAP migrations, and planned failover’s zero-data-loss capability is a critical distinction from unplanned scenarios.
75
DRAG DROP You have an on-premises network and an Azure subscription. You plan to deploy a standard three-tier SAP architecture to a new Azure virtual network. You need to configure network isolation for the virtual network. The solution must meet the following requirements: * Allow client access from the on-premises network to the presentation servers. * Only allow the application servers to communicate with the database servers. * Only allow the presentation servers to access the application servers. * Block all other inbound traffic. What is the minimum number of network security groups (NSGs) and subnets required? To answer, drag the appropriate number to the correct targets. Each number may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Number | Answer Area 1 2 3 4 NSGs: [] Subnets: []
Final Answers: NSGs: 3 Subnets: 3 Why These Are Correct for AZ-120: Subnets (3): The AZ-120 exam emphasizes SAP’s three-tier architecture on Azure, where each layer (presentation, application, database) is segregated into its own subnet for security and manageability. This aligns with Azure’s SAP reference architectures (e.g., hub-and-spoke or single VNet designs). NSGs (3): Network isolation in Azure relies on NSGs to enforce traffic rules. The minimum of 3 NSGs ensures each tier’s unique access requirements are met without compromising the "block all other inbound traffic" mandate. Fewer NSGs would require overly complex rules or fail to isolate traffic properly, which the exam tests against. Efficiency: The solution uses the minimum resources (3 subnets, 3 NSGs) to meet all requirements, reflecting cost-effective and practical design skills assessed in AZ-120. Number Selection: NSGs: Drag "3" to the NSGs target. Subnets: Drag "3" to the Subnets target.
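Illustrative sketch of the 3-subnet/3-NSG layout: one VNet, one subnet and one NSG per tier. The rule contents (on-premises to presentation, presentation to application, application to database, deny the rest) follow the NSG pattern sketched earlier; names and address ranges are hypothetical.

```powershell
# One NSG per tier (rules omitted here for brevity).
$presNsg = New-AzNetworkSecurityGroup -Name "pres-nsg" -ResourceGroupName "sap-rg" -Location "westeurope"
$appNsg  = New-AzNetworkSecurityGroup -Name "app-nsg"  -ResourceGroupName "sap-rg" -Location "westeurope"
$dbNsg   = New-AzNetworkSecurityGroup -Name "db-nsg"   -ResourceGroupName "sap-rg" -Location "westeurope"

# One subnet per tier, each bound to its NSG.
$subnets = @(
    New-AzVirtualNetworkSubnetConfig -Name "presentation" -AddressPrefix "10.0.1.0/24" -NetworkSecurityGroup $presNsg
    New-AzVirtualNetworkSubnetConfig -Name "application"  -AddressPrefix "10.0.2.0/24" -NetworkSecurityGroup $appNsg
    New-AzVirtualNetworkSubnetConfig -Name "database"     -AddressPrefix "10.0.3.0/24" -NetworkSecurityGroup $dbNsg
)
New-AzVirtualNetwork -Name "sap-vnet" -ResourceGroupName "sap-rg" `
    -Location "westeurope" -AddressPrefix "10.0.0.0/16" -Subnet $subnets
```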
76
DRAG DROP You have an SAP landscape on Azure that contains the virtual machines shown in the following table. Name Configuration DB1 Microsoft SQL Server 2017 HANA1 SAP HANA 2.0 WEB01 SAP Web Dispatcher that runs on Windows Server 2019 You need to recommend a recovery solution in the event of an Azure regional outage. The solution must meet the following requirements: * Minimize costs. * Minimize data loss. * Minimize administrative effort. What should you recommend for each virtual machine? To answer, drag the appropriate services to the correct virtual machines. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Services: An AlwaysOn availability group An application group Azure Backup Azure Site Recovery HANA system replication Geo-zone-redundant storage (GZRS) Answer Area: DB1: [] HANA1: [] WEB01: [_________]
Final Answer: DB1: Azure Site Recovery HANA1: Azure Site Recovery WEB01: Azure Site Recovery Why Correct (Summary): Azure Site Recovery (ASR) for all three VMs: Minimizes Costs: No active secondary VMs are needed until failover, unlike AlwaysOn AGs or HSR, which require standby instances. ASR’s pricing is based on replication (e.g., ~$25/month per instance), making it cost-effective. Minimizes Data Loss: Offers low RPO (typically 5-15 minutes for VMware/VM replication), sufficient for most SAP components. While AlwaysOn AGs and HSR can achieve near-zero RPO, the question prioritizes cost and effort alongside data loss, and ASR’s RPO is acceptable. Minimizes Administrative Effort: ASR automates replication, failover, and failback, requiring less setup and management than configuring AlwaysOn AGs (SQL Server clustering) or HSR (HANA-specific replication). Regional Outage: ASR replicates to a paired Azure region, ensuring DR across regions, which aligns with the scenario.
77
DRAG DROP You have an Azure Active Directory (Azure AD) tenant and an SAP Cloud Platform tenant. You need to ensure that users sign in automatically by using their Azure AD accounts when they connect to SAP Cloud Platform. Which four actions should you perform in sequence? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order. Actions | Answer Area From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file. From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app. From the Azure Active Directory admin center, configure the SAP Cloud Platform Identity app to use the Federation Metadata XML file. From the Azure Active Directory admin center, download the Federation Metadata XML file. Configure the SAML settings for the Identifier and Reply URL.
Final Answer (Sequence): From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app. From the Azure Active Directory admin center, download the Federation Metadata XML file. From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file. Configure the SAML settings for the Identifier and Reply URL. Step-by-Step: From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app. Start by adding the SAP Cloud Platform Identity Authentication app in Azure AD to initiate SSO configuration. This is the first step to integrate the two systems. From the Azure Active Directory admin center, download the Federation Metadata XML file. Download Azure AD’s metadata file, which SAP Cloud Platform needs to recognize Azure AD as the IdP. This must happen before configuring SAP’s side. From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file. Upload Azure AD’s metadata to SAP Cloud Platform IAS to establish trust and configure Azure AD as the corporate IdP. This enables SAP to redirect authentication requests to Azure AD. Configure the SAML settings for the Identifier and Reply URL. In Azure AD, configure the SAP app’s SAML settings with the Identifier (Entity ID) and Reply URL from SAP Cloud Platform IAS. This step finalizes the SSO setup, ensuring Azure AD knows where to send SAML assertions. (Note: Typically, you’d get these from SAP IAS metadata or documentation, but the actions list implies manual configuration here.)
78
You have an SAP production landscape that uses SAP HANA databases on Azure. The HANA database server is a Standard_M32ms Azure virtual machine that has 864 GB of RAM. The HANA database is 400 GB. You expect the database to grow by 40 percent during the next 12 months. You resize the HANA database server virtual machine to Standard_M64ms with 1,024 GB of RAM. You need to recommend additional changes to minimize performance degradation caused by database growth. What should you recommend for the HANA database server? Increase the number of vCPUs. Configure additional disks. Add a secondary network interface. Add a scale-out node.
Correct Answer: Configure additional disks Why Correct? Performance Degradation: As the HANA database grows from 400 GB to 560 GB (40% increase), I/O demands on /hana/data and /hana/log will rise. The resize to M64ms addresses RAM (1,024 GB) and CPU (64 vCPUs), but disk performance could become a bottleneck without scaling storage capacity or throughput. SAP HANA on Azure: Azure’s SAP HANA deployment guides (e.g., for M-Series) recommend using multiple striped disks for /hana/data and /hana/log to meet IOPS and throughput needs. Adding disks ensures I/O keeps pace with growth, minimizing degradation. Minimizes Costs: Adding Premium SSDs is cheaper than resizing to a larger VM or adding a scale-out node, aligning with cost efficiency. Minimizes Effort: Configuring additional disks (e.g., via Azure portal or CLI, then updating LVM) is straightforward compared to scale-out or NIC changes. AZ-120 Context: The exam tests optimizing SAP HANA infrastructure on Azure. Disk configuration is a common recommendation for handling database growth, especially on memory-optimized VMs like M-Series.
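Illustrative sketch of the recommendation: attach additional Premium SSDs to the VM, then extend the striped LVM volume for /hana/data inside the guest. The VM name, disk names, sizes, and LUNs are hypothetical; caching should follow the SAP-on-Azure storage guidance (ReadOnly for /hana/data, None or Write Accelerator for /hana/log on M-series).

```powershell
$vm = Get-AzVM -ResourceGroupName "sap-rg" -Name "hanadb01"

# Two extra data disks for the /hana/data stripe set.
$vm = Add-AzVMDataDisk -VM $vm -Name "hanadb01-data-03" -Lun 3 `
        -CreateOption Empty -DiskSizeInGB 512 -StorageAccountType Premium_LRS -Caching ReadOnly
$vm = Add-AzVMDataDisk -VM $vm -Name "hanadb01-data-04" -Lun 4 `
        -CreateOption Empty -DiskSizeInGB 512 -StorageAccountType Premium_LRS -Caching ReadOnly

Update-AzVM -ResourceGroupName "sap-rg" -VM $vm
```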