test2 Flashcards
(78 cards)
This question requires that you evaluate the underlined BOLD text to determine if it is correct.
You have an Azure resource group that contains the virtual machines for an SAP environment.
You must be assigned the Contributor role to grant permissions to the resource group.
Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.
No change is needed
User Access Administrator
Managed Identity Contributor
Security Admin
The correct answer is: User Access Administrator
Explanation:
The underlined bold text states: “You must be assigned the Contributor role to grant permissions to the resource group.” This is incorrect. The Contributor role in Azure allows a user to manage resources (e.g., create, delete, or modify virtual machines, storage, etc.) within a resource group, but it does not grant the ability to assign permissions or roles to others. In Azure Role-Based Access Control (RBAC), the ability to grant permissions (i.e., assign roles) requires a role that includes permissions for managing access, such as the Microsoft.Authorization/roleAssignments/* action.
User Access Administrator: This role allows a user to manage access to Azure resources by assigning roles to others. It includes the necessary permissions (e.g., Microsoft.Authorization/*) to grant permissions to a resource group. This makes it the correct choice for the statement to be accurate in the context of the Azure AZ-120 exam, which focuses on planning and administering Azure for SAP workloads. Managing access to resources like virtual machines in an SAP environment often involves assigning roles, and this role fits that requirement.
Contributor: As mentioned, this role can manage resources but cannot assign roles or grant permissions to others. Thus, the original statement is incorrect.
Managed Identity Contributor: This role is specific to managing user-assigned managed identities and does not provide broad permissions to grant access to a resource group. It’s too narrow for this scenario.
Security Admin: This role is related to managing security policies and configurations in Microsoft Defender for Cloud, not for granting permissions to resource groups in the context of RBAC.
Why “User Access Administrator” is correct:
In the context of the AZ-120 exam, which deals with SAP workloads on Azure, you might need to grant permissions to a resource group containing virtual machines to ensure proper management of the SAP environment. The User Access Administrator role aligns with this need because it allows you to delegate access by assigning roles (e.g., Contributor, Reader, or custom roles) to other users, groups, or service principals at the resource group scope. This is a common administrative task in Azure when setting up and securing SAP environments.
Thus, the corrected statement would be: “You must be assigned the User Access Administrator role to grant permissions to the resource group.”
You have an SAP environment on Azure.
Your on-premises network connects to Azure by using a site-to-site VPN connection.
You need to alert technical support if the network bandwidth usage between the on-premises network and Azure exceeds 900 Mbps for 10 minutes.
What should you use?
Azure Network Watcher
NIPING
Azure Monitor
Azure Enhanced Monitoring for SAP
Correct Answer: Azure Monitor
Why It’s Correct:
Azure Monitor can collect real-time network performance metrics from the site-to-site VPN (via the Virtual Network Gateway), set a custom threshold of 900 Mbps, evaluate it over a 10-minute window, and trigger alerts to technical support. This meets all the requirements of the question.
Azure Network Watcher provides diagnostic tools but lacks native alerting for bandwidth thresholds. NIPING is an SAP-specific latency tool, not a monitoring solution. Azure Enhanced Monitoring for SAP focuses on SAP application metrics, not VPN bandwidth.
For the AZ-120 exam, Azure Monitor is the standard Azure service for monitoring and alerting in SAP-on-Azure deployments, making it the closest and most correct answer.
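The alert logic described above (average a metric over a 10-minute window, fire when it crosses 900 Mbps) can be sketched in plain Python. This is an illustration of what an Azure Monitor metric alert rule evaluates, not an Azure SDK call; the function name and sample values are assumptions.

```python
def should_alert(samples_mbps, threshold_mbps=900):
    """Return True if the windowed average throughput exceeds the threshold."""
    if not samples_mbps:
        return False
    return sum(samples_mbps) / len(samples_mbps) > threshold_mbps

# Ten one-minute samples (Mbps) covering a 10-minute evaluation window.
window = [850, 920, 940, 910, 890, 930, 950, 915, 905, 925]
print(should_alert(window))  # average is 913.5 Mbps -> True
```

In Azure Monitor itself, this corresponds to a metric alert with an Average aggregation, a 10-minute evaluation window, and a static threshold of 900 Mbps on the Virtual Network Gateway's bandwidth metric.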
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an SAP production landscape on-premises and an SAP development landscape on Azure.
You deploy a network virtual appliance to act as a firewall between the Azure subnets and the on-premises network.
You need to ensure that all traffic is routed through the network virtual appliance.
Solution: You create an Azure Traffic Manager profile.
Does this meet the goal?
Yes
No
The correct answer is: No
Explanation:
The goal is to ensure that all traffic between the Azure subnets and the on-premises network is routed through the network virtual appliance (NVA), which is acting as a firewall. Let’s evaluate the proposed solution: creating an Azure Traffic Manager profile.
Azure Traffic Manager: This is a DNS-based traffic routing service that distributes traffic across multiple endpoints (e.g., Azure regions, external endpoints) based on policies like performance, geographic location, or priority. It operates at the application layer (Layer 7) and is primarily used for load balancing and failover scenarios across globally distributed endpoints. It does not control or route network traffic at the subnet or infrastructure level between Azure and an on-premises network through an NVA.
Requirement Analysis: To force all traffic through an NVA acting as a firewall, you need a solution that operates at the network layer (Layer 3/4) and can enforce routing rules. In Azure, this is typically achieved using User-Defined Routes (UDRs) in a route table. UDRs allow you to override Azure’s default system routes and direct traffic from one subnet (e.g., Azure subnets hosting the SAP development landscape) to the NVA’s IP address before it reaches the on-premises network via a VPN or ExpressRoute connection. Traffic Manager does not provide this capability.
Why “No” is correct:
Creating an Azure Traffic Manager profile does not address the requirement of routing all traffic through the NVA. Traffic Manager is designed for endpoint management and load balancing, not for enforcing network-level routing or firewall policies between subnets and on-premises networks.
In the context of the AZ-120 exam (focused on SAP on Azure), ensuring secure and controlled traffic flow between an on-premises SAP production landscape and an Azure-based SAP development landscape often involves network appliances like NVAs. The correct approach would involve configuring UDRs in a route table associated with the Azure subnets to point traffic to the NVA, not using Traffic Manager.
Correct Solution (for context):
While the question only asks about the proposed solution, the correct approach would be to:
Deploy the NVA in a dedicated subnet.
Create a route table with UDRs that set the next hop to the NVA’s IP address for traffic destined to the on-premises network or other Azure subnets.
Associate the route table with the relevant Azure subnets hosting the SAP development landscape.
Thus, the solution “Create an Azure Traffic Manager profile” does not meet the goal, making No the correct answer.
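The route-table step above can be sketched as an ARM resource in the same JSON style used elsewhere in these cards. The address prefix, NVA IP address, and resource name below are illustrative assumptions, not values from the question:

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2023-04-01",
  "name": "sap-dev-routes",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "to-onpremises-via-nva",
        "properties": {
          "addressPrefix": "10.10.0.0/16",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.2.4"
        }
      }
    ]
  }
}
```

The key setting is `nextHopType: VirtualAppliance` with the NVA's private IP as the next hop; associating this route table with the SAP subnets overrides the default system routes.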
You plan to migrate an on-premises SAP environment to Azure.
You need to identify whether any SAP application servers host multiple SAP system identifiers (SIDs).
What should you do?
Run the SAP HANA sizing report.
From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names.
Run the SAP Report from ABAPMeter.
From the SAP EarlyWatch Alert report, compare the services to the reference objects
Why “From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names” is correct:
In the context of the AZ-120 exam (Planning and Administering Microsoft Azure for SAP Workloads), understanding the existing SAP landscape is critical before migration. The EWA report is a commonly used tool in SAP environments to gather system configuration details. Comparing physical and virtual hostnames in the report is a practical method to detect if a single server (physical hostname) is running multiple SAP instances (each potentially with a unique SID, tied to virtual hostnames). This aligns with SAP best practices for landscape discovery and Azure migration planning.
For example, in the EWA report, the “System Configuration” or “Host Overview” section might show a physical server “SAPHOST01” with virtual hostnames “PRD_VHOST” (SID: PRD) and “DEV_VHOST” (SID: DEV), indicating multiple SIDs on one server.
Thus, the correct answer is: From the SAP EarlyWatch Alert report, compare the physical host names to the virtual host names.
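The hostname comparison described above can be sketched as a small script: group the virtual hostnames (each tied to a SID) by physical host and flag any physical server carrying more than one SID. The sample rows imitate what an EarlyWatch Alert "Host Overview" section might list; all names are illustrative.

```python
from collections import defaultdict

def find_multi_sid_hosts(rows):
    """rows: iterable of (physical_host, virtual_host, sid) tuples.
    Returns the physical hosts that run more than one SID."""
    sids_by_host = defaultdict(set)
    for physical, _virtual, sid in rows:
        sids_by_host[physical].add(sid)
    return {host: sorted(sids)
            for host, sids in sids_by_host.items()
            if len(sids) > 1}

rows = [
    ("SAPHOST01", "PRD_VHOST", "PRD"),
    ("SAPHOST01", "DEV_VHOST", "DEV"),
    ("SAPHOST02", "QAS_VHOST", "QAS"),
]
print(find_multi_sid_hosts(rows))  # {'SAPHOST01': ['DEV', 'PRD']}
```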
HOTSPOT
You have an SAP environment on Azure that contains a single-tenant SAP HANA server running as instance 03.
You need to monitor the network throughput from an SAP application server to the SAP HANA server.
How should you complete the script? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer Area
$HANA = [Dropdown]
Get-AzNetworkInterface
Get-AzNetworkUsage
Get-AzNetworkWatcher
Get-AzVM
-Name HANA01-NIC -ResourceGroupName Production
$APP = Get-AzVM -Name AppP01 -ResourceGroupName Production
New-AzNetworkWatcherConnectionMonitor -NetworkWatcher (Get-AzNetworkWatcher)
-Name HANA -DestinationAddress (($HANA).IpConfigurations.PrivateIPAddress)
-DestinationPort [Dropdown]
1433
1434
30115
30315
-SourceResourceId $APP.Id
Correct Answer: Get-AzNetworkInterface
Reasoning:
The variable $HANA is later used in the script to access the HANA server’s private IP address via $HANA.IpConfigurations.PrivateIPAddress. This indicates that $HANA must represent the network interface of the HANA server (HANA01-NIC), as the IpConfigurations property is part of the PSNetworkInterface object returned by Get-AzNetworkInterface.
Get-AzNetworkInterface retrieves the properties of a network interface (NIC) in Azure, including its IP configuration, which is exactly what’s needed here to specify the destination IP address for the connection monitor.
Other options:
Get-AzNetworkUsage: This does not exist as a valid Azure PowerShell cmdlet.
Get-AzNetworkWatcher: Retrieves a Network Watcher resource, not a NIC or IP address.
Get-AzVM: Retrieves a virtual machine object, which could work indirectly (via its NIC), but the script explicitly uses -Name HANA01-NIC, matching the naming convention of a NIC, not a VM. Using Get-AzVM wouldn’t directly provide the IpConfigurations property in this context without additional steps.
Thus, $HANA = Get-AzNetworkInterface -Name HANA01-NIC -ResourceGroupName Production is correct.
Second Dropdown: -DestinationPort [Dropdown]
Options:
1433
1434
30115
30315
Correct Answer: 30315
Reasoning:
The script is monitoring network throughput to an SAP HANA server running as instance 03 (single-tenant). In SAP HANA, the port used for communication depends on the instance number and the type of connection. The standard port convention for SAP HANA SQL access (via JDBC/ODBC) is 3<instance>15, where <instance> is the two-digit instance number.
For instance 03:
The port is calculated as 3<03>15 = 30315.
This port is used by the SAP HANA database for client connections from the SAP application server (e.g., ABAP or Java stack) to the HANA database’s SQL interface.
Other options:
1433: Default port for Microsoft SQL Server, not SAP HANA.
1434: SQL Server Browser Service port, unrelated to SAP HANA.
30115: This would correspond to HANA instance 01 (3<01>15), not instance 03.
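The 3<instance>15 convention above can be computed directly; instance numbers are always two digits, so the instance is zero-padded. A minimal sketch (the function name is illustrative):

```python
def hana_sql_port(instance_number):
    """Standard SQL port for a single-container SAP HANA system:
    3<instance>15, e.g. instance 03 -> 30315."""
    return int(f"3{instance_number:02d}15")

print(hana_sql_port(3))  # 30315
print(hana_sql_port(1))  # 30115
```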
DRAG DROP
Your on-premises network contains an Active Directory domain.
You are deploying a new SAP environment on Azure.
You need to configure SAP Single Sign-On to ensure that users can authenticate to SAP GUI and SAP WebGUI.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area
Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory.
Create an Azure Key Vault service endpoint.
Configure secure network communication (SNC) by using SNCWIZARD.
Change and deploy the logon file.
Change the user profiles for secure network communication (SNC.)
Final Answer Area:
Answer Area:
1. Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory.
2. Configure secure network communication (SNC) by using SNCWIZARD.
3. Change the user profiles for secure network communication (SNC).
4. Change and deploy the logon file.
Correct Four Actions and Sequence:
The question asks for four actions, and we must exclude one. Since Create an Azure Key Vault service endpoint is not directly required for Kerberos-based SSO with AD (it’s more relevant for certificate-based scenarios or additional security), we’ll exclude it. The remaining four actions form a logical sequence:
Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory
Why: This sets up the domain services in Azure, syncing with on-premises AD to provide Kerberos authentication. It’s the prerequisite for SAP SSO.
Order: First, as the identity foundation must be in place.
Configure secure network communication (SNC) by using SNCWIZARD
Why: SNCWIZARD configures the SAP system to use Kerberos via Azure AD DS for SSO. This integrates the SAP environment with the domain.
Order: Second, after Azure AD DS is available.
Change the user profiles for secure network communication (SNC)
Why: User profiles in SAP must be updated with SNC names (e.g., p:CN=username@domain.com) to map AD identities to SAP users. This ensures SSO works for each user.
Order: Third, after SNC is configured on the server side, user-specific settings are applied.
Change and deploy the logon file
Why: The SAP GUI client needs updated configuration (e.g., enabling SNC in the logon settings) to authenticate users without manual credentials. Deploying this ensures end-users can use SSO.
Order: Fourth, as this is the final client-side step after the SAP system and users are configured.
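For context, SNCWIZARD (step 2 above) works by setting SNC-related parameters in the SAP instance profile. The sketch below shows the kind of parameters involved; the identity value and library path are placeholders, not values from this scenario:

```
# Illustrative SAP instance profile parameters for Kerberos-based SNC
snc/enable = 1
snc/identity/as = p:CN=SAPServicePRD@AD.CONTOSO.COM
snc/gssapi_lib = <path to the Kerberos GSS-API library>
snc/accept_insecure_gui = 1
snc/accept_insecure_rfc = 1
```

The `snc/identity/as` value must match the service account registered in the domain (here, the Azure AD DS domain synced from on-premises AD), which is why the domain services must exist before SNC is configured.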
You plan to deploy SAP application servers that run Windows Server 2016.
You need to use PowerShell Desired State Configuration (DSC) to configure the SAP application server once the servers are deployed.
Which Azure virtual machine extension should you install on the servers?
the Azure DSC VM Extension
the Azure virtual machine extension
the Azure Chef extension
the Azure Enhanced Monitoring Extension for SAP
Correct Answer: The Azure DSC VM Extension
Why It’s Correct:
The Azure DSC VM Extension enables PowerShell DSC on Azure VMs, allowing you to automate the configuration of SAP application servers running Windows Server 2016. It directly supports the requirement to “use PowerShell Desired State Configuration” by executing DSC scripts post-deployment.
The other options either don’t exist (“Azure virtual machine extension”), use a different configuration tool (“Azure Chef extension”), or serve a monitoring purpose (“Azure Enhanced Monitoring Extension for SAP”), making them irrelevant to the task.
For the AZ-120 exam, the Azure DSC VM Extension is the standard solution for DSC-based configuration in Azure, aligning with SAP deployment automation best practices.
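A minimal sketch of the kind of DSC configuration the Azure DSC VM Extension could apply to an SAP application server. The configuration name, folder path, and resource choices are illustrative assumptions, not part of the question:

```powershell
Configuration SapAppServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure a local staging directory for SAP installation media exists.
        File SapMediaFolder {
            DestinationPath = 'C:\SAPMedia'
            Type            = 'Directory'
            Ensure          = 'Present'
        }

        # Keep a required Windows service (time sync) running.
        Service W32Time {
            Name  = 'W32Time'
            State = 'Running'
        }
    }
}
```

The extension downloads a published configuration like this and enacts it on the VM after deployment, which is what "configure the SAP application server once the servers are deployed" requires.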
Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Overview
Contoso, Ltd. is a manufacturing company that has 15,000 employees.
The company uses SAP for sales and manufacturing.
Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle.
Existing Environment
Active Directory
The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com.
SAP Environment
The current SAP environment contains the following components:
- SAP Solution Manager
- SAP ERP Central Component (SAP ECC)
- SAP Supply Chain Management (SAP SCM)
- SAP application servers that run Windows Server 2008 R2
- SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12)
Problem Statements
Contoso identifies the following issues in its current environment:
- The SAP HANA environment lacks adequate resources.
- The Windows servers are nearing the end of support.
- The datacenters are at maximum capacity.
Requirements
Planned Changes
Contoso identifies the following planned changes:
- Deploy Azure Virtual WAN.
- Migrate the application servers to Windows Server 2016.
- Deploy ExpressRoute connections to all of the offices and manufacturing facilities.
- Deploy SAP landscapes to Azure for development, quality assurance, and production.
All resources for the production landscape will be in a resource group named SAPProduction.
Business goals
Contoso identifies the following business goals:
- Minimize costs whenever possible.
- Migrate SAP to Azure without causing downtime.
- Ensure that all SAP deployments to Azure are supported by SAP.
- Ensure that all the production databases can withstand the failure of an Azure region.
- Ensure that all the production application servers can restore daily backups from the last 21 days.
Technical Requirements
Contoso identifies the following technical requirements:
- Inspect all web queries.
- Deploy an SAP HANA cluster to two datacenters.
- Minimize the bandwidth used for database synchronization.
- Use Active Directory accounts to administer Azure resources.
- Ensure that each production application server has four 1-TB data disks.
- Ensure that an application server can be restored from a backup created during the last five days within 15 minutes.
- Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP.
It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur.
Proposed Backup Policy
An Azure administrator proposes the backup policy shown in the following exhibit.
Policy name:
✅ SapPolicy
Backup schedule
Frequency: Daily
Time: 3:30 AM
Timezone: (UTC) Coordinated Universal Time
Instant Restore
Retain instant recovery snapshot(s) for 5 Day(s)
Retention range
✅ Retention of daily backup point
At: 3:30 AM
For: 14 Day(s)
✅ Retention of weekly backup point
On: Sunday
At: 3:30 AM
For: 8 Week(s)
✅ Retention of monthly backup point
Week Based - Day Based
On: First Sunday
At: 3:30 AM
For: 12 Month(s)
✅ Retention of yearly backup point
Week Based - Day Based
In: January
On: First Sunday
At: 3:30 AM
For: 7 Year(s)
An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers.
{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[parameters('vmname')]",
  "location": "EastUS",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('vmSize')]"
    },
    "osProfile": {
      "computerName": "[parameters('vmname')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "ImageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2016-datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('vmname'), '-OS')]",
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "diskSizeGB": 128,
        "managedDisk": {
          "storageAccountType": "[parameters('storageAccountType')]"
        }
      },
      "copy": [
        {
          "name": "DataDisks",
          "count": "[parameters('diskCount')]",
          "input": {
            "caching": "None",
            "diskSizeGB": 1024,
            "lun": "[copyIndex('datadisks')]"
          }
        }
      ]
    }
  }
}
This question requires that you evaluate the underlined BOLD text to determine if it is correct.
You are planning for the administration of resources in Azure.
To meet the technical requirements, you must first implement Active Directory Federation Services (AD FS).
Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct.
No change is needed
Azure AD Connect
Azure AD join
Enterprise State Roaming
Correct Answer: Azure AD Connect
Why It’s Correct:
The technical requirement “Use Active Directory accounts to administer Azure resources” is best met by synchronizing on-premises AD with Azure AD using Azure AD Connect, allowing AD users to authenticate and manage Azure resources via RBAC.
AD FS (the underlined text) is a valid but unnecessarily complex alternative, requiring additional infrastructure without clear justification in the case study. Azure AD Connect is the standard, efficient solution for this scenario.
For the AZ-120 exam, Azure AD Connect is the expected answer for hybrid identity in SAP-on-Azure environments unless federation-specific needs are specified, making it the correct choice to replace the underlined text.
You are planning the Azure network infrastructure to support the disaster recovery requirements.
What is the minimum number of virtual networks required for the SAP deployment?
1
2
3
4
Correct Answer: 2
Why It’s Correct:
The minimum number of VNets required is 2: one for the primary region (hosting the production SAP HANA databases and application servers) and one for the DR region (hosting the replicated HANA databases and standby app servers). This satisfies the technical requirement to “ensure that all the production databases can withstand the failure of an Azure region” via SAP HANA system replication across regions.
A single VNet (1) can’t span regions, while 3 or 4 VNets exceed the minimum needed for production DR, especially given the business goal to “minimize costs whenever possible.”
For the AZ-120 exam, a two-VNet setup is the simplest, SAP-supported architecture for region-level DR, making 2 the correct answer.
Which Azure service should you deploy for the approval process to meet the technical requirements?
Just in time (JIT) VM access
Azure Active Directory (Azure AD) Identity Protection
Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Azure Active Directory (Azure AD) conditional access
Correct Answer: Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
Why It’s Correct:
Azure AD PIM implements an approval process for privileged actions, ensuring an SAP administrator is notified and must approve before another administrator can make changes to Azure VMs hosting SAP. It directly addresses the technical requirement by controlling RBAC permissions, which govern all VM modifications (not just login access).
JIT VM Access is limited to VM port access, not broader changes, while Identity Protection and Conditional Access lack approval workflows. PIM’s scope and notification capabilities make it the most precise match.
For the AZ-120 exam, PIM is the expected solution for privileged access management in SAP-on-Azure deployments, making it the correct choice.
HOTSPOT
You are planning replication of the SAP HANA database for the disaster recovery environment in Azure.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area
Statements Yes No
You must use synchronous replication. ( ) ( )
You must use delta data shipping for operation mode. ( ) ( )
You must configure an Azure Active Directory (Azure AD) application to manage the failover. ( ) ( )
Correct Answers:
You must use synchronous replication: No
You must use delta data shipping for operation mode: No
You must configure an Azure Active Directory (Azure AD) application to manage the failover: Yes
Statement 1: “You must use synchronous replication.”
Evaluation:
Synchronous replication ensures zero data loss (RPO = 0) by writing data to both primary and secondary sites before committing transactions. However, it’s latency-sensitive and practical only for short distances (e.g., within a region or nearby availability zones).
The requirement “withstand the failure of an Azure region” implies replication across regions (e.g., hundreds or thousands of kilometers apart), where latency exceeds synchronous replication’s feasibility (typically <1-2 ms round-trip time).
SAP and Azure recommend asynchronous replication for cross-region DR to balance performance and distance, accepting a small RPO. The case study’s “minimize bandwidth” goal further supports asynchronous replication, as it uses less continuous bandwidth than synchronous.
Synchronous replication isn’t mandatory; asynchronous is a valid, supported option.
Answer: No
Why: Synchronous replication isn’t required for cross-region DR; asynchronous replication meets the region-failure requirement and aligns with bandwidth minimization.
Statement 2: “You must use delta data shipping for operation mode.”
Evaluation:
SAP HANA system replication supports multiple operation modes:
Delta data shipping: Periodically sends changed data blocks (e.g., every 10 minutes by default), reducing bandwidth compared to full sync.
Continuous log replay: Replicates transaction logs in near-real-time, offering lower RPO but higher bandwidth usage.
Full sync: Initial sync of all data, not an ongoing mode.
The requirement “minimize the bandwidth used for database synchronization” favors delta data shipping, as it sends only changes periodically rather than continuous log streams.
However, “must use” implies it’s the only option. While delta data shipping is advantageous here, SAP HANA also supports log replay for DR, and the choice depends on RPO/RTO needs (not specified beyond region failure). Log replay could be used if lower RPO is prioritized over bandwidth.
Given the bandwidth minimization requirement, delta data shipping is likely intended, but it’s not strictly mandatory—other modes are technically viable.
Answer: No (with a caveat)
Why: Delta data shipping aligns with the bandwidth-minimization requirement, but "must" overstates it; continuous log replay is also a supported operation mode. Some answer keys mark this Yes because delta data shipping best fits the stated requirement, but No is the more precise reading, since it is not the only viable mode.
Statement 3: “You must configure an Azure Active Directory (Azure AD) application to manage the failover.”
Evaluation:
Automated failover of the SAP HANA DR environment in Azure typically relies on scripts or tooling that must authenticate to Azure Resource Manager to stop, start, or reconfigure the virtual machines and networking. Programmatic access of this kind requires an Azure AD application (service principal) with the appropriate role assignments, so configuring one is required to manage the failover.
Answer: Yes
HOTSPOT
You need to provide the Azure administrator with the values to complete the Azure Resource Manager template.
Which values should you provide for diskCount, StorageAccountType, and domainName? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
diskCount:
0
1
2
4
storageAccountType:
Premium_LRS
Standard_GRS
Standard_LRS
domainName:
ad.contoso.com
ad.contoso.onmicrosoft.com
contoso.com
contoso.onmicrosoft.com
Box 1: 4
Scenario: the Azure Resource Manager template that will be used to provision the production application servers.
Ensure that each production application server has four 1-TB data disks.
Box 2: Standard_LRS
Scenario: Minimize costs whenever possible.
Box 3: contoso.onmicrosoft.com
The network contains an on-premises Active Directory domain named ad.contoso.com.
The initial domain is the default domain (onmicrosoft.com) of the Azure AD tenant, for example, contoso.onmicrosoft.com.
HOTSPOT
Before putting the SAP environment on Azure into production, which command should you run to ensure that the virtual machine disks meet the business requirements? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Get-AzDisk
Get-AzVM
Get-AzVMImage
-ResourceGroupName "SAPProduction" | Where {$_.Sku.Name -ne "
Premium_LRS
Standard_LRS
Standard_RAGRS
StandardSSD_LRS
Correct Answers:
Cmdlet: Get-AzDisk
Storage Type: Premium_LRS
Why They’re Correct:
Get-AzDisk:
This cmdlet retrieves all managed disks in the SAPProduction resource group, allowing a direct check of each disk’s Sku.Name (storage type). It’s the most efficient way to verify disk configurations before production, ensuring compliance with SAP performance needs (implicit in “business requirements”).
Get-AzVM could work but is less focused (VM-level), and Get-AzVMImage is irrelevant (image-level).
AZ-120 tests practical Azure administration for SAP, and Get-AzDisk aligns with disk validation tasks.
Premium_LRS:
Premium SSD (Premium_LRS) is the Azure-recommended storage type for SAP production application servers and HANA databases due to its high IOPS and low latency, critical for performance. The four 1-TB data disks and OS disk should use this type to meet SAP standards, despite the “minimize costs” goal (performance is prioritized in production).
Other options (Standard_LRS, Standard_RAGRS, StandardSSD_LRS) don’t meet SAP production requirements.
The -ne “Premium_LRS” filter ensures all disks are Premium_LRS by flagging exceptions.
Command: Get-AzDisk -ResourceGroupName "SAPProduction" | Where {$_.Sku.Name -ne "Premium_LRS"}
Run this before production to confirm no disks deviate from the required type. An empty result means compliance.
Case Study
Overview
Litware, Inc. is an international manufacturing company that has 3,000 employees.
Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain.
Existing Environment
Infrastructure
Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL.
The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure.
Litware already implemented a site-to-site VPN connection between the on-premises network and Azure.
SAP Environment
Litware currently runs the following SAP products:
- Enhancement Pack 6 for SAP ERP Central Component 6.0 (SAP ECC 6.0)
- SAP Extended Warehouse Management (SAP EWM)
- SAP Supply Chain Management (SAP SCM)
- SAP NetWeaver Process Integration (PI)
- SAP Business Warehouse (SAP BW)
- SAP Solution Manager
All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers.
You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers.
Currently, all SAP applications are in the litware.com domain.
Problem Statements
The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform.
Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations.
Requirements
Business Goals
Litware identifies the following business goals:
- Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server.
- Move away from the current infrastructure vendor to increase the stability and availability of the SAP services.
- Use the new Environment, Health and Safety (EH&S) in Recipe Management function.
- Ensure that any migration activities can be completed within a 48-hour period during a weekend.
Planned Changes
Litware identifies the following planned changes:
- Migrate SAP to Azure.
- Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8.
Technical Requirements
Litware identifies the following technical requirements:
- Implement automated backups.
- Support load testing during the migration.
- Identify opportunities to reduce costs during the migration.
- Continue to use the litware.com domain for all SAP landscapes.
- Ensure that all SAP applications and databases are highly available.
- Establish an automated monitoring solution to avoid unplanned outages.
- Remove all SAP components from the on-premises network once the migration is complete.
- Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased.
- Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure.
HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area
After the migration, you can use Azure Site Recovery to back up the SAP HANA databases. ( ) ( )
After the migration, you can use SAP HANA Cockpit to back up the SAP ECC databases. ( ) ( )
After the migration, you can use SAP HANA Cockpit to back up SAP BW. ( ) ( )
Correct Answers:
After the migration, you can use Azure Site Recovery to back up the SAP HANA databases: No
After the migration, you can use SAP HANA Cockpit to back up the SAP ECC databases: Yes
After the migration, you can use SAP HANA Cockpit to back up SAP BW: No
Why They’re Correct:
ASR for SAP HANA (No):
ASR replicates VMs for DR, not database backups. SAP HANA requires database-specific backup tools (e.g., HANA Cockpit), making ASR unsuitable. AZ-120 expects understanding of backup vs. replication distinctions.
HANA Cockpit for ECC (Yes):
Post-migration, ECC’s database is SAP HANA, and HANA Cockpit is a native, supported tool for HANA backups. This aligns with the “automated backups” requirement and AZ-120’s focus on SAP HANA management.
HANA Cockpit for BW (No):
SAP BW stays on SQL Server, and HANA Cockpit is HANA-specific. This tests knowledge of database platforms in mixed SAP environments, a key AZ-120 concept.
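The three Yes/No answers reduce to matching each system's post-migration database platform to a tool that can actually back it up. The mapping below is an illustrative sketch of that reasoning (Azure Site Recovery replicates VMs for DR, so it appears in neither tool set).

```python
# Sketch of the decision logic behind the three answers:
# each workload's post-migration database platform determines
# which backup tools apply. Tool sets are illustrative, not exhaustive.

platform_after_migration = {
    "SAP ECC": "SAP HANA",   # upgraded to Business Suite on HANA
    "SAP BW": "SQL Server",  # stays on SQL Server per the business goals
}

backup_tools = {
    "SAP HANA": {"SAP HANA Cockpit"},
    "SQL Server": {"SQL Server native backup"},
}

def can_backup(system, tool):
    """True if the tool supports the system's database platform."""
    return tool in backup_tools[platform_after_migration[system]]

print(can_backup("SAP ECC", "SAP HANA Cockpit"))  # True
print(can_backup("SAP BW", "SAP HANA Cockpit"))   # False
```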
You are evaluating which migration method Litware can implement based on the current environment and the business goals.
Which migration method will cause the least amount of downtime?
Use the Database Migration Option (DMO) to migrate to SAP HANA and Azure during the same maintenance window.
Use Near-Zero Downtime (NZDT) to migrate to SAP HANA and Azure during the same maintenance window.
Migrate SAP to Azure, and then migrate SAP ECC to SAP Business Suite on HANA.
Migrate SAP ECC to SAP Business Suite on HANA, and then migrate SAP to Azure.
Correct Answer: Use Near-Zero Downtime (NZDT) to migrate to SAP HANA and Azure during the same maintenance window
Why It’s Correct:
Least Downtime:
NZDT minimizes downtime to the smallest window (e.g., 1-4 hours) by pre-replicating data to SAP HANA on Azure while the source system (SQL Server on-premises) remains operational. The final cutover is quick, far less than DMO’s single-step downtime (8-24 hours) or the cumulative downtime of two-step approaches (12-36 hours).
The question prioritizes “least amount of downtime,” and NZDT is explicitly designed for this, outperforming DMO and multi-step options.
Case Study Alignment:
Fits the 48-hour weekend window (a constraint NZDT easily meets).
Supports the goal of moving to SAP HANA and Azure, improving performance and stability.
High availability is enhanced by minimizing disruption during migration.
Comparison:
DMO: Single window but longer downtime (e.g., 8-24 hours), not “least.”
Two-Step Options (3 & 4): Double downtime across separate windows, clearly more than NZDT or DMO.
NZDT’s replication approach (e.g., via SAP LT or similar) ensures the least interruption.
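The downtime comparison above is simple arithmetic; the sketch below just makes it explicit. The hour figures are the illustrative upper bounds quoted in the explanation, not measured values.

```python
# Illustrative comparison of migration-option downtime.
# Figures are the upper-bound ranges cited in the explanation above.

downtime_hours = {
    "NZDT (single window)": 4,          # near-zero cutover
    "DMO (single window)": 24,          # one-step upgrade + migration
    "Two-step (Azure then HANA)": 36,   # cumulative across two windows
    "Two-step (HANA then Azure)": 36,
}

best = min(downtime_hours, key=downtime_hours.get)
print(best)  # NZDT yields the least downtime

# Every option fits Litware's 48-hour weekend constraint,
# but only NZDT minimizes the interruption itself.
assert all(h <= 48 for h in downtime_hours.values())
```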
You need to ensure that you can receive technical support to meet the technical requirements.
What should you deploy to Azure?
SAP Landscape Management (LaMa)
SAP Gateway
SAP Web Dispatcher
SAPRouter
Correct Answer: SAPRouter
Why It’s Correct:
SAPRouter is the tool specifically designed to enable SAP technical support by establishing a secure connection between Litware’s Azure-hosted SAP systems and SAP’s support infrastructure. It meets the technical requirement by allowing SAP to access and troubleshoot the landscapes (ECC on HANA, EWM, SCM, PI, BW) remotely.
SAP LaMa, SAP Gateway, and SAP Web Dispatcher serve operational, integration, or performance purposes, respectively, but none facilitate SAP support access.
For the AZ-120 exam, SAPRouter is the expected solution for ensuring SAP supportability in Azure, aligning with real-world SAP-on-Azure deployments.
HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Statements Yes No
After the migration, all user authentication to the SAP applications must be handled by Azure Active Directory (Azure AD). ( ) ( )
The migration requires that the on-premises Active Directory domain syncs to
Azure Active Directory (Azure AD). ( ) ( )
After the migration users will be able to authenticate to the SAP applications by using their existing credentials in litware.com. ( ) ( )
Final Answers
Statement 1: No
Statement 2: No
Statement 3: Yes
Why These Are Correct
Statement 1 (No): The absolute phrasing (“must”) doesn’t reflect the flexibility of SAP authentication options on Azure. AZ-120 emphasizes understanding integration possibilities, not mandates.
Statement 2 (No): Migration to Azure focuses on infrastructure and application lift-and-shift or re-platforming; AD syncing is an optional identity step, not a migration requirement.
Statement 3 (Yes): This reflects a typical hybrid identity outcome in SAP-on-Azure deployments, where existing credentials are preserved via AD sync or domain extension, a key AZ-120 concept.
You need to recommend a solution to reduce the cost of the SAP non-production landscapes after the migration.
What should you include in the recommendation?
Deallocate virtual machines when not in use.
Migrate the SQL Server databases to Azure SQL Data Warehouse.
Configure scaling of Azure App Service.
Deploy non-production landscapes to Azure DevTest Labs.
Final Recommendation and Reasoning
Deploy non-production landscapes to Azure DevTest Labs
Reason: While deallocating VMs is a valid cost-saving tactic, Azure DevTest Labs encompasses this capability (via auto-shutdown policies) and adds additional cost-saving and management features tailored to non-production environments. It provides a holistic solution for SAP non-production landscapes by enabling efficient resource provisioning, policy enforcement, and cost tracking, which are critical for managing SAP workloads on Azure. This makes it a more comprehensive and strategic recommendation compared to simply deallocating VMs. The other two options (Azure SQL Data Warehouse and Azure App Service) are not applicable to SAP landscapes, as explained.
Litware is evaluating whether to add high availability after the migration.
What should you recommend to meet the technical requirements?
SAP HANA system replication and Azure Availability Sets
Azure virtual machine auto-restart with SAP HANA service auto-restart.
Azure Site Recovery
Recommendation:
SAP HANA System Replication and Azure Availability Sets is the correct answer.
Why It’s Correct:
Meets HA Goals: This option provides true high availability by combining SAP HANA’s data replication (ensuring database consistency and failover capability) with Azure Availability Sets (protecting against VM-level hardware failures). It minimizes downtime and ensures service continuity within a region, which aligns with typical HA requirements for SAP workloads post-migration.
Azure AZ-120 Context: The exam emphasizes understanding SAP HANA high availability options on Azure, including system replication and infrastructure features like Availability Sets or Zones. This combination is a standard recommendation for SAP HANA HA in Azure documentation and aligns with best practices for production landscapes.
Comparison to Alternatives:
VM Auto-Restart: Too basic; it doesn’t provide redundancy or data replication, failing to meet robust HA standards.
Azure Site Recovery: Focuses on DR across regions, not HA within a region, making it less suitable unless the question explicitly mentions regional failover needs (which it doesn’t).
You are evaluating the migration plan.
Licensing for which SAP product can be affected by changing the size of the virtual machines?
SAP Solution Manager
PI
SAP SCM
SAP ECC
The correct answer is SAP ECC.
Explanation:
When evaluating the migration of SAP workloads to Azure, as part of the AZ-120 exam context, the licensing for SAP ERP Central Component (SAP ECC) can be affected by changing the size of virtual machines. SAP ECC is a core SAP product that often relies on SAP HANA or other database systems for performance optimization in modern deployments. Licensing for SAP ECC is typically tied to the system’s performance capacity, which is measured in SAP Application Performance Standard (SAPS). The SAPS rating is directly influenced by the compute resources (e.g., CPU and memory) allocated to the virtual machines. Changing the size of the virtual machines (e.g., increasing or decreasing vCPUs or RAM) alters the SAPS capacity, which can impact the licensing costs or requirements for SAP ECC.
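The relationship described here (licensing tied to SAPS, which scales with compute) can be sketched numerically. The SAPS-per-vCPU figure below is a made-up placeholder for illustration only, not an official benchmark value; real SAPS ratings come from SAP Standard Application Benchmarks for a specific VM SKU.

```python
# Illustrative only: SAP ECC licensing is tied to SAPS capacity,
# and SAPS scales with the VM's compute resources.
# SAPS_PER_VCPU is a hypothetical placeholder, not a benchmark number.

SAPS_PER_VCPU = 1500

def estimated_saps(vcpus):
    """Rough SAPS estimate under the linear-scaling assumption above."""
    return vcpus * SAPS_PER_VCPU

before = estimated_saps(8)   # e.g., an 8-vCPU VM
after = estimated_saps(16)   # resized to 16 vCPUs
print(before, after)  # resizing doubles the estimated SAPS capacity,
                      # which can change ECC licensing requirements
```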
Topic 2, Misc. Questions
HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer Area
You must split data files and database logs between different Azure virtual disks to
increase the database read/write performance. ( ) ( )
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce
network latency between the servers. ( ) ( )
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the
primary network interface to match the MTU on SAP application servers to reduce
CPU utilization and network latency. ( ) ( )
Final Answers
You must split data files and database logs between different Azure virtual disks to increase the database read/write performance: Yes
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers: Yes
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency: Yes
Reasoning Summary
Statement 1: Splitting data and logs across disks is a standard practice to boost I/O performance for SAP databases, making “Yes” correct.
Statement 2: Accelerated Networking reduces latency by optimizing network traffic, a clear benefit for SAP server communication, so “Yes” is correct.
Statement 3: Matching MTU settings between HLI and application servers minimizes fragmentation and improves efficiency, supporting “Yes” as the correct answer.
DRAG DROP
You deploy an SAP environment on Azure.
You need to grant an SAP administrator read-only access to the Azure subscription. The SAP administrator must be prevented from viewing network information.
How should you configure the role-based access control (RBAC) role definition? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
Values
———————————
"*/read"
"Microsoft.Authorization/*/read"
"Microsoft.Compute/*/read"
"Microsoft.Insights/*/read"
"Microsoft.Management/managementGroups/read"
"Microsoft.Network/*/read"
"Microsoft.Resources/*/read"
“Microsoft.Storage/*/read”
{
"Name": "CustomRole001",
"IsCustom": true,
"Description": "",
"Actions": [ _______________________ ],
"NotActions": [ _______________________ ],
"DataActions": [],
"AssignableScopes":
["/subscriptions/0eaef253-d1ee-423e-a95a-418939ee14ae"]
}
Final Answer:
{
"Name": "CustomRole001",
"IsCustom": true,
"Description": "",
"Actions": ["*/read"],
"NotActions": ["Microsoft.Network/*/read"],
"DataActions": [],
"AssignableScopes": ["/subscriptions/0eaef253-d1ee-423e-a95a-418939ee14ae"]
}
Why This Is Correct:
Read-Only Access: “*/read” in “Actions” provides read-only permissions across all Azure resources in the subscription, aligning with the SAP administrator’s need to monitor the SAP environment without modification rights. This is a common approach in AZ-120 scenarios for granting broad visibility.
Network Exclusion: “Microsoft.Network/*/read” in “NotActions” specifically denies access to network-related information, meeting the requirement to restrict this visibility. The “NotActions” field takes precedence over “Actions”, ensuring the exclusion is enforced.
AZ-120 Relevance: The exam tests understanding of RBAC customization for SAP workloads on Azure, including how to scope permissions and use “Actions” and “NotActions” to fine-tune access. This solution reflects best practices for creating least-privilege roles tailored to SAP administration.
Alternative Values: Granular permissions (e.g., “Microsoft.Compute/*/read”) could work but would require listing all relevant namespaces, which is impractical and error-prone. “*/read” with a “NotActions” exclusion is more efficient and aligns with Azure’s RBAC design.
Correct Selections:
Actions: "*/read"
NotActions: "Microsoft.Network/*/read"
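The way Azure evaluates this custom role can be sketched with simple wildcard matching: an operation is permitted when it matches an entry in Actions and no entry in NotActions. This is a deliberate simplification of the real RBAC engine, for illustration only (fnmatch stands in for Azure's own pattern matching).

```python
# Simplified sketch of RBAC custom-role evaluation:
# allowed = matches an Actions entry AND matches no NotActions entry.
from fnmatch import fnmatch

ACTIONS = ["*/read"]
NOT_ACTIONS = ["Microsoft.Network/*/read"]

def is_allowed(operation, actions=ACTIONS, not_actions=NOT_ACTIONS):
    granted = any(fnmatch(operation, a) for a in actions)
    excluded = any(fnmatch(operation, n) for n in not_actions)
    return granted and not excluded

print(is_allowed("Microsoft.Compute/virtualMachines/read"))          # True
print(is_allowed("Microsoft.Network/virtualNetworks/read"))          # False (NotActions)
print(is_allowed("Microsoft.Compute/virtualMachines/start/action"))  # False (not a read)
```

The SAP administrator can read compute resources but sees nothing under Microsoft.Network, matching the question's requirement.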
You have an Azure virtual machine that runs SUSE Linux Enterprise Server (SLES). The virtual machine hosts a highly available deployment of SAP HANA.
You need to validate whether Accelerated Networking is operational for the virtual machine.
What should you use?
fio
iometer
netsh
ethtool
Why ethtool?
ethtool is a Linux command-line utility specifically designed to display and configure network interface settings. On a Linux-based system like SLES, you can use ethtool to check the status of the NIC and verify whether features like Accelerated Networking are active. For example, commands like ethtool -i <interface> or ethtool -k <interface> can provide details about the driver and offload capabilities, which are indicative of Accelerated Networking being operational. Azure documentation for validating Accelerated Networking on Linux VMs often references ethtool to confirm the use of the hv_netvsc driver and SR-IOV support.
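The check described above can be sketched by parsing ethtool output. The sample text below is made up for illustration; on a real VM you would run ethtool itself. With Accelerated Networking, traffic flows through an SR-IOV virtual function whose driver is typically a Mellanox one (mlx4_en or mlx5_core), alongside the synthetic hv_netvsc interface.

```python
# Sketch: inspect a sample `ethtool -i <interface>` output to see whether
# the interface is backed by an SR-IOV (Mellanox) driver, a sign that
# Accelerated Networking is operational. Sample output is fabricated.

sample_ethtool_output = """\
driver: mlx5_core
version: 5.14.0
firmware-version: 14.25.1020
bus-info: 0002:00:02.0
"""

def driver_of(ethtool_text):
    """Extract the driver name from `ethtool -i` style output."""
    for line in ethtool_text.splitlines():
        if line.startswith("driver:"):
            return line.split(":", 1)[1].strip()
    return None

accelerated = driver_of(sample_ethtool_output) in {"mlx4_en", "mlx5_core"}
print(accelerated)  # True for this sample
```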
You deploy an SAP environment on Azure.
You need to monitor the performance of the SAP NetWeaver environment by using Azure Extension for SAP.
What should you do first?
A. From Azure CLI, install the Linux Diagnostic Extension
B. From the Azure portal, enable the Custom Script Extension
C. From Azure CLI, run the az vm aem set command
D. From the Azure portal, enable the Azure Network Watcher Agent
Correct Answer
C. From Azure CLI, run the az vm aem set command
Why Correct?
The Azure Extension for SAP (Azure Enhanced Monitoring Extension) is a prerequisite for monitoring SAP NetWeaver environments on Azure. It integrates with the SAP Host Agent to collect detailed performance metrics (e.g., system resources, SAP-specific counters) and exposes them to Azure Monitor or SAP’s own monitoring tools. The az vm aem set command is the official and recommended method to deploy and configure this extension on an Azure VM, as outlined in Microsoft’s documentation for SAP workloads on Azure. This step must be performed first before any SAP-specific monitoring can take place, making it the correct initial action for the scenario described.
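After running az vm aem set, one way to confirm the extension landed is to inspect the VM's extension list. The JSON below is a fabricated sample of what `az vm extension list` might return; the extension name shown (AzureEnhancedMonitorForLinux, publisher Microsoft.OSTCExtensions) is the historical Linux AEM extension and may differ by OS and version.

```python
# Sketch: verify the Azure Enhanced Monitoring extension is installed by
# parsing sample `az vm extension list` JSON output. The sample is
# fabricated, and the extension name is an assumption that varies by OS.
import json

sample_extension_list = json.loads("""
[
  {"name": "AzureEnhancedMonitorForLinux",
   "publisher": "Microsoft.OSTCExtensions",
   "provisioningState": "Succeeded"}
]
""")

def aem_installed(extensions):
    """True if an Enhanced Monitoring extension provisioned successfully."""
    return any(
        "EnhancedMonitor" in e["name"] and e["provisioningState"] == "Succeeded"
        for e in extensions
    )

print(aem_installed(sample_extension_list))  # True for this sample
```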