test0 Flashcards

(87 cards)

1
Q

You plan to deploy an SAP landscape on Azure that will use SAP HANA on Azure (Large Instances).

You need to ensure that outbound traffic from the application tier can flow only to the database tier.

What should you use?
A
application security groups
B
network security groups (NSGs)
C
Azure Firewall
D
network virtual appliances (NVAs)

A

The correct answer is: B. Network Security Groups (NSGs)
Why NSGs Are Correct:
SAP HANA on Azure (Large Instances) Context: In this architecture, the application tier runs on Azure VMs within a VNet, while the HANA Large Instances are connected via ExpressRoute or a similar network link. Microsoft documentation for SAP HANA Large Instances recommends using NSGs to secure traffic between tiers (e.g., application to database). NSGs are applied to the subnet hosting the application VMs to control outbound traffic to the HANA Large Instances subnet or IP range.
Requirement Fit: The question specifies that outbound traffic from the application tier must flow only to the database tier. NSGs allow you to:
Create an outbound rule allowing traffic from the application subnet to the HANA Large Instances IP range (e.g., TCP port 30315 for HANA instance 03).
Add a lower-priority “Deny All” rule to block all other outbound traffic.
AZ-120 Relevance: The exam emphasizes practical Azure networking solutions for SAP workloads. NSGs are a fundamental, cost-effective, and native Azure tool for securing SAP landscapes, making them the best fit here.
Example NSG Configuration:
Rule 1: Allow outbound from application subnet (e.g., 10.0.1.0/24) to HANA subnet (e.g., 10.0.2.0/24) on HANA ports (e.g., 30315). Priority: 100.
Rule 2: Deny all outbound traffic. Priority: 200.
This ensures that application tier traffic is restricted to the database tier only.
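
For illustration, a minimal PowerShell sketch of this rule pair (the resource group, NSG name, and location are placeholders; the address ranges and port come from the example above):

$nsg = New-AzNetworkSecurityGroup -Name "nsg-sap-app" -ResourceGroupName "RG-SAP" -Location "eastus"

# Rule 1: allow outbound app-subnet -> HANA-subnet traffic on the HANA port.
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-App-To-HANA" -Direction Outbound -Access Allow `
    -Priority 100 -Protocol Tcp -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.2.0/24" -DestinationPortRange "30315" | Out-Null

# Rule 2: deny all other outbound traffic at a lower priority (higher number).
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Deny-All-Outbound" -Direction Outbound -Access Deny `
    -Priority 200 -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*" | Out-Null

$nsg | Set-AzNetworkSecurityGroup   # commit the rules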

Final Answer:
B. Network Security Groups (NSGs)

Why: NSGs provide the simplest, most direct, and Azure-native method to enforce the traffic restriction, aligning with SAP on Azure best practices and the AZ-120 exam’s focus on practical deployment scenarios.

2
Q

You have an Azure tenant and an SAP Cloud Platform tenant.

You need to ensure that users sign in automatically by using their Azure AD accounts when they connect to SAP Cloud Platform.

Which four actions should you perform in sequence? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area

[ ] Configure the SAML settings for the Identifier and Reply URL.
[ ] From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, configure the SAP Cloud Platform Identity app to use the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, download the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.


A

Final Answer Area:

Answer Area:
1. From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
2. From the Azure Active Directory admin center, download the Federation Metadata XML file.
3. From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
4. Configure the SAML settings for the Identifier and Reply URL.

Correct Four Actions and Sequence:
The question requires only four of the five listed actions; the leftover action (configuring the SAP Cloud Platform Identity app in Azure AD to use the Federation Metadata XML file) is the distractor, because the metadata file downloaded from Azure AD is imported on the SAP side, not reapplied to the Azure AD app. The standard SAML SSO setup between Azure AD and SAP Cloud Platform Identity Authentication therefore consists of registering the app, exchanging metadata, and configuring endpoints. The correct sequence is:

From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
Why: Registering the SAP app in Azure AD is the starting point for SSO configuration.
Order: First, as it creates the app object in Azure AD.
From the Azure Active Directory admin center, download the Federation Metadata XML file.
Why: The metadata file provides Azure AD’s SAML details (e.g., Sign-On URL, certificate) needed by SAP.
Order: Second, after the app is added, you can access and download this file from the SSO settings.
From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
Why: Upload the Azure AD metadata to SAP IAS to establish trust, enabling SAP to accept Azure AD authentication.
Order: Third, after obtaining the metadata file, configure SAP to use it.
Configure the SAML settings for the Identifier and Reply URL.
Why: In Azure AD, configure the SAP app’s SAML settings with the Identifier and Reply URL (obtained from SAP IAS) to complete the federation. This ensures SAML assertions are sent to SAP correctly.
Order: Fourth, as this finalizes the Azure AD side after SAP is configured, though in practice, you might need SAP values earlier (iterative process simplified here).

3
Q

You plan to deploy an SAP production landscape on Azure.

You need to identify which virtual machine series to use for the SAP HANA role and the SAP Central Services (SCS) role. The solution must meet the following requirements:

  • Provide 384 GB of memory for the HANA role.
  • Support ultra disks for the HANA role.
  • Meet SAP certification.
  • Minimize costs.

Which virtual machine series should you identify for each role? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

HANA:
[ ▼ ]
A-Series
D-Series
E-Series

SCS:
[ ▼ ]
A-Series
B-Series
E-Series

A

Final Answer Area:

Answer Area

HANA: [E-Series]
SCS: [B-Series]
Why These Answers Are Correct for AZ-120:
HANA: E-Series:
E48s_v4 (384 GB RAM) matches the exact memory requirement, supports ultra disks, and is SAP-certified for HANA (per SAP HANA on Azure guidelines). It is also far more cost-effective than the M-Series, aligning with “minimize costs” among certified options.
AZ-120 emphasizes selecting VMs from Azure’s SAP-certified list (e.g., SAP Note 2522080).
SCS: B-Series:
B-Series is certified for lightweight SAP components like ASCS/SCS, and its burstable nature keeps costs low for production use. SCS doesn’t need high memory or ultra disks, making B-Series (e.g., B2ms) sufficient and economical.
AZ-120 tests cost-effective sizing for SAP roles beyond HANA.

4
Q

You have an Azure subscription that contains the resources shown in the following table.
Name Type
RG1 Resource group
VM1 Virtual machine
corpsoftware Azure Storage account

You plan to deploy an SAP production landscape.

You create the following PowerShell Desired State Configuration (DSC) and publish the DSC configuration to corpsoftware.
Configuration JRE {

    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Package Installer
    {
        Ensure = 'Present'
        Name = "Java 8"
        Path = "\\File01\Software\JreInstall.exe"
        Arguments = "/x REBOOT=0 SPONSORS=0 REMOVEOUTOFDATEJRES=1 INSTALL_SILENT=1 AUTO_UPDATE=0 EULA=0"
        ProductId = "26242AE4-039D-4CA4-87B4-2F64180101F0"
    }
}

You need to deploy the DSC configuration to VM1.

How should you complete the command? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

-ResourceGroupName RG1 -VMName VM1 -ArchiveStorageAccountName corpsoftware -ArchiveBlobName 'JREInstall.ps1.zip'

Import-AzAutomationDscConfiguration
Set-AZAutomationDSCNode
Set-AzVMDscExtension
Set-AzVMExtension

-AutoUpdate -ConfigurationName

Installer
Java 8
JRE
JREInstall

A

Final Answer Area:

Answer Area

-ResourceGroupName RG1 -VMName VM1 -ArchiveStorageAccountName corpsoftware -ArchiveBlobName 'JREInstall.ps1.zip'

[Set-AzVMDscExtension]

-AutoUpdate -ConfigurationName

[JRE]
Why These Answers Are Correct for AZ-120:
Set-AzVMDscExtension: This cmdlet is the Azure-native way to deploy DSC to a VM, critical for SAP landscapes where components like Java (e.g., for SAP NetWeaver) must be installed consistently. The provided parameters match its syntax, and it’s practical for production deployments without Automation DSC.
JRE: The configuration name must match the DSC script’s definition (Configuration JRE), ensuring the Java 8 installation is applied to VM1. This aligns with DSC conventions and SAP dependency management in Azure.
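Put together, the completed command looks like the following sketch. This assumes the configuration was first packaged and uploaded with Publish-AzVMDscConfiguration; the -Version value (required by the cmdlet) is an assumed DSC extension release, so substitute a current one:

# Deploy the published DSC configuration archive from corpsoftware to VM1.
Set-AzVMDscExtension -ResourceGroupName RG1 -VMName VM1 `
    -ArchiveStorageAccountName corpsoftware -ArchiveBlobName 'JREInstall.ps1.zip' `
    -AutoUpdate -ConfigurationName JRE -Version '2.83'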

5
Q

Your on-premises network is connected to an SAP HANA deployment in the East US Azure region. The deployment uses the Standard SKU of an ExpressRoute gateway.

You need to implement ExpressRoute FastPath. The solution must meet the following requirements:

  • Hybrid connectivity must be maintained if a single datacentre fails in the East US region.
  • Hybrid connectivity costs must be minimized.

Which ExpressRoute gateway SKU should you use?
A
High Performance
B
ErGw3Az
C
Ultra Performance
D
ErGw1AZ

A

Final Answer:
B. ErGw3Az

Why ErGw3Az Is Correct:
FastPath: ExpressRoute FastPath requires an Ultra Performance or ErGw3AZ gateway SKU; the smaller SKUs (including ErGw1AZ and High Performance) do not support it.
High Availability: As an AZ SKU, ErGw3Az is deployed zone-redundantly across Availability Zones in East US (e.g., Zone 1, Zone 2), so hybrid connectivity survives the failure of a single datacenter. Ultra Performance also supports FastPath but is not zone-redundant.
Cost Minimization: Among the SKUs that satisfy both FastPath and zone redundancy, ErGw3AZ is the only candidate, which also makes it the lowest-cost SKU that meets the requirements.

6
Q

You have an on-premises SAP NetWeaver-based ABAP deployment hosted on servers that run Windows Server or Linux.

You plan to migrate the deployment to Azure.

What will invalidate the existing NetWeaver ABAP licenses for each operating system once the servers are migrated to Azure? To answer, drag the appropriate actions to the correct operating systems. Each action may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Actions

  • Changing the hostname assigned to the operating system
  • Deallocating the Azure virtual machine
  • Deleting the Azure virtual machine and recreating a new virtual machine that uses the same disks
  • Using the Redeploy option from the Azure portal of the Azure virtual machine
  • Replacing the primary NIC

Answer Area

Windows Server:
[Blank space]

[Blank space]

Linux:
[Blank space]

A

Final Answer Area:
Answer Area

Windows Server:
[Deleting the Azure virtual machine and recreating a new virtual machine that uses the same disks]
[Replacing the primary NIC]

Linux:
[Changing the hostname assigned to the operating system]
Why This Is Correct for AZ-120:
Windows Server:
Deleting and Recreating VM: In Azure, deleting a VM and recreating it generates a new VM ID, invalidating the hardware-bound license. SAP requires a new license key post-recreation, a key consideration in migration planning (AZ-120).
Replacing the primary NIC: While less definitive, SAP’s hardware key can include network adapter details, so replacing the primary NIC may trigger a license mismatch, especially in strict configurations. Because the answer area has two slots for Windows Server, this is the most defensible second action under SAP’s conservative licensing model.
Linux:
Changing the hostname: SAP NetWeaver ABAP on Linux uses the hostname as the license anchor (e.g., via SAPSYSTEMNAME or slicense). Changing it post-migration invalidates the license, requiring a new key—a critical AZ-120 concept for SAP on Azure.

7
Q

You have an Azure subscription that contains two SAP HANA on Azure (Large Instances) deployments named HLI1 and HLI2. HLI1 is deployed to the East US Azure region. HLI2 is deployed to the West US 2 Azure region.

You need to minimize network latency for inter-region communication between HLI1 and HLI2.

What should you implement?
A
a NAT gateway
B
IP routing tables
C
ExpressRoute FastPath
D
ExpressRoute Global Reach

A

Final Answer:
D. ExpressRoute Global Reach

Why ExpressRoute Global Reach Is Correct:
HANA Large Instances Networking: Each HLI unit is a bare-metal server in an Azure stamp, connected to a customer VNet via a dedicated ExpressRoute circuit (provided by Microsoft as part of the HLI service). Inter-region communication (e.g., East US to West US 2) defaults to Azure’s backbone, but without optimization, it may not be the most direct path.
Global Reach Functionality:
Links the ExpressRoute circuits of East US and West US 2.
Traffic flows over Microsoft’s private, high-speed global network (e.g., via peering locations), reducing hops and latency compared to standard inter-region routing.
Essential for scenarios like HANA System Replication (HSR) between regions, a common SAP HA/DR setup.
Latency Minimization: Global Reach provides the shortest, most predictable path between regions, critical for SAP HANA’s performance-sensitive workloads.

8
Q

You have an on-premises SAP landscape that uses DB2 databases and contains an SAP Financial Accounting (SAP FIN) deployment. The deployment contains a file share that stores 50 GB of bitmap files.

You plan to migrate the on-premises SAP landscape to SAP HANA on Azure and store the images on Azure Files shares. The solution must meet the following requirements:

  • Minimize costs.
  • Minimize downtime.
  • Minimize administrative effort.

You need to recommend a migration solution.

What should you recommend using to migrate the databases and to check the images? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

Migrate the databases:
[Dropdown menu]
- Azure Database Migration Service
- Data Migration Assistant (DMA)
- SAP Database Migration Option (DMO) with System Move

Check the images:
[Dropdown menu]
- AZCopy
- Azure DataBox
- Azure Migrate

A

Final Answer Area:
Answer Area

Migrate the databases:
[SAP Database Migration Option (DMO) with System Move]

Check the images:
[AZCopy]
Why These Answers Are Correct for AZ-120:
SAP DMO with System Move:
Tailored for SAP migrations (DB2 to HANA), a core AZ-120 focus.
Balances downtime, cost, and effort by leveraging SAP’s native tools and Azure’s infrastructure.
Matches SAP HANA on Azure migration guidelines (e.g., SAP Note 2522080).
AZCopy:
Efficient, cost-effective, and low-effort for small file transfers (50 GB) to Azure Files, a common storage choice for SAP landscapes.
Ensures file integrity post-migration, meeting operational needs with minimal overhead.
Aligns with Azure’s recommended tools for SAP file migrations.

9
Q

You are planning a small-scale deployment of an SAP HANA on Azure (Large Instances) landscape.

You identify the costs of the virtual machine SKU required to host the HANA Large Instances landscape.

Which additional costs will be incurred?
A
a Linux support contract
B
an ExpressRoute circuit between the HANA Large Instances stamp and Azure
C
a Site-to-Site VPN connection between the HANA Large Instances stamp and Azure
D
an Azure Rapid Response support contract

A

Final Answer:
A. A Linux support contract

Why “A Linux support contract” Is Correct:
HANA Large Instances Cost Breakdown:
Included: HLI SKU cost (hardware, management, ExpressRoute to VNet).
Customer Responsibility: OS licensing/support (e.g., SUSE or Red Hat for SAP HANA), SAP HANA licensing, and optional Azure support plans.
Linux Support: SAP HANA on Azure (Large Instances) runs on Linux (e.g., SLES or RHEL), and SAP requires an active support contract for the OS in production. This is an additional cost beyond the HLI SKU, as Microsoft doesn’t provide OS licensing or support—it’s the customer’s responsibility.
Requirements Fit:
Small-Scale: Doesn’t eliminate the need for OS support; even a single HLI unit requires it.

10
Q

Your on-premises network has a 100-Mbps internet connection and contains an SAP production landscape that has 14 TB of data files.

You plan to migrate the on-premises SAP landscape to Azure.

You need to migrate the data files to an Azure Files share. The solution must meet the following requirements:

  • Migrate the files within seven days.
  • Minimize administrative effort.
  • Minimize service outages.

What should you use?
A
Azure Migrate
B
AzCopy
C
Azure Data Box
D
Azure Site Recovery

A

Final Answer:
C. Azure Data Box

Why Azure Data Box Is Correct:
7-Day Timeline:
Online transfer (e.g., AzCopy) takes ~13 days at 100 Mbps, exceeding the limit.
Data Box timeline (order, copy, ship, upload) fits within 7 days:
Shipping: 2-3 days each way (assume expedited).
Local copy: 14 TB at 100 MB/s (LAN speed, not internet) = ~40 hours.
Azure upload: Fast internal transfer.
Total: ~5-7 days, achievable with planning.
Minimize Administrative Effort:
Azure manages shipping and upload to Azure Files. Customer only copies data locally (e.g., via SMB to Data Box) and returns the device—simpler than managing a multi-day online transfer.
Minimize Service Outages:
Offline transfer via Data Box uses local storage and shipping, avoiding the 100-Mbps internet link. No impact on production SAP traffic, unlike AzCopy.
Azure Files Fit: Data Box supports direct transfer to Azure Files shares, aligning with the target.
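As a quick sanity check on the “~13 days” estimate, a back-of-the-envelope calculation (assuming the entire 100-Mbps link were dedicated to the transfer and ignoring protocol overhead):

$bits    = 14e12 * 8         # 14 TB expressed in bits
$seconds = $bits / 100e6     # 100 Mbps = 1e8 bits per second
$days    = $seconds / 86400  # ~12.96 days, well past the 7-day window
"{0:N1} days" -f $days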

11
Q

You are planning a deployment of SAP on Azure that will use SAP HANA.

You need to ensure that the SAP application servers are in the same datacenter as the HANA nodes.

What should you use?
A
an application group
B
a proximity placement group
C
a resource group
D
a virtual machine scale set

A

Final Answer:
B. A proximity placement group

Why Proximity Placement Group Is Correct:
SAP HANA Deployment:
If using HANA on Azure VMs (e.g., E-Series, M-Series), a PPG ensures the app servers and HANA VMs are in the same physical datacenter (e.g., same AZ or fault domain).
If using HANA Large Instances (bare-metal), app servers (VMs) in a PPG can be co-located with the HLI stamp’s Azure connectivity point, minimizing latency.
Latency Minimization: PPGs reduce network latency (e.g., <1 ms) by placing resources in the same physical infrastructure, critical for SAP HANA performance (SAP recommends app-to-DB latency <2 ms).
AZ-120 Context: The exam emphasizes optimizing SAP deployments on Azure. PPGs are explicitly recommended in Microsoft’s SAP HANA on Azure documentation (e.g., for HANA VMs or app server proximity to Large Instances) to ensure performance.
Requirements Fit: “Same datacenter” translates to physical proximity in Azure terms, and PPG is the tool designed for this purpose.
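For illustration, a minimal PowerShell sketch of creating a PPG and tying a VM configuration to it (the resource group, names, location, and VM size are placeholders, not values from the question):

$ppg = New-AzProximityPlacementGroup -ResourceGroupName "RG-SAP" -Name "ppg-sap" `
    -Location "eastus" -ProximityPlacementGroupType Standard

# Both the HANA VMs and the application-server VMs reference the same PPG,
# which constrains them to the same physical datacenter.
$vmConfig = New-AzVMConfig -VMName "sap-app01" -VMSize "Standard_E16s_v3" `
    -ProximityPlacementGroupId $ppg.Id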

12
Q

You have an Azure subscription.

You need to deploy multiple virtual machines that will host SAP HANA by using an Azure Resource Manager (ARM) template. The solution must meet SAP certification requirements.

How should you complete the template? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

{
"apiVersion": "2017-06-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
[Dropdown menu] true,
"AuxiliaryMode":
"enableAcceleratedNetworking":
"enableIPForwarding":

    "ipConfigurations": [
        ...
    ]
} }

{
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
"hardwareProfile": {
"vmSize": [Dropdown menu]
- "Standard_DS16_v4"
- "Standard_E16"
- "Standard_M64s"
}
}
}

A

Final Answer Area:

Answer Area

{
"apiVersion": "2017-06-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
[enableAcceleratedNetworking] true,
"ipConfigurations": [

]
}
}

{
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
"hardwareProfile": {
"vmSize": [Standard_M64s]
}
}
}
Why These Answers Are Correct for AZ-120:
“enableAcceleratedNetworking”: Ensures low-latency networking, a SAP HANA certification requirement for Azure VMs. It’s a standard ARM property for NICs supporting HANA’s performance needs.
“Standard_M64s”: A certified, high-memory VM size optimized for SAP HANA (per SAP and Azure guidelines), suitable for multi-VM deployments. The E16 lacks the memory and certification profile for production HANA at this scale, and the DS16_v4 is not a HANA-certified size.
Exam Focus: AZ-120 tests SAP-specific configurations in ARM templates, emphasizing certified VM sizes and networking optimizations like accelerated networking for HANA.

13
Q

You have an on-premises SAP HANA scale-out system with standby node.

You plan to migrate the system to Azure.

You need to configure Azure compute and database resources for the system. The solution must meet the following requirements:

  • Support up to 20 TB of memory per node.
  • Run on non-shared hardware.

What should you use for each resource? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

Compute:
[Dropdown menu]
- An Mv2-series virtual machine
- HANA on Azure (Large Instances) Type I class
- HANA on Azure (Large Instances) Type II class

Database:
[Dropdown menu]
- Azure NetApp Files
- Premium SSD v2 disks
- Ultra disks

A

Final Answer Area:
Answer Area

Compute:
[HANA on Azure (Large Instances) Type II class]

Database:
[Ultra disks]
Why These Answers Are Correct for AZ-120:
Compute: HANA on Azure (Large Instances) Type II class:
Only Type II (e.g., S960) supports 20 TB memory per node, meeting the scale-out requirement.
Bare-metal ensures non-shared hardware, aligning with SAP HANA certification (SAP Note 2522080).
AZ-120 emphasizes HLI for high-memory SAP deployments.
Database: Ultra disks:
Matches the high-performance storage profile of HLI (e.g., NVMe/SSD used in Type II).
Certified for HANA workloads in Azure, making it the best fit among options, despite HLI’s managed storage.
Reflects AZ-120’s focus on selecting appropriate storage for HANA performance.

14
Q

You have an Azure subscription.

You plan to deploy an SAP production landscape on Azure.

You need to select a support plan. The solution must meet the following requirements:

Respond to critical impact incidents within one hour.
Minimize costs.

What should you choose?

A. Standard
B. Premier
C. Professional Direct
D. Basic

A

Final Answer:
A. Standard

Why Standard Is Correct:
1-Hour Response:
The Standard plan includes an initial response within one hour for Severity A incidents (critical impact, e.g., an SAP system outage), meeting the requirement precisely.
Minimize Costs:
At roughly $100/month, Standard is far cheaper than Professional Direct (~$1,000/month) and Premier (enterprise pricing), while Basic (free) provides no technical incident support at all. Standard is therefore the lowest-cost plan that meets both needs.
SAP Production Landscape:
SAP on Azure requires incident support for critical systems (e.g., HANA, NetWeaver). Standard delivers the required response time without the advisory extras of Professional Direct or Premier (e.g., a dedicated manager).

15
Q

You have an on-premises SAP landscape that is hosted on VMware vSphere and contains 50 virtual machines.

You need to perform a lift-and-shift migration to Azure by using Azure Migrate. The solution must minimize administrative effort.

What should you deploy first?
A
an Azure Backup server
B
an Azure VPN gateway
C
an Azure Migrate configuration server
D
an Azure Migrate process server

A

Final Answer:
C. An Azure Migrate configuration server

Closest Correct Answer:
C. An Azure Migrate configuration server
Why “Configuration Server” Is Chosen:
First Step: For VMware, the first component you deploy is the Azure Migrate appliance, which performs discovery and coordinates replication. Early Azure Migrate and Site Recovery documentation called this role the configuration server, so among the listed options it is the closest match, even though the term is dated for today’s agentless VMware flow.
Minimize Administrative Effort: Agentless migration (via the appliance) avoids installing agents on 50 VMs, reducing effort compared with the agent-based approach, where configuration and process servers are separate components.
AZ-120 Context: The exam tests Azure Migrate for SAP migrations. For VMware, deploying the appliance is step one, followed by connectivity (e.g., VPN) and replication.
Option Limitation: No “replication appliance” option is listed; “configuration server” is the best fit among the available choices.

16
Q

You plan to deploy an SAP production landscape on Azure.

You need to estimate how many SAP operations will be processed by the landscape per hour. The solution must minimize administrative effort.

What should you use?
A
SAP Quick Sizer
B
SAP HANA hardware and cloud measurement tools
C
SAP S/4HANA Migration Cockpit
D
SAP GUI

A

Final Answer:
A. SAP Quick Sizer

Why SAP Quick Sizer Is Correct:
Estimating Operations:
Quick Sizer calculates SAPS based on inputs like number of users, transactions per hour, or business processes (e.g., 100 SAPS ≈ 2,000 dialog steps/hour). This directly correlates to “SAP operations processed per hour.”
Outputs resource needs (e.g., CPU, memory) for Azure sizing.
Minimize Administrative Effort:
Web-based tool (accessible via SAP Support Portal), no installation or complex setup.
Users answer a questionnaire (e.g., “100 sales orders/hour”), and it generates results—automated and simple.
Contrast with alternatives requiring deployment (HWCCT) or manual monitoring (SAP GUI).
SAP on Azure:
SAP Quick Sizer is integrated with Azure planning (e.g., Microsoft provides mappings of SAPS to Azure VM SKUs like M-Series or HANA Large Instances).
Recommended in AZ-120 and Azure SAP documentation for greenfield or migration sizing.
Production Landscape:
Ideal for planning a new deployment, ensuring resources match expected throughput.

17
Q

You plan to deploy two Azure virtual machines that will host an SAP HANA database for an SAP landscape. The virtual machines will be deployed to the same availability set.

You need to meet the following requirements:

  • Ensure that the virtual machines support disk snapshots.
  • Ensure that the virtual machine disks provide submillisecond latency for writes.
  • Ensure that each virtual machine can be allocated disks from a different storage cluster.

Which type of operating system disk and HANA database disk should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

Operating system disk:
[Dropdown menu]
- Azure NetApp Files
- Premium storage
- Ultra disk

HANA database disk:
[Dropdown menu]
- Azure NetApp Files
- Premium storage
- Ultra disk

A

Final Answer Area:
Answer Area

Operating system disk:
[Premium storage]

HANA database disk:
[Ultra disk]
Why These Answers Are Correct for AZ-120:
OS Disk: Premium storage:
Meets snapshot and storage cluster requirements.
Submillisecond latency isn’t critical for OS disks in SAP HANA deployments; Premium SSD is standard and cost-effective (e.g., for M-Series VMs).
Aligns with Azure’s SAP HANA VM guidelines.
HANA Database Disk: Ultra disk:
Meets all requirements: snapshots, submillisecond latency (<1 ms), and separate storage clusters via availability set fault domains.
Essential for HANA’s write-intensive workloads (e.g., log writes), matching SAP certification standards.
Availability Set: Ensures VMs (and their disks) are in different fault domains, satisfying the “different storage cluster” requirement for both disk types.
AZ-120 Context: Tests storage selection for SAP HANA on Azure VMs, balancing performance (Ultra for DB) and cost (Premium for OS).

18
Q

You are designing an SAP HANA deployment.

You estimate that the database will be 1.8 TB in three years.

You need to ensure that the deployment supports 60,000 IOPS. The solution must minimize costs and provide the lowest latency possible.

Which type of disk should you use?
A
Standard HDD
B
Standard SSD
C
Ultra disk
D
Premium SSD

A

Final Answer:
C. Ultra disk

Why Ultra Disk Is Correct:
60,000 IOPS:
Ultra Disk scales IOPS independently (up to 160,000 per disk), meeting 60,000 IOPS with a single 2 TB disk.
Premium SSD requires multiple disks (e.g., 3-4), increasing complexity and cost.
Lowest Latency:
Ultra Disk’s submillisecond latency (<1 ms) beats Premium SSD (~1-2 ms), Standard SSD (~5-10 ms), and Standard HDD (~10-20 ms), satisfying “lowest possible.”
Minimize Costs:
A single Ultra Disk (2 TB provisioned at 60,000 IOPS) is billed only for the capacity, IOPS, and throughput provisioned.
Premium SSD would require striping several large disks to reach 60,000 IOPS (e.g., four 16-TB P70 disks at 18,000 IOPS each), paying for far more capacity than the 1.8 TB needed.
Among the options that meet the IOPS and latency targets, Ultra is the least expensive.
1.8 TB Size: Ultra Disk (e.g., 2 TB) covers this with room for growth.
SAP HANA Fit: Certified for HANA on Azure VMs (e.g., M-Series), recommended for data/log volumes due to high IOPS and low latency (SAP Note 2522080).
AZ-120 Context: Tests storage optimization for HANA, balancing performance (IOPS, latency) and cost. Ultra Disk’s flexibility makes it ideal.
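A minimal sketch of provisioning such a disk, where IOPS and throughput are set independently of capacity (the resource group, disk name, location, and zone are placeholders; the target VM must also have Ultra disk compatibility enabled):

# 2-TB Ultra disk with 60,000 IOPS provisioned explicitly.
$diskConfig = New-AzDiskConfig -Location "eastus" -Zone 1 -CreateOption Empty `
    -DiskSizeGB 2048 -SkuName "UltraSSD_LRS" `
    -DiskIOPSReadWrite 60000 -DiskMBpsReadWrite 1000
New-AzDisk -ResourceGroupName "RG-SAP" -DiskName "hana-data" -Disk $diskConfig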

19
Q

You are designing the backup solution for an SAP database.

You have an Azure Storage account that is configured as shown in the following exhibit.
Account kind
StorageV2 (general purpose v2)

Performance
● Standard (x) ○ Premium
ℹ This setting cannot be changed after the storage account is created.

Secure transfer required
○ Disabled ● Enabled (x)

Allow Blob public access
● Disabled (x) ○ Enabled

Allow storage account key access
○ Disabled ● Enabled (x)

Allow recommended upper limit for shared access signature (SAS) expiry interval
○ Disabled ● Enabled (x)

Default to Azure Active Directory authorization in the Azure portal
● Disabled (x) ○ Enabled

Minimum TLS version
[Dropdown menu: Version 1.2]

Blob access tier (default)
○ Cool ● Hot (x)

Replication
[Dropdown menu: Geo-redundant storage (GRS)]

Large file shares
● Disabled (x) ○ Enabled

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.
Answer Area

Data in the storage account is stored on [answer choice].
[Dropdown menu options:]
- hard disk drives (HDDs)
- premium solid-state drives (SSDs)
- standard solid-state drives (SSDs)

Backups will be replicated [answer choice].
[Dropdown menu options:]
- to a storage cluster in the same datacenter
- to another Azure region
- to another zone within the same Azure region

A

Final Answers:
Data in the storage account is stored on: Hard disk drives (HDDs)
Backups will be replicated: To another Azure region

Correct Answer: Hard disk drives (HDDs)
Why It’s Correct:

Performance Tier: The storage account is configured with Standard performance, which uses hard disk drives (HDDs) as the underlying storage medium. Standard-tier storage accounts (e.g., StorageV2 with Standard performance) are designed for cost-effective, general-purpose storage and rely on HDDs, unlike Premium-tier accounts, which use SSDs.
Account Kind: StorageV2 (general purpose v2) supports multiple data types (blobs, files, queues, tables) and can be Standard or Premium, but the Standard selection confirms HDD-based storage.
Comparison:
Premium solid-state drives (SSDs): Used in Premium performance tier (e.g., Premium Block Blob or Premium File Shares), not applicable here due to the Standard setting.
Standard solid-state drives (SSDs): Azure offers Standard SSDs as managed disks for VMs (e.g., E-series disks), but storage accounts don’t use this category; Standard storage accounts use HDDs.
SAP Context: For SAP database backups, Standard StorageV2 with HDDs is suitable for cost-effective, long-term storage (e.g., blob backups), though performance-critical workloads (e.g., HANA) typically use SSDs for live data.

Correct Answer: To another Azure region

Why It’s Correct:

Replication Setting: The storage account uses Geo-redundant storage (GRS), which replicates data to a secondary Azure region (e.g., East US data is replicated to West US). GRS provides redundancy across regions for disaster recovery, ensuring backups are available if the primary region fails.
How GRS Works:
Data is first replicated synchronously three times within the primary region (equivalent to locally redundant storage).
It is then replicated asynchronously to a secondary region hundreds of miles away.
The statement asks where backups “will be replicated,” and GRS’s defining feature is the secondary-region replication.
Comparison:
To a storage cluster in the same datacenter: Applies to Locally Redundant Storage (LRS), which replicates within one data center, not GRS.
To another zone within the same Azure region: Applies to Zone-Redundant Storage (ZRS), which replicates across availability zones in the same region, not GRS.
SAP Context: For SAP production landscapes, GRS is often used for backups to ensure regional resilience, aligning with high-availability and disaster recovery goals.

20
Q

You have an Azure subscription that contains an SAP HANA on Azure (Large Instances) deployment.

The deployment is forecasted to require an additional 256 GB of storage.

What is the minimum amount of additional storage you can allocate?
A
256 GB
B
512 GB
C
1 TB
D
2 TB

A

Final Answer:
C. 1 TB

Why 1 TB Is Correct:
Minimum Increment: Microsoft’s HANA Large Instances storage expansion is standardized at 1 TB increments to maintain performance, consistency, and manageability (e.g., RAID configurations, NVMe/SSD allocation).
256 GB Requirement: While the forecast is 256 GB, the smallest additional allocation available is 1 TB. Customers request this via a support ticket, and Microsoft provisions it.
AZ-120 Context: The exam tests understanding of HLI specifics, including storage constraints. Unlike Azure VMs (where Ultra/Premium disks can scale granularly), HLI uses fixed tiers, and 1 TB is the documented minimum for additional storage.
Cost/Practicality: 1 TB is the smallest unit that meets and exceeds 256 GB, avoiding custom provisioning not offered by Microsoft.

21
Q

You have an Azure subscription. The subscription contains two virtual machines named SQL1 and SQL2 that host a Microsoft SQL Server 2019 Always On availability group named AOG1.

You plan to deploy an SAP NetWeaver system that will have a database tier hosted on AOG1.

You need to configure networking for SQL1 and SQL2. The solution must meet the following requirements:

  • Eliminate the need to create a distributed network name (DNN).
  • Minimize costs.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

Deploy SQL1 and SQL2 to:
[Dropdown menu options:]
- The same subnet on a virtual network
- Two different subnets on the same virtual network
- Two different virtual networks

Configure IP addressing by:
[Dropdown menu options:]
- Assigning two different IP addresses to the availability group listener
- Assigning two IP addresses to the primary network interface on each virtual machine
- Creating two network interfaces on each virtual machine and assigning a different IP address to each interface

A

Final Answer Area:
Answer Area

Deploy SQL1 and SQL2 to:
[The same subnet on a virtual network]

Configure IP addressing by:
[Assigning two different IP addresses to the availability group listener]
Why These Answers Are Correct for AZ-120:
Same Subnet:
VNN with ILB requires SQL1 and SQL2 in the same subnet for the backend pool, eliminating DNN (which supports multi-subnet).
Minimizes cost by avoiding VNet peering or extra subnets.
Aligns with SAP HA setups on Azure (SQL AG with ILB).
Two IPs to Listener:
VNN uses an ILB with a single listener IP, but the “two IPs” option likely reflects a misphrased intent (e.g., listener IP + probe IP or multi-subnet confusion). It’s the closest match to VNN configuration avoiding DNN.
Minimizes cost by not adding NICs or excessive IPs to VMs.
AZ-120 Context: Tests SQL Server AG networking for SAP. VNN+ILB is a standard, cost-effective HA solution when DNN is excluded.
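For illustration, a sketch of the conventional single-listener-IP VNN setup with an internal Standard Load Balancer (all names, addresses, and the probe port are placeholders):

$vnet   = Get-AzVirtualNetwork -Name "vnet-sap" -ResourceGroupName "RG-SAP"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "sql" -VirtualNetwork $vnet

$fe    = New-AzLoadBalancerFrontendIpConfig -Name "aog1-listener" -SubnetId $subnet.Id -PrivateIpAddress "10.0.3.10"
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "sql-pool"
$probe = New-AzLoadBalancerProbeConfig -Name "sql-probe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2

# Floating IP lets the listener address be answered by whichever node currently holds the AG primary.
$rule = New-AzLoadBalancerRuleConfig -Name "aog1-rule" -FrontendIpConfiguration $fe -BackendAddressPool $pool `
    -Probe $probe -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP

New-AzLoadBalancer -Name "ilb-aog1" -ResourceGroupName "RG-SAP" -Location "eastus" -Sku Standard `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule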

22
Q

You have an on-premises network and an Azure subscription.

You plan to deploy a standard three-tier SAP architecture to a new Azure virtual network.

You need to configure network isolation for the virtual network. The solution must meet the following requirements:

  • Allow client access from the on-premises network to the presentation servers.
  • Only allow the application servers to communicate with the database servers.
  • Only allow the presentation servers to access the application servers.
  • Block all other inbound traffic.

What is the minimum number of network security groups (NSGs) and subnets required? To answer, drag the appropriate number to the correct targets. Each number may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Number Answer Area
——————————–
1
2 NSGs: [____]
3
4 Subnets: [____]

A

Final Answer Area:
Answer Area
NSGs: [3]
Subnets: [3]
Why This Is Correct for AZ-120:
Subnets: 3:
Matches SAP’s three-tier architecture (presentation, application, database), a standard in Azure deployments (SAP Note 2522080).
Enables isolation and meets requirements (e.g., app-to-DB only) by segmenting tiers.
Minimum number to satisfy the traffic flow rules without overlap.
NSGs: 3:
One NSG per subnet provides precise control:
Presentation: On-premises access only.
Application: Presentation access only, outbound to DB.
Database: Application access only.
Blocks all other inbound traffic with deny rules (default deny-all applies after allow rules).
Aligns with Azure best practices for SAP network security.
AZ-120 Context: Tests SAP network design on Azure, emphasizing isolation and security with minimal complexity. Three subnets with three NSGs is the standard, efficient solution.
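For illustration, a PowerShell sketch of the minimal layout: three subnets, each paired with its own NSG (the resource group, names, and address ranges are placeholders). The per-tier allow/deny rules would then be added to each NSG as in the earlier NSG example:

# One NSG per tier, attached to that tier's subnet at creation time.
$tiers = [ordered]@{ presentation = '10.1.0.0/24'; application = '10.1.1.0/24'; database = '10.1.2.0/24' }
$subnets = foreach ($tier in $tiers.Keys) {
    $nsg = New-AzNetworkSecurityGroup -Name "nsg-$tier" -ResourceGroupName "RG-SAP" -Location "eastus"
    New-AzVirtualNetworkSubnetConfig -Name $tier -AddressPrefix $tiers[$tier] -NetworkSecurityGroup $nsg
}
New-AzVirtualNetwork -Name "vnet-sap" -ResourceGroupName "RG-SAP" -Location "eastus" `
    -AddressPrefix "10.1.0.0/16" -Subnet $subnets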

23
Q

You have an SAP landscape on Azure that contains the virtual machines shown in the following table.
Name Role Azure Availability Zone in East US
SAPAPP1 Application Server Zone 1
SAPAPP2 Application Server Zone 2

You need to ensure that the Application Server role is available if a single Azure datacenter fails.

What should you include in the solution?
A
Azure Virtual WAN
B
Azure Basic Load Balancer
C
Azure Application Gateway v2
D
Azure AD Application Proxy

A

Final Answer:
C. Azure Application Gateway v2

Why Azure Application Gateway v2 Is Correct:
Zonal HA:
Application Gateway v2 can be deployed zone-redundantly across the Availability Zones in East US, with SAPAPP1 (Zone 1) and SAPAPP2 (Zone 2) in its backend pool.
If Zone 1 fails, traffic is automatically routed to SAPAPP2 in Zone 2, keeping the Application Server role available.
Why Not Basic Load Balancer:
The Basic SKU is not zone-redundant, and its backend pool cannot span virtual machines in different Availability Zones (it is limited to a single availability set or scale set), so it cannot meet the requirement. A Standard Load Balancer could, but it is not among the options.
Other Options:
Azure Virtual WAN is a connectivity service and Azure AD Application Proxy publishes on-premises web apps; neither load-balances the application tier across zones.
AZ-120 Context:
The exam emphasizes zone-aware HA solutions for SAP tiers; among the listed options, only Application Gateway v2 provides zone-redundant load balancing.
Existing Setup: The VMs are already in different zones, so no clustering (e.g., Windows Failover Clustering) is implied, just load balancing.

24
Q

You have an SAP ERP Central Component (SAP ECC) deployment on Azure virtual machines. The virtual machines run Windows Server 2022 and are members of an Active Directory domain named contoso.com.

You install SAP GUI on an Azure virtual machine named VM1 that runs Windows 10.

You need to ensure that contoso.com users can sign in to SAP ECC via SAP GUI on VM1 by using their domain credentials.

What should you do? To answer, drag the appropriate components to the correct tasks. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Components Answer Area
—————————————————————
ABAP Central Services (ASCS) Modify the instance profile for: [_________]
Primary Application Server (PAS) Run the SNC Kerberos Configuration for SAP GUI on: [_________]
SAP Web Dispatcher
VM1 Configure SAP Logon on: [_________]

A

Final Answer Area:
Answer Area
Modify the instance profile for: [Primary Application Server (PAS)]
Run the SNC Kerberos Configuration for SAP GUI on: [VM1]
Configure SAP Logon on: [VM1]
Why This Is Correct for AZ-120:
PAS for Instance Profile:
SNC/Kerberos SSO requires server-side configuration in the SAP system’s instance profile, specifically on the application server (PAS) handling dialog work processes. This is standard for ABAP systems like ECC (SAP Note 352295).
ASCS profiles are more for central services, not user logins.
VM1 for SNC Kerberos Configuration:
Client-side SNC setup occurs on the machine running SAP GUI (VM1). Installing the Kerberos library and configuring it for contoso.com enables SSO, minimizing user effort.
VM1 for SAP Logon:
SAP Logon configuration on VM1 ties the client to the SAP system with SNC enabled, completing the SSO chain from AD to ECC.
AZ-120 Context: The exam tests SAP integration with Azure AD domains for SSO. This solution leverages Kerberos (common for Windows-based SAP ECC) and aligns with Azure SAP deployment best practices.

25
Q

You are deploying an SAP production landscape on Azure. You deploy virtual machines that have SAP Digital Boardroom and SAP HANA installed.

You need to measure network latency between the virtual machines.

What should you use?
A
Network Performance Monitor
B
Iometer
C
Connection Monitor in Azure Network Watcher
D
SockPerf

A

Correct Answer: C. Connection Monitor in Azure Network Watcher

Why It’s Correct:
Azure Integration: Connection Monitor is a native Azure tool within Network Watcher, making it seamless to use in an Azure-based SAP deployment. It integrates with Azure Monitor for logging and alerting, which is valuable for a production environment.
Continuous Monitoring: Unlike SockPerf, which is a manual testing tool, Connection Monitor provides ongoing latency measurements, which is critical for maintaining performance in an SAP production landscape.
Post-Deprecation of NPM: Network Performance Monitor is deprecated as of 2024, so Connection Monitor is the recommended replacement for network performance monitoring in Azure, aligning with current best practices.
SAP Relevance: SAP workloads, including SAP HANA and SAP Digital Boardroom, require low-latency networks. Connection Monitor can measure round-trip time (RTT) between VMs, helping ensure optimal performance for these latency-sensitive applications.
Exam Context (AZ-120): The AZ-120 exam (“Planning and Administering Microsoft Azure for SAP Workloads”) focuses on Azure-specific solutions for SAP deployments, making Connection Monitor, an Azure-native tool, the expected answer over a third-party tool like SockPerf.
26
Q

You have 100 Azure virtual machines that host SAP workloads and have the SAP Host Agent and the SAP Adaptive Extensions installed.

You plan to deallocate the virtual machines during non-business hours.

You need to change the managed disk type of the virtual machines when they are deallocated. The solution must minimize administrative effort.

What should you use?
A
SAP Information Lifecycle Management (ILM)
B
SAP Landscape Management (LaMa)
C
Azure Functions
D
Azure Automation

A

Correct Answer: D. Azure Automation

Why It’s Correct:
Minimized Administrative Effort: Azure Automation allows you to create a runbook (e.g., using PowerShell) to change the disk type of deallocated VMs and schedule it to run during non-business hours. Once configured, it requires little ongoing maintenance, aligning with the requirement to minimize effort.
Scalability: It can easily manage 100 VMs by iterating through a list of resources or by using tags, making it suitable for the scale described.
Azure-Native Solution: As a built-in Azure service, it integrates seamlessly with Azure VMs and managed disks, using cmdlets such as Get-AzVM and Update-AzDisk to change disk types (e.g., from Premium SSD to Standard SSD or vice versa) while VMs are deallocated.
SAP on Azure Context (AZ-120): The AZ-120 exam emphasizes Azure tools and services for managing SAP workloads. Azure Automation is a key tool for automating infrastructure tasks in Azure SAP deployments, making it a strong fit for this question.
Disk Type Change Requirement: Managed disk types can only be changed when a VM is deallocated, and Azure Automation can be triggered or scheduled to run after deallocation, ensuring the operation aligns with the scenario.
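A minimal runbook sketch under two assumptions not stated in the question: the Automation account signs in with a managed identity that has rights over the VMs’ resource groups, and the target SKU is Standard SSD:

# Retype the managed disks of every deallocated VM to Standard SSD.
Connect-AzAccount -Identity

foreach ($vm in Get-AzVM -Status | Where-Object { $_.PowerState -eq 'VM deallocated' }) {
    # Find the managed disks attached to this VM and change their SKU.
    $disks = Get-AzDisk -ResourceGroupName $vm.ResourceGroupName |
        Where-Object { $_.ManagedBy -eq $vm.Id }
    foreach ($disk in $disks) {
        $disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new('StandardSSD_LRS')
        $disk | Update-AzDisk | Out-Null
    }
}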
27
Q

You have an Azure subscription that contains 10 virtual machines.

You plan to deploy an SAP landscape on Azure that will run SAP HANA.

You need to ensure that the virtual machines meet the performance requirements of HANA.

What should you use?
A
ABAP Profiler
B
SAP HANA Hardware and Cloud Measurement Tool (HCMT)
C
Azure Advisor
D
SAP Quick Sizer

A

Correct Answer: B. SAP HANA Hardware and Cloud Measurement Tool (HCMT)

Why It’s Correct:
Purpose-Built for HANA: HCMT is specifically designed to test and validate the performance of infrastructure (e.g., Azure VMs) against SAP HANA’s stringent requirements, including disk throughput, latency, CPU performance, and memory bandwidth.
Performance Validation: The question asks how to “ensure” the VMs meet HANA’s performance requirements, implying a need to measure and confirm the deployed infrastructure’s capability. HCMT provides this by running benchmarks and producing a detailed report.
Azure SAP Context (AZ-120): The AZ-120 exam focuses on deploying and managing SAP workloads on Azure, including ensuring HANA-certified configurations. HCMT is a key tool recognized by SAP and Microsoft for validating Azure VMs (e.g., M-series) for HANA deployments.
Practical Application: For 10 existing VMs, HCMT can be executed on each to verify that its configuration (e.g., VM size, attached disks) meets SAP’s KPIs, ensuring the SAP landscape will perform as expected.
28
HOTSPOT -
Q

You have an on-premises SAP landscape.

You plan to deploy SAP HANA on Azure (Large Instances) to the landscape.

You need to recommend a networking solution that meets the following requirements:

  • Ensures low latency between HANA Large Instances and SAP applications
  • Supports using SAP Solution Manager on-premises

How should you recommend configuring the network? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

To connect to HANA Large Instances, use: [________________]
- Azure Virtual WAN
- ExpressRoute Direct
- ExpressRoute FastPath

Place Azure virtual machines in the same: [________________]
- Instance pool
- Resource group
- Proximity placement group

For transitive routing, use: [________________]
- Azure Firewall
- Azure Traffic Manager
- Azure Application Gateway

A

Final Answers:
To connect to HANA Large Instances, use: [ExpressRoute FastPath]
Place Azure virtual machines in the same: [Proximity placement group]
For transitive routing, use: [Azure Firewall]

Why It’s Correct:
Transitive Routing Requirement: The requirement to “support using SAP Solution Manager on-premises” implies that the on-premises SAP landscape (including Solution Manager) needs connectivity to both HANA Large Instances and Azure VMs. This requires transitive routing, where traffic can flow between on-premises, Azure VMs, and HLI through a hub-and-spoke topology.
Azure Firewall: A managed network security service that can be deployed in a virtual hub or hub VNet to enable transitive routing. It supports routing traffic between on-premises (via ExpressRoute), Azure VNets (hosting SAP application VMs), and HANA Large Instances, ensuring secure and controlled connectivity. In SAP HLI deployments, Azure Firewall is often used in the hub VNet to manage traffic flow.
Azure Traffic Manager: A DNS-based load balancer for directing traffic across regions, not a routing solution for transitive connectivity within a hybrid network.
Azure Application Gateway: An application-layer (Layer 7) load balancer for web traffic, not suitable for the network-layer transitive routing needed here.
AZ-120 Context: Transitive routing in SAP-on-Azure hybrid scenarios often involves hub-and-spoke architectures with Azure Firewall to connect on-premises systems (like SAP Solution Manager) to Azure resources.
29
HOTSPOT -
Q

You plan to deploy an SAP Web Dispatcher named SAP2 by using an Azure Resource Manager template.

You need to configure the template to support the deployment.

How should you complete the template? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [
    {
      "apiVersion": "2019-11-01",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "SAP2-NI",
      "location": "[resourceGroup().location]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[ _________________ (resourceId('Microsoft.Network/virtualNetworks', 'VNET1'), '/subnets/default')]"
              }
            }
          }
        ],
        "enableAcceleratedNetworking": true
      }
    },
    { ... },
    "osDisk": {
      "name": "SAP2-OS",
      "caching": "ReadWrite",
      "createOption": "FromImage",
      "diskSizeGB": 128,
      "managedDisk": {
        "storageAccountType": "[parameters('StorageType')]"
      }
    },
    "copy": [
      {
        "name": "DataDisks",
        "count": "3",
        "input": {
          "Caching": "None",
          "diskSizeGB": 2048,
          "lun": "[ _________________ ('datadisks')]",
          "name": "[ _________________ ,copyIndex('datadisks')]",
          "createOption": "Empty"
        }
      }
    ]
  ]
}

Blank1:
- Add
- Concat
- Substring
- union

Blank2:
- Concat
- CopyIndex
- Length
- Max

A

Final Answers:
Blank 1: Concat
"id": "[concat(resourceId('Microsoft.Network/virtualNetworks', 'VNET1'), '/subnets/default')]"
Blank 2 (LUN): CopyIndex
"lun": "[copyIndex('datadisks')]"
Blank 2 (Name): Concat
"name": "[concat('SAP2-DataDisk', copyIndex('datadisks'))]" (assuming a base name like SAP2-DataDisk is implied or defined elsewhere)

Why Correct:
Blank 1 (Concat): Combines the VNet resource ID with the subnet path, a common ARM pattern for referencing subnets.
Blank 2 (CopyIndex for LUN): Provides sequential LUNs (0, 1, 2) for the three data disks, aligning with VM disk-attachment requirements.
Blank 2 (Concat for Name): Dynamically generates unique disk names by appending the iteration index, essential for the SAP Web Dispatcher storage configuration.
AZ-120 Context: Deploying SAP Web Dispatcher requires precise ARM template configuration for networking and storage, and these functions are standard for Azure SAP deployments.
30
HOTSPOT -
Q

You are building an SAP on Azure production landscape that will contain an Azure virtual machine named VM1. VM1 will host the SAPRouter service.

You plan to inspect all the network traffic between the SAP external network and the SAPRouter service on VM1 by using Azure Firewall.

You need to ensure that the SAPRouter service on VM1 can communicate with the SAP external network by using Azure Firewall.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area

To the saprouttab file, add the: [_______________________________]
- Private IP address of Azure Firewall
- Private IP address of VM1
- Public IP address of Azure Firewall

In Azure Firewall, create: [_______________________________]
- A DNAT rule
- A network rule
- An application rule

A

Final Answers:
To the saprouttab file, add the: [Public IP address of Azure Firewall]
In Azure Firewall, create: [A network rule]

Correct Answer: Public IP address of Azure Firewall
Why It’s Correct:
SAPRouter Overview: SAPRouter is a proxy service that facilitates secure communication between SAP systems and external networks (e.g., SAP’s support network). The saprouttab file is its routing configuration file, which defines permitted connections using permit entries of the form P <source-host> <target-host> <service>.
Scenario Context: VM1 (hosting SAPRouter) needs to communicate with the SAP external network, and all traffic must pass through Azure Firewall for inspection. In a typical Azure setup, Azure Firewall is deployed in a hub VNet, and external traffic enters via its public IP address.
Routing Requirement: For SAPRouter to connect to the SAP external network (e.g., SAP’s support servers), it must route traffic through Azure Firewall, so the saprouttab entries reference the firewall’s public IP as the gateway to the external network. For inbound traffic (e.g., from SAP to VM1), the firewall forwards it to VM1, but the question focuses on VM1’s ability to communicate with the external network, implying outbound configuration.
Why Public IP: Azure Firewall’s public IP is used for external communication in hybrid scenarios. SAPRouter on VM1 sends traffic to this IP, which Azure Firewall inspects and forwards to the SAP external network (e.g., oss001.sap-ag.de).

Correct Answer: A network rule
Why It’s Correct:
Azure Firewall Rules Overview:
DNAT Rule: Destination Network Address Translation rules translate incoming traffic’s destination IP (e.g., the firewall’s public IP) to an internal IP (e.g., VM1’s private IP). This is used for inbound traffic from the external network to VM1.
Network Rule: Defines Layer 3/4 rules (IP, port, protocol) to allow or deny traffic between source and destination IPs, used for both inbound and outbound traffic inspection and forwarding.
Application Rule: Operates at Layer 7, filtering traffic based on FQDNs or application protocols (e.g., HTTP), which does not apply to SAPRouter’s port-based communication.
Scenario Requirement: You need to “inspect all network traffic” and ensure VM1 “can communicate with” the SAP external network. This implies bidirectional traffic (outbound from VM1 to SAP and potentially inbound from SAP to VM1), all routed through Azure Firewall.
Why Network Rule: SAPRouter uses TCP port 3299 for communication, a Layer 4 protocol. A network rule in Azure Firewall can allow outbound traffic from VM1’s private IP to the SAP external network via the firewall, and optionally inbound traffic from the SAP network to VM1 if needed. Network rules are sufficient for IP-and-port-based filtering and inspection, matching SAPRouter’s behavior.
DNAT Consideration: A DNAT rule would be required if the SAP external network initiated connections to VM1 (e.g., for SAP support callbacks), translating the firewall’s public IP to VM1’s private IP. However, the question’s phrasing focuses on VM1 communicating outbound, and inspection calls for firewall rules governing traffic flow, not just inbound NAT; a network rule covers the broader requirement.
31
You plan to deploy a highly available SAP HANA deployment on Azure that will be hosted on a Pacemaker cluster. You need to configure the security principal of the Azure fence agent for the cluster. The solution must minimize administrative effort. What should you use? A a user-assigned managed identity B a system-assigned managed identity C a service principal D Azure shared disks
Correct Answer: B. System-assigned managed identity Why It’s Correct: Azure Fence Agent Support: The Azure fence agent (fence_azure_arm) supports managed identities (both system-assigned and user-assigned) as of recent updates to SUSE/RHEL Pacemaker implementations for SAP HANA on Azure. System-assigned identities are the default recommendation in Microsoft’s SAP HA documentation. Minimized Administrative Effort: Enabling a system-assigned identity on each VM (e.g., via Azure portal, CLI, or ARM template) is a one-step process per VM. Assigning a role (e.g., Virtual Machine Contributor) to the identity for the resource group containing the VMs can be done once and applied to both nodes. No credentials or secrets to manage, unlike a service principal, and no separate resource to create, unlike a user-assigned identity. SAP HANA HA Context: For a two-node Pacemaker cluster (e.g., HANA System Replication with primary and secondary nodes), each VM needs to authenticate to Azure to fence the other node. System-assigned identities simplify this by tying the identity to each VM automatically. AZ-120 Alignment: The exam emphasizes best practices for SAP HA on Azure, including leveraging managed identities to streamline security and reduce operational overhead, a key improvement over older service principal-based approaches.
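A minimal sketch of the setup (VM and resource-group names are assumptions; the role shown follows this card, although Microsoft also documents a narrower custom fence-agent role):

# Enable a system-assigned managed identity on each cluster node
az vm identity assign --resource-group sap-rg --name hana-node1
az vm identity assign --resource-group sap-rg --name hana-node2

# Grant each node's identity rights over the resource group (repeat per node)
principalId=$(az vm show --resource-group sap-rg --name hana-node1 \
  --query identity.principalId --output tsv)
az role assignment create --assignee "$principalId" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/sap-rg"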
32
HOTSPOT - Case Study - This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study - To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview - Contoso, Ltd. is a manufacturing company that has 15,000 employees. The company uses SAP for sales and manufacturing. Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle. Existing Environment - Active Directory - The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com. SAP Environment - The current SAP environment contains the following components: * SAP Solution Manager * SAP ERP Central Component (SAP ECC) * SAP Supply Chain Management (SAP SCM) * SAP application servers that run Windows Server 2008 R2 * SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12) Problem Statements - Contoso identifies the following issues in its current environment: * The SAP HANA environment lacks adequate resources. * The Windows servers are nearing the end of support. * The datacenters are at maximum capacity. Requirements - Planned Changes - Contoso identifies the following planned changes: * Deploy Azure Virtual WAN. * Migrate the application servers to Windows Server 2016. * Deploy ExpressRoute connections to all of the offices and manufacturing facilities. * Deploy SAP landscapes to Azure for development, quality assurance, and production. All resources for the production landscape will be in a resource group named SAPProduction. Business goals - Contoso identifies the following business goals: * Minimize costs whenever possible. * Migrate SAP to Azure without causing downtime. * Ensure that all SAP deployments to Azure are supported by SAP. * Ensure that all the production databases can withstand the failure of an Azure region. * Ensure that all the production application servers can restore daily backups from the last 21 days. Technical Requirements - Contoso identifies the following technical requirements: * Inspect all web queries. * Deploy an SAP HANA cluster to two datacenters. * Minimize the bandwidth used for database synchronization. 
* Use Active Directory accounts to administer Azure resources. * Ensure that each production application server has four 1-TB data disks. * Ensure that an application server can be restored from a backup created during the last five days within 15 minutes. * Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP. It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur. Proposed Backup Policy - An Azure administrator proposes the backup policy shown in the following exhibit. * Policy name [ SapPolicy ✓ ] Backup schedule * Frequency [ Daily ▼ ] * Time [ 3:30 AM ▼ ] * Timezone [ (UTC) Coordinated Universal Time ▼ ] Instant Restore Retain instant recovery snapshot(s) for [ 5 ▼ ] Day(s) Retention range ☑ Retention of daily backup point. * At [ 3:30 AM ▼ ] * For [ 14 ▼ ] Day(s) ☑ Retention of weekly backup point. * On [ Sunday ▼ ] * At [ 3:30 AM ▼ ] * For [ 8 ▼ ] Week(s) ☑ Retention of monthly backup point. ( Week Based Day Based ) * On [ First ▼ ] * Day [ Sunday ▼ ] * At [ 3:30 AM ▼ ] * For [ 12 ▼ ] Month(s) ☑ Retention of yearly backup point. ( Week Based Day Based ) * In [ January ▼ ] * On [ First ▼ ] * Day [ Sunday ▼ ] * At [ 3:30 AM ▼ ] * For [ 7 ▼ ] Year(s) Azure Resource Manager Template An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers. { "apiVersion": "2017-03-30", "type": "Microsoft.Compute/virtualMachines", "name": "[parameters('vmname')]", "location": "EastUS", "dependsOn": [ "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]" ], "properties": { "hardwareProfile": { "vmSize": "[parameters('vmSize')]" }, "osProfile": { "computerName": "[parameters('vmname')]", "adminUsername": "[parameters('adminUsername')]", "adminPassword": "[parameters('adminPassword')]" }, "storageProfile": { "imageReference": { "publisher": "MicrosoftWindowsServer", "offer": "WindowsServer", "sku": "2016-datacenter", "version": "latest" }, "osDisk": { "name": "[concat(parameters('vmname'), '-OS')]", "caching": "ReadWrite", "createOption": "FromImage", "diskSizeGB": 128, "managedDisk": { "storageAccountType": "[parameters('storageAccountType')]" } } }, "copy": [ { "name": "DataDisks", "count": "[parameters('diskCount')]", "input": { "caching": "None", "diskSizeGB": 1024, "lun": "[copyIndex('datadisks')]" } } ] } } You are evaluating the proposed backup policy. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements If the backup policy is implemented, a file backed up on the first Sunday of a month can be restored one year after the file was deleted. The backup policy meets the technical requirements. The backup policy meets the business requirements.
Final Answers: Statement 1: Yes Statement 2: Yes Statement 3: No Answer Area Analysis: Statement 1: "If the backup policy is implemented, a file backed up on the first Sunday of a month can be restored one year after the file was deleted." Correct Answer: Yes Why It’s Correct: The backup policy includes retention rules for monthly and yearly backups: Monthly retention: Keeps backups from the first Sunday of each month for 12 months. Yearly retention: Keeps backups from the first Sunday of January for 7 years. Consider a file backed up on the first Sunday of a month (e.g., January 5, 2025, assuming it’s the first Sunday): If this is not January, the monthly retention applies, retaining it for 12 months (1 year). If the file is deleted immediately after backup (e.g., January 6, 2025), the backup remains available until January 5, 2026—exactly 1 year after deletion. If this is January (e.g., first Sunday is January 5, 2025), the yearly retention applies, retaining it for 7 years (until January 5, 2032). Even if deleted on January 6, 2025, it’s restorable 1 year later (January 6, 2026). In both cases, the file can be restored “one year after the file was deleted” because: Monthly retention covers non-January first Sundays for 12 months. Yearly retention covers January first Sundays for 7 years, far exceeding 1 year. The statement doesn’t specify the month, so as long as the retention period is at least 1 year (which it is), it holds true. AZ-120 Context: Understanding backup retention periods is key for SAP workloads on Azure. Statement 2: "The backup policy meets the technical requirements." Correct Answer: Yes Why It’s Correct: Relevant Technical Requirement: “Ensure that an application server can be restored from a backup created during the last five days within 15 minutes.” Evaluation: 5-day availability: The policy includes: Instant Restore: Retains snapshots for 5 days, allowing rapid recovery from any backup within this window. Daily retention: Keeps backups for 14 days, covering the last 5 days even beyond the Instant Restore period. Thus, backups from the last 5 days are always available. 15-minute restore time: Azure Backup’s Instant Restore feature uses locally retained snapshots (kept alongside the disks rather than first transferred to the Recovery Services vault), enabling VM restoration in minutes (typically under 15 minutes for OS and data disk recovery, depending on size and configuration). This meets the 15-minute requirement for the 5-day window. Beyond 5 days (up to 14 days daily retention), restores from the vault may take longer (e.g., 30+ minutes), but the requirement only specifies the last 5 days, which Instant Restore covers. No other technical requirements (e.g., 4 x 1-TB disks) directly relate to the backup policy’s evaluation here, and the ARM template confirms compliance with disk sizing separately. AZ-120 Context: Azure Backup configuration for SAP workloads must meet strict recovery time objectives (RTO), and Instant Restore is a key feature tested in the exam. Statement 3: "The backup policy meets the business requirements." Correct Answer: No Why It’s Correct: Relevant Business Goal: “Ensure that all the production application servers can restore daily backups from the last 21 days.” Evaluation: The policy’s daily retention is 14 days, meaning daily backups are kept for only 14 days. Beyond 14 days, only weekly (8 weeks), monthly (12 months), and yearly (7 years) backups are retained: Weekly backups (Sundays) don’t provide daily granularity for days 15–21.
For example, if today is March 23, 2025, daily backups are available back to March 9, 2025 (14 days). Days March 8–March 2 (15–21 days ago) are not covered by daily backups, only by the prior Sunday’s weekly backup (if applicable). The requirement specifies “daily backups from the last 21 days,” implying all 21 days must have a daily restore point, not just weekly snapshots for days 15–21. The policy falls short by 7 days (14 vs. 21). Other business goals (e.g., minimize costs, no downtime) aren’t contradicted by the policy but aren’t sufficient to override this specific unmet requirement.
33
HOTSPOT - You plan to implement a deployment of SAP NetWeaver on Azure. The deployment will be hosted on virtual machines that run Windows Server 2022. You need to configure an authentication solution for the deployment. The solution must meet the following requirements: * Support single sign-on (SSO) and multi-factor authentication (MFA) for SAP NetWeaver applications. * Minimize administrative effort. What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area Authenticate the virtual machines by using: [Dropdown] - Active Directory Domain Services (AD DS) - Azure AD - Azure Active Directory Domain Services (Azure AD DS) Authenticate the SAP NetWeaver applications by using: [Dropdown] - Active Directory Domain Services (AD DS) - Azure AD - Azure Active Directory Domain Services (Azure AD DS)
Final Answers: Authenticate the virtual machines by using: Azure Active Directory Domain Services (Azure AD DS) Authenticate the SAP NetWeaver applications by using: Azure AD Correct Answer: Azure Active Directory Domain Services (Azure AD DS) Why It’s Correct: Context: The virtual machines (VMs) running Windows Server 2022 host the SAP NetWeaver deployment. Authenticating the VMs means joining them to a domain so they can leverage domain-based authentication for system-level access (e.g., admin logins, service accounts). Azure AD DS: Azure AD DS provides a managed domain service that syncs with Azure AD, allowing VMs to be domain-joined without requiring on-premises AD DS infrastructure. Windows Server 2022 VMs can join an Azure AD DS domain, enabling Kerberos/NTLM authentication for system-level operations. It minimizes administrative effort because it’s a fully managed service—no need to deploy, manage, or patch domain controllers. Requirements Fit: While SSO and MFA are specified for SAP NetWeaver applications (next blank), VM authentication must support the underlying infrastructure. Azure AD DS integrates with Azure AD (which supports MFA/SSO) and provides the domain join capability needed for Windows Server VMs in an SAP landscape. Correct Answer: Azure AD Why It’s Correct: Context: SAP NetWeaver applications (e.g., SAP GUI, Web Dynpro) need an authentication solution for end-user access, supporting SSO and MFA. Azure AD: Azure AD is Microsoft’s cloud identity platform that natively supports single sign-on (SSO) via SAML, OpenID Connect, or OAuth, and multi-factor authentication (MFA) for users accessing SAP NetWeaver applications. SAP NetWeaver can be configured to use Azure AD as an Identity Provider (IdP) via SAML 2.0 for web-based interfaces (e.g., SAP NetWeaver AS ABAP or Java), enabling SSO across SAP and other Azure AD-integrated apps. MFA can be enforced through Azure AD Conditional Access policies. It minimizes administrative effort by leveraging a cloud-managed service with built-in SSO/MFA capabilities, avoiding the need to manage on-premises identity infrastructure. Integration with SAP: SAP NetWeaver supports SAML-based SSO, and Azure AD is a supported IdP. Users authenticate via Azure AD, and SAP trusts the SAML assertions for seamless access. For SAP GUI, SSO can be achieved using Kerberos (via Azure AD DS or AD DS) with Azure AD Seamless SSO, but the question emphasizes application-level SSO/MFA, pointing to Azure AD’s cloud capabilities.
34
You have an on-premises SAP Enterprise Central Component (ECC) landscape that is hosted on servers that run Windows Server and uses an Oracle database. You need to migrate the landscape to SAP S/4HANA on Azure virtual machines. The solution must minimize downtime. What should you use? A Azure Site Recovery B Software Update Manager (SUM) C Software Provisioning Manager (SWPM) D Azure Migrate
Correct Answer: B. Software Update Manager (SUM) Why It’s Correct: Comprehensive Migration: SUM with the Database Migration Option (DMO) handles both the application conversion (ECC to S/4HANA) and database migration (Oracle to HANA) in a single process, tailored for SAP landscapes. Minimized Downtime: DMO optimizes downtime by: Pre-migrating data to the target SAP HANA database on Azure VMs during uptime. Performing a final delta sync and cutover during a brief downtime window (potentially hours, depending on data size and optimization). Downtime-optimized DMO further reduces this by parallelizing tasks. Azure Integration: SUM/DMO is supported for Azure migrations when Azure VMs are pre-provisioned with SAP HANA, aligning with the target environment. Infrastructure setup (e.g., via Azure portal or ARM templates) complements SUM but isn’t the focus of the question.
35
DRAG DROP - You have a bill of materials (BOM) that describes SAP deployments. You plan to automate the implementation of an SAP S/4HANA deployment to Azure by using the SAP deployment automation framework on Azure. You need to generate the SAP application templates for the planned implementation and update the BOM. In which order should you perform the actions? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order. Actions - Install the SAP HANA and SAP Central Services (SCS) instances. - Load the database content. - Generate and combine the parameter files of the application servers. - Generate an ABAP Central Services (ASCS) parameter file. - Add the templates to the BOM. [Arrow buttons for moving selections between columns] Answer Area
Correct Order in Answer Area: Generate an ABAP Central Services (ASCS) parameter file Generate and combine the parameter files of the application servers Install the SAP HANA and SAP Central Services (SCS) instances Load the database content Add the templates to the BOM Why Correct: Step 1 (Generate ASCS parameter file): Starts with the core central services configuration, a prerequisite for the SAP application tier. Step 2 (Generate and combine app server parameter files): Builds the application layer templates, dependent on ASCS, ensuring a complete application configuration. Step 3 (Install HANA and SCS): Deploys the infrastructure and installs critical components using the generated templates, aligning with SDAF’s IaC approach. Step 4 (Load database content): Populates the HANA database post-installation, a necessary step for a functional S/4HANA system. Step 5 (Add templates to BOM): Finalizes the process by updating the BOM with all generated templates, completing the automation preparation.
36
DRAG DROP - You have an Azure subscription that is linked to an Azure AD tenant. The subscription contains a virtual machine named VM1. You install SAP Landscape Management (LaMa) on VM1. You need to ensure that you can use SAP LaMa to manage the deployment of SAP workloads to Azure virtual machines. The solution must minimize administrative effort. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions - From SAP Landscape Management, create an SAP LaMa connector. - From the Azure portal, create an app registration. - From the Azure portal, create a user-assigned managed identity. - From the Azure portal, enable a system-assigned managed identity for VM1. - From the Azure portal, assign the Contributor role to the managed identity for the subscription. [Arrow buttons for moving selections between columns] Answer Area
Correct Order in Answer Area: From the Azure portal, enable a system-assigned managed identity for VM1 From the Azure portal, assign the Contributor role to the managed identity for the subscription From SAP Landscape Management, create an SAP LaMa connector Why Correct: Step 1 (Enable system-assigned managed identity): Provides VM1 (hosting SAP LaMa) with an Azure AD identity, the foundation for authentication, with no manual credential management. Step 2 (Assign Contributor role): Grants the necessary permissions for SAP LaMa to manage SAP workload VMs across the subscription, a prerequisite for the connector to function. Step 3 (Create SAP LaMa connector): Configures SAP LaMa to use the identity and permissions, enabling it to deploy and manage SAP workloads on Azure. Minimized Effort: Using a system-assigned managed identity avoids the overhead of creating and maintaining an app registration or user-assigned identity, aligning with the requirement. AZ-120 Context: The exam emphasizes integrating SAP tools like LaMa with Azure, favoring managed identities for modern, low-effort authentication solutions.
37
DRAG DROP - You plan to deploy an SAP production landscape on Azure. The landscape will use SAP HANA databases that run on Azure virtual machines. Each HANA virtual machine will contain the following three premium data disks: * Shared * Data * Log You need to configure caching on the data disks. The solution must meet the following requirements: * Maximize data throughput. * Minimize potential data loss. Which caching configuration should you use for each disk? To answer, drag the appropriate caching configurations to the correct disks. Each caching configuration may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Caching configurations Answer Area No Caching /hana/data: [________] Read cache /hana/log: [________] Write Accelerator /hana/shared: [________]
Correct Configuration in Answer Area: /hana/data: [No Caching] /hana/log: [Write Accelerator] /hana/shared: [No Caching] /hana/data: No Caching Why Correct: SAP HANA’s data disk requires high throughput for both reads and writes, as it handles large volumes of transactional and analytical data. No Caching (None) ensures all I/O operations go directly to the premium SSD, maximizing throughput by leveraging the disk’s native performance (e.g., 7,500 IOPS and 250 MB/s for a P50 disk) without cache overhead. It minimizes potential data loss because writes are committed to persistent storage immediately, avoiding risks of cache volatility. SAP’s certification for HANA on Azure recommends “None” for data disks to ensure consistent performance and durability (per SAP Note 2569261). /hana/log: Write Accelerator Why Correct: The log disk handles transaction logs, where write latency is critical to ensure quick commit times and minimize data loss during failures (e.g., crash recovery depends on durable logs). Write Accelerator enhances write performance by caching write operations in memory, then flushing them to premium SSDs with low latency, while Azure’s backend ensures durability (replication to persistent storage). It maximizes throughput for sequential writes (log workload) and meets HANA’s strict latency requirements (<1 ms for commits). Supported only on M-series VMs (common for HANA) and explicitly recommended by SAP and Microsoft for HANA log volumes (per Azure documentation). /hana/shared: No Caching Why Correct: The shared disk stores executables, configuration files, and backups, with a mix of read and write operations. It’s less performance-critical than data or log disks. No Caching ensures consistent I/O performance directly to the premium SSD, maximizing throughput for both reads and writes without cache dependency. It minimizes potential data loss by committing writes to persistent storage immediately, aligning with production reliability needs. SAP and Azure guidelines typically recommend “None” for shared volumes to avoid caching overhead and ensure durability.
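A hedged sketch of how these settings map to the CLI (disk and VM names, sizes, and LUNs are assumptions; Write Accelerator requires an M-series VM):

# Attach data and log disks with host caching disabled
az vm disk attach --resource-group sap-rg --vm-name hana-vm \
  --name hana-data1 --new --size-gb 1024 --sku Premium_LRS --caching None --lun 0
az vm disk attach --resource-group sap-rg --vm-name hana-vm \
  --name hana-log1 --new --size-gb 512 --sku Premium_LRS --caching None --lun 1

# Enable Write Accelerator on the log disk at LUN 1
az vm update --resource-group sap-rg --name hana-vm --write-accelerator 1=true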
38
DRAG DROP - You have an Azure subscription that contains a D-series virtual machine named SQL1. You plan to deploy an SAP landscape on Azure that will have Microsoft SQL Server installed. You install a SQL server on SQL1 and place databases and logs on separate disks. You need to configure caching for the disks. Which type of cache should you configure for each disk? To answer, drag the appropriate cache types to the correct disks. Each cache type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Cache Answer Area None Data disk: [________] Read-only Log disk: [________] Read/Write Write Accelerator
Correct Configuration in Answer Area: Data disk: [Read-only] Log disk: [None] Why Correct: Data disk (Read-only): Maximizes read throughput for SAP queries by caching reads, improving performance. Ensures write durability by bypassing cache, aligning with SQL Server and SAP reliability needs. Log disk (None): Guarantees write durability by committing directly to disk, minimizing data loss risk. Provides adequate write performance via premium SSDs, sufficient for SAP log workloads on D-series VMs. SAP on Azure Context: SQL Server is a supported database for SAP Business Suite (e.g., ECC), and Azure caching must align with SAP’s performance and consistency requirements. D-series VMs limit options (no Write Accelerator), making Read-only and None the optimal choices. AZ-120 Alignment: The exam tests disk configuration for SAP databases on Azure, emphasizing caching impacts on performance and durability.
39
You have an SAP on Azure production landscape that is hosted on Standard M-series virtual machines. You plan to expand the storage on the virtual machines. Which type of disk can be expanded without causing downtime? A. Ultra B. Standard SSD C. Premium SSD v2 D. Premium SSD v1
Correct Answer: D. Premium SSD v1 Why It’s Correct: Online Resizing: Premium SSD v1 (commonly just “Premium SSD”) supports expanding disk size without stopping the VM. This is done via the Azure portal (Disks > Size + performance > change size) or CLI (az disk update --size-gb), and the change takes effect immediately without detaching the disk. SAP Production Compatibility: Premium SSD v1 is the default high-performance disk type for SAP landscapes on Azure, certified for M-series VMs running SAP HANA or application servers. It’s widely used for data, log, and shared disks in SAP deployments. No Downtime: The ability to resize online ensures the SAP production landscape remains operational, critical for minimizing disruption in a production environment. AZ-120 Context: The exam tests knowledge of Azure storage options for SAP workloads, including operational features like resizing. Premium SSD v1’s online expansion is a key differentiator for production scenarios.
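For example, a Premium SSD data disk can be grown in place while the VM keeps running (resource names and sizes are assumptions; the partition and filesystem inside the guest must still be extended afterward, e.g., with xfs_growfs):

az disk update --resource-group sap-rg --name sapdata-disk01 --size-gb 2048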
40
You have an Azure subscription. The subscription contains a virtual machine named VM1 that runs SAP HANA and a user named User1. User1 is assigned the Virtual Machine Contributor role for VM1. You need to prevent User1 from placing VM1 in the Stopped (deallocated) state. User1 must be able to restart the operating system on VM1. What should you do? A. Create a resource lock on VM1. B. Assign an Azure Policy definition to the resource group that contains VM1. C. Assign User1 the Virtual Machine User Login role for VM1. D. Configure the Desired State Configuration (DSC) extension on VM1.
Correct Answer: A. Create a resource lock on VM1 Why It’s Correct: Prevents Deallocation: A ReadOnly lock on VM1 blocks User1 from performing the Stop (deallocated) action, as it’s a write operation on the VM resource. This overrides the Virtual Machine Contributor role’s permissions for deallocation (Microsoft.Compute/virtualMachines/deallocate/action). Allows OS Restart: A ReadOnly lock blocks Azure control-plane POST operations (including the portal’s Restart button), but User1 can still restart the operating system from inside the VM (e.g., shutdown -r now on Linux for SAP HANA), because in-guest restarts never pass through the Azure resource provider. This satisfies the requirement that User1 can restart the operating system on VM1. Simplicity: Applying a lock is a straightforward action in the Azure portal (VM > Locks > Add ReadOnly lock), requiring minimal effort compared to crafting a policy or reconfiguring roles. SAP HANA Context: Preventing deallocation is critical for SAP HANA VMs to maintain availability, while OS restarts are routine maintenance tasks. AZ-120 Alignment: The exam tests resource management for SAP on Azure, including locks to enforce operational controls.
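A minimal sketch of applying the lock from the CLI (the lock and resource-group names are assumptions):

az lock create --name no-deallocate --lock-type ReadOnly \
  --resource-group sap-rg --resource-name VM1 \
  --resource-type Microsoft.Compute/virtualMachines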
41
You have an instance of SAP HANA on Azure (Large Instances) named HLI1 that has storage volume snapshots enabled. You need to monitor the storage usage of HLI1. The solution must monitor the following: The number of stored snapshots The storage used by the snapshots Which Linux OS command should you use? A. hdbuserstore B. ls C. du D. sapcontrol
Correct Answer: C. du Why It’s Correct: Storage Usage: du directly measures the size of directories or files (e.g., du -sh /hana/snapshot/* totals snapshot storage), meeting the requirement to monitor storage used. Snapshot Count: While du alone doesn’t count files, combining it with ls or find (e.g., ls /hana/snapshot | wc -l) counts snapshots, fulfilling the requirement with basic Linux tools. HLI Relevance: On SAP HANA Large Instances, storage snapshots are managed at the volume level, and du can assess usage in the mounted file systems (e.g., /hana/data, /hana/log, or snapshot directories), assuming access is configured. AZ-120 Context: The exam tests practical Linux commands for SAP HANA administration on Azure, and du is a standard tool for storage monitoring, unlike SAP-specific utilities like hdbuserstore or sapcontrol for this purpose.
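A hedged example, assuming the NetApp-backed volume exposes snapshots under a hidden .snapshot directory (the exact path depends on the mount layout):

ls /hana/data/.snapshot | wc -l     # number of stored snapshots
du -sh /hana/data/.snapshot/*       # storage used by each snapshot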
42
This question requires that you evaluate the underlined text to determine if it is correct. When deploying SAP HANA to an Azure virtual machine, you can enable Write Accelerator to reduce the latency between the SAP application servers and the database layer. Instructions: Review the underlined text. If it makes the statement correct, select `No change is needed`. If the statement is incorrect, select the answer choice that makes the statement correct. A. No change is needed B. install the Mellanox driver C. start the NIPING service D. enable Accelerated Networking
Correct Answer: D. Enable Accelerated Networking Why This Is Correct: Reasoning: The statement’s phrasing—"reduce the latency between the SAP application servers and the database layer"—suggests network latency, which Write Accelerator does not address. Write Accelerator reduces disk write latency on the SAP HANA VM, not the communication latency between components. Enable Accelerated Networking corrects the statement by replacing Write Accelerator with a feature that actually reduces network latency between the application servers and the database layer. Accelerated Networking is supported on Azure VMs (e.g., M-series for SAP HANA) and is a valid optimization for SAP workloads, aligning with AZ-120 exam objectives.
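For reference, Accelerated Networking is a NIC-level setting; a minimal sketch with assumed resource names (the VM size and OS image must support the feature):

az network nic update --resource-group sap-rg --name appserver1-nic \
  --accelerated-networking true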
43
You have an SAP landscape on Azure that contains the virtual machines shown in the following table. You need to ensure that the Application Server role is available if a single Azure datacenter fails. Name Role Azure Availability Zone in East US SAPAPP1 Application Server Zone 1 SAPAPP2 Application Server Zone 2 What should you include in the solution? A. Azure Basic Load Balancer B. Azure Load Balancer Standard C. Azure Virtual WAN D. Azure Application Gateway v1
Correct Answer: B. Azure Load Balancer Standard Why This Is Correct: Reasoning: Azure Load Balancer Standard is the appropriate solution for ensuring high availability of the SAP Application Server role across Availability Zones in East US. It supports zone-redundant operation, meaning it can distribute traffic to SAPAPP1 (Zone 1) and SAPAPP2 (Zone 2) and automatically failover to the surviving VM if one zone’s datacenter fails. This meets the requirement of availability despite a single datacenter failure. For SAP landscapes, Standard Load Balancer is commonly used to provide HA for application servers, as outlined in Azure’s SAP deployment documentation and AZ-120 exam objectives.
44
You have an SAP landscape on Azure. You deploy an SAP Web Dispatcher named web1. You need to confirm that web1 can support 1,500 users. What should you use? A. Apache JMeter B. Iometer C. ABAPMeter D. FIO
Correct Answer: A. Apache JMeter Why It’s Correct: Load Testing Capability: JMeter simulates 1,500 concurrent users by generating HTTP/HTTPS requests to web1, directly testing its capacity to handle the specified load. SAP Web Dispatcher Relevance: As a web proxy, web1’s performance is measured by its ability to route user requests efficiently, which JMeter can validate through metrics like throughput and response time. Azure and SAP Fit: JMeter is platform-agnostic and commonly used in Azure SAP deployments to stress-test web components, ensuring they meet scalability requirements.
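A hedged example of a headless JMeter run; the .jmx test plan (which would define the 1,500 concurrent users against web1’s URL) is a hypothetical file:

jmeter -n -t webdispatcher-1500users.jmx -l results.jtl -e -o ./report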
45
You have an SAP landscape on Azure that contains the virtual machines shown in the following table. You need to ensure that the Application Server role is available if a single Azure datacenter fails. Name Role Azure Availability Zone in East US SAPAPP1 Application Server Zone 1 SAPAPP2 Application Server Zone 2 What should you include in the solution? A. Azure Virtual WAN B. Azure Basic Load Balancer C. Azure Application Gateway v2 D. Azure AD Application Proxy
Final Answer: B. Azure Basic Load Balancer None of the provided options is a perfect fit, but the closest and most appropriate choice from the list, based on standard Azure architecture for high availability, is B. Azure Basic Load Balancer, since a load balancer is the component that keeps the Application Server role reachable across the two zones. Why B Is the Closest (but Imperfect) Answer: Azure Basic Load Balancer is the only option related to load balancing, which is the key component of ensuring availability. In a real-world SAP on Azure deployment, however, you would use an Azure Standard Load Balancer, because: It supports Availability Zones (the Basic SKU does not), ensuring traffic can be routed to the surviving VM if one zone fails. It provides a higher level of reliability and scalability than the Basic SKU. SAP landscapes on Azure typically require zone-redundant load balancing for high availability, as per Microsoft’s best practices.
46
HOTSPOT You have an SAP landscape on Azure. You plan to deploy a new SAP application server by using an Azure Resource Manager template. You need to ensure that all new servers are deployed with Azure Disk Encryption enabled. How should you complete the relevant component of the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. "resources": [ { "type": "Microsoft.Compute/virtualMachines/", "name": "[concat(parameters('vmName'),'/DiskEncryption')]", "location": "[parameters('location')]", "apiVersion": "2017-03-30", "properties": { "publisher": "Microsoft.Azure.Security", "type": "", "typeHandlerVersion": "2.2", "autoUpgradeMinorVersion": true, "forceUpdateTag": "2", "settings": { "EncryptionOperation": "EnableEncryption", "KeyVaultURL": "[reference(parameters('keyVaultResourceID'),'2016-10-01').vaultUri]", "KeyVaultResourceId": "[parameters('keyVaultResourceID')]", "KeyEncryptionKeyURL": "[parameters('keyEncryptionKeyURL')]", "KeyEncryptionAlgorithm": "RSA-OAEP", "VolumeType": "All", "ResizeOSDisk": false } } } ] Dropdown options for "type": "Disk" "KeyVault" "Extensions" "AzureDiskEncryption" Dropdown Selection1: Dropdown Selection2:
Final Answers: Dropdown Selection1: extensions Full type: "Microsoft.Compute/virtualMachines/extensions" Dropdown Selection2: AzureDiskEncryption Extension type: "AzureDiskEncryption" Why Correct: Dropdown Selection1 (extensions): ADE is deployed as a VM extension, not a standalone resource. The type "Microsoft.Compute/virtualMachines/extensions" correctly nests the extension under the VM, aligning with ARM template syntax for enabling features like encryption. The "name" concatenation (e.g., "VM1/DiskEncryption") and "properties" section (extension settings) confirm this is an extension resource. Dropdown Selection2 (AzureDiskEncryption): "AzureDiskEncryption" is the official extension type for enabling ADE on Windows VMs, as published by "Microsoft.Azure.Security". It matches the settings (e.g., "EncryptionOperation": "EnableEncryption", Key Vault integration). For Linux (possible in SAP landscapes), it would be "AzureDiskEncryptionForLinux", but the options list only "AzureDiskEncryption", suggesting a Windows-focused scenario typical for SAP app servers. SAP on Azure: Ensuring disk encryption for new SAP application servers enhances security, a common requirement in production landscapes, and ADE is a supported feature tested in AZ-120.
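For comparison, the same result can be achieved imperatively on an existing VM with the CLI (a hedged sketch; resource names are assumptions):

az vm encryption enable --resource-group sap-rg --name sapapp1 \
  --disk-encryption-keyvault sap-kv --volume-type All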
47
You plan to deploy an SAP production landscape that uses SAP HANA databases on Azure. You need to configure the storage infrastructure to support the SAP HANA deployment. The solution must meet the SAP issued requirements for data throughput and I/O. How should you configure the storage? A. RAID 1 B. RAID 5 C. RAID 6 D. RAID 0
Final Answer: D. RAID 0 Correct Solution: For SAP HANA on Azure, Microsoft and SAP recommend using RAID 0 (striping) for the storage configuration of the /hana/data and /hana/log volumes when combining multiple disks. RAID 0 stripes data across multiple disks to maximize throughput and IOPS by aggregating the performance of individual disks, which is critical for SAP HANA’s high-performance requirements. Azure’s virtual machines (e.g., M-series VMs certified for SAP HANA) typically use Premium SSDs or Ultra Disks, and RAID 0 is implemented via a software RAID (e.g., Linux LVM or mdadm) to combine these disks into a single logical volume. SAP HANA does not rely on RAID for redundancy at the storage level because: High availability is achieved through SAP HANA’s own mechanisms (e.g., system replication or host auto-failover). Azure provides redundancy at the infrastructure level (e.g., Locally Redundant Storage or Zone-Redundant Storage). Thus, the focus for SAP HANA storage configuration is on performance (throughput and I/O) rather than redundancy, making RAID 0 the appropriate choice. Why D is Correct: SAP’s requirements for HANA on Azure, as outlined in SAP Note 1943937 (HANA Guidelines for Azure) and Microsoft’s AZ-120 exam objectives, emphasize storage performance over redundancy at the disk level. For example: The /hana/data volume requires high read/write throughput for in-memory data persistence. The /hana/log volume requires low-latency, high-IOPS storage for transaction logs, often paired with Azure Write Accelerator on M-series VMs. RAID 0 is the standard configuration when using multiple Azure Premium SSDs or Ultra Disks to achieve these performance targets. For instance, Microsoft’s SAP HANA on Azure deployment guides recommend combining 2-4 Premium SSDs in a RAID 0 array using LVM or mdadm to meet SAP’s KPIs (e.g., 250 MB/s throughput for /hana/data and 100 MB/s for /hana/log).
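A minimal sketch of striping two attached Premium SSDs with mdadm (device names and the mount point are assumptions; LVM striping is the common alternative):

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.xfs /dev/md0          # create a filesystem on the striped array
mount /dev/md0 /hana/data  # mount it for the HANA data volume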
48
HOTSPOT You need to provide the Azure administrator with the values to complete the Azure Resource Manager template. Which values should you provide for diskCount, StorageAccountType, and domainName? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer Area diskCount: 0 1 2 4 storageAccountType: Premium_LRS Standard_GRS Standard_ZRS domainName: ad.contoso.com ad.contoso.onmicrosoft.com contoso.com contoso.onmicrosoft.com
Box 1: 4 Scenario: the Azure Resource Manager template that will be used to provision the production application servers. Ensure that each production application server has four 1-TB data disks. Box 2: Premium_LRS Of the listed options, only Premium_LRS is valid: managed disks attached to virtual machines do not support the Standard_GRS or Standard_ZRS account types, and SAP production application servers require Premium storage to remain within SAP support guidelines. (The goal to minimize costs would otherwise suggest Standard storage, but it is neither offered among the options nor supported for this scenario.) Box 3: contoso.onmicrosoft.com The network contains an on-premises Active Directory domain named ad.contoso.com. The initial domain is the default domain (onmicrosoft.com) in the Azure AD tenant, for example, contoso.onmicrosoft.com.
49
You have an Azure subscription and an Enterprise Agreement (EA). You plan to deploy an SAP on Azure production landscape that will contain the following virtual machines: One M-series virtual machine with 128 cores 15 E-series virtual machines with a total of 300 cores 10 D-series virtual machines with a total of 160 cores During the deployment of the E-series virtual machines, you receive the following error message. Operation results in exceeding quota limits of Core. You need to ensure you can complete the E-series virtual machine deployment. The solution must meet the following requirements: Maintain the performance of the SAP landscape. Minimize administrative effort. Minimize costs. What should you do? A. Convert the subscription to Pay-As-You-Go (PAYG). B. Create a second subscription and split the virtual machines evenly between both subscriptions. C. Resize the D-series and E-series virtual machines. D. Request a quota increase for the Azure region.
Final Answer: D. Request a quota increase for the Azure region Why D is Correct: Performance: Keeps the original VM sizes (M-series, E-series, D-series) intact, ensuring the SAP landscape meets its performance needs (e.g., SAP HANA on M-series and application servers on E-series/D-series). Administrative Effort: Requesting a quota increase is a straightforward process in Azure (via the "Quotas" section of the portal or a support ticket), requiring less effort than resizing VMs or managing multiple subscriptions. Costs: Stays within the EA pricing model, avoiding the higher rates of PAYG or the complexity of additional subscriptions.
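Before filing the request, the current regional vCPU consumption and limits can be checked per VM family (region assumed):

az vm list-usage --location eastus --output table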
50
You plan to implement a highly available SAP HANA deployment by using two Azure virtual machines that run SUSE Linux Enterprise Server (SLES). You need to create an Azure Fence agent STONITH block device (SBD). What should you do first? A. Create a storage account. B. Create a system-assigned managed identity. C. Create an application registration in Azure Active Directory (Azure AD). D. Create a user-assigned managed identity.
Final Answer: A. Create a storage account Why A is Correct: The Azure Fence agent for SBD in an SAP HANA HA deployment on Azure relies on Azure Blob Storage as the fencing mechanism, replacing traditional on-premises SBD hardware. According to Microsoft’s official documentation for SAP HANA high availability on Azure (e.g., “High availability of SAP HANA on Azure VMs” and SUSE’s SLES HA guides), the process begins with: Creating a storage account (typically a general-purpose v2 account). Setting up a blob container and a small blob (e.g., 1 MB) to act as the SBD. Configuring the cluster nodes (via Pacemaker) to use the Azure Fence agent, which requires access to the blob (often via a managed identity or SAS token).
51
You are deploying SAP on Azure. The database server will use SAP HANA. The application servers will run Windows Server. You need to test network latency and throughput between the frontend SAP servers and the database servers. Which three tools can you use to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. HCMT B. SockPerf C. IOMeter D. NIPING E. latte.exe
Correct Answers: A. HCMT D. NIPING E. latte.exe Why Correct: A. HCMT: Validates network performance for SAP HANA deployments, including latency and throughput between HANA and app servers. SAP-specific and Azure-supported, ensuring compliance with HANA KPIs (e.g., <1 ms latency for app-to-DB). D. NIPING: SAP’s dedicated network testing tool, perfect for measuring latency and throughput in an SAP landscape (e.g., Windows app servers to HANA). Simple to use (e.g., niping -s on HANA, niping -c on app server) and tailored for SAP protocols. E. latte.exe: Microsoft’s Azure-focused tool for Windows, measures latency and throughput effectively (e.g., latte.exe -c on app server, latte.exe -s or nc on HANA). Lightweight and practical for cross-platform testing in Azure.
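Hedged usage sketches (host names, ports, buffer sizes, and iteration counts are assumptions):

niping -s                                  # on the HANA server: start the niping echo server
niping -c -H hana-host -B 100000 -L 100    # on the Windows app server: measure latency/throughput
# latte.exe (Windows-to-Windows) follows the same sender/receiver pattern:
latte -a 10.0.2.4:5005 -i 65100            # on the receiver
latte -c -a 10.0.2.4:5005 -i 65100         # on the sender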
52
HOTSPOT You have an instance of SAP HANA on Azure (Large Instances) named HLI1. You plan to deploy Azure virtual machines. The virtual machines will host application servers that will access the database on HLI1. You need to minimize latency between the application servers and HLI1. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area Configure the virtual machines: In a single proximity placement group To each have two network adapters To use increased maximum transmission unit (MTU) sizes Configure the connection between HLI1 and the application servers to use: Bidirectional Forwarding Detection (BFD) ExpressRoute Direct ExpressRoute FastPath
Final Answers: Configure the virtual machines: In a single proximity placement group Configure the connection between HLI1 and the application servers to use: ExpressRoute FastPath Correct Answer: In a single proximity placement group Explanation: In a single proximity placement group (PPG): Correct. A proximity placement group ensures that Azure VMs are physically located as close as possible to each other and to specific Azure resources (like HLI) within the same data center or region. For SAP HANA on Azure, placing the application server VMs in a PPG aligned with the HLI instance minimizes network latency by reducing the physical distance between the VMs and the HLI bare-metal server. This is a recommended practice in Microsoft’s SAP on Azure documentation for latency-sensitive workloads. To each have two network adapters: Incorrect. Adding multiple network adapters might improve throughput or provide redundancy, but it doesn’t inherently minimize latency. Latency is more about physical proximity and network path efficiency than the number of adapters. To use increased maximum transmission unit (MTU) sizes: Incorrect. Increasing the MTU size (e.g., enabling jumbo frames) can improve throughput by allowing larger packets, potentially reducing overhead. However, it doesn’t directly address latency, which is the time it takes for data to travel between the VMs and HLI. This is a secondary optimization, not the primary latency solution. Correct Answer: ExpressRoute FastPath Explanation: Bidirectional Forwarding Detection (BFD): Incorrect. BFD is a protocol used to detect network failures quickly, improving failover times in routing scenarios (e.g., with ExpressRoute). While it enhances reliability, it doesn’t directly reduce latency for data transfer between HLI and VMs. ExpressRoute Direct: Incorrect. ExpressRoute Direct is a premium ExpressRoute option that provides dedicated, physical port connectivity (e.g., 10 Gbps or 100 Gbps) between on-premises infrastructure and Azure. However, HLI and the VMs are both within Azure, and HLI is already connected to the Azure backbone via ExpressRoute-like connectivity by default. ExpressRoute Direct is irrelevant here as it’s for on-premises-to-Azure scenarios, not intra-Azure latency optimization. ExpressRoute FastPath: Correct. ExpressRoute FastPath is an Azure feature that reduces network latency by bypassing the ExpressRoute gateway for traffic between Azure VMs and resources like HLI. Normally, traffic from VMs to HLI might pass through a virtual network gateway, adding latency. FastPath optimizes the network path, sending traffic directly over the Azure backbone, which is critical for minimizing latency in SAP HANA deployments.
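A minimal sketch of creating the proximity placement group and deploying a VM into it (names, region, image alias, and VM size are assumptions):

az ppg create --resource-group sap-rg --name sap-ppg --location eastus
az vm create --resource-group sap-rg --name sap-app1 \
  --image SuseSles15SP5 --size Standard_E16ds_v5 \
  --ppg sap-ppg --admin-username azureuser --generate-ssh-keys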
53
You plan to migrate an SAP environment to Azure. You need to create a design to facilitate end-user access to SAP applications over the Internet, while restricting user access to the virtual machines of the SAP application servers. What should you include in the design? A. Configure a public IP address for each SAP application server B. Deploy an internal Azure Standard Load Balancer for incoming connections C. Use an SAP Web Dispatcher to route all incoming connections D. Configure point-to-site VPN connections for each user
Correct Answer: C. Use an SAP Web Dispatcher to route all incoming connections Why it’s correct: The SAP Web Dispatcher acts as an intermediary between end users and the SAP application servers, ensuring that users access SAP applications (e.g., via web protocols) without needing direct access to the VMs. It can be deployed in a secure Azure architecture, such as behind an Azure Application Gateway or with a public IP, while the SAP application servers remain in a private subnet with no public IPs. This aligns with SAP’s recommended architecture for exposing applications to the Internet and Azure’s security best practices (e.g., using a DMZ or front-end proxy). For the AZ-120 exam, this option demonstrates an understanding of application-level routing and network security, key topics in SAP on Azure administration.
53
You have a two-node SAP HANA cluster that is hosted on Azure virtual machines. Each cluster node uses Azure NetApp Files to store database files. The nodes replicate synchronously by using HANA system replication. You need to implement a backup solution for the HANA databases. The solution must meet the following requirements: Be cluster aware. Support the use of snapshots. Ensure that backups are application consistent. What should you include in the solution? A. AzAcSnap B. the Az.NetAppFiles PowerShell module C. Microsoft Azure Backup Server (MABS) D. the azure_hana_backup command
Final Answer: A. AzAcSnap Why A is Correct: AzAcSnap is purpose-built for SAP HANA on Azure and addresses all the specified requirements: Cluster awareness: It works with SAP HANA HA setups, including two-node clusters with synchronous replication via HSR. It can coordinate with both the primary and secondary nodes to ensure proper backup handling. Snapshot support: It leverages Azure NetApp Files’ snapshot technology, which is fast and efficient, minimizing downtime and storage overhead. Application consistency: AzAcSnap interfaces with SAP HANA to create a savepoint or flush the database to disk, ensuring the snapshot captures a consistent state of the database. Microsoft’s documentation for SAP HANA on Azure (e.g., “Back up SAP HANA databases on Azure VMs”) and the AZ-120 exam objectives highlight AzAcSnap as the preferred tool for snapshot-based, application-consistent backups when using ANF. For example, it runs a sequence like: Connects to SAP HANA via its backup interface. Quiesces the database to ensure consistency. Triggers an ANF snapshot. Releases the database to resume operations. This process is ideal for the described setup, as it integrates seamlessly with both SAP HANA and Azure NetApp Files.
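A hedged example of an on-demand run, assuming azacsnap has already been configured with connection details for the HANA instance and the ANF volumes (the prefix and retention count are examples):

azacsnap -c backup --volume data --prefix hana-daily --retention 7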
54
HOTSPOT You have an on-premises SAP environment. Backups are performed by using tape backups. There are 50 TB of backups. A Windows file server has BMP images of checks used by SAP Finance. There are 9 TB of images. You need to recommend a method to migrate the images and the tape backups to Azure. The solution must maintain continuous replication of the images. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Tape backups: AzCopy Azure File Sync Azure Data Box Azure Storage Explorer File server: AzCopy Azure File Sync Azure Data Box Azure Storage Explorer
Final Answer for Hotspot: Tape backups: Azure Data Box File server: Azure File Sync Correct Answer: Azure Data Box Why it’s correct: The 50 TB of tape backups represents a large, offline dataset. Tape backups need to be extracted to a staging area (e.g., a local server or disk), then transferred to Azure. Azure Data Box is designed for such scenarios, allowing you to ship the data to an Azure data center for upload to Blob Storage or Azure Files. One Data Box can handle up to 80 TB, so 50 TB fits within a single device. Continuous replication isn’t required for backups, so a one-time transfer solution like Data Box is ideal. For AZ-120, Data Box is a key service for large-scale data migration, aligning with exam objectives. Correct Answer: Azure File Sync Why it’s correct: The Windows file server hosts 9 TB of BMP images, and the solution requires continuous replication to Azure. Azure File Sync is purpose-built for this: it syncs files from an on-premises Windows file server to Azure Files, providing real-time replication after the initial upload. This ensures that changes to the on-premises images (e.g., new checks added by SAP Finance) are reflected in Azure. The initial 9 TB can be uploaded efficiently (e.g., over the Internet or seeded with Data Box), and then Azure File Sync takes over for ongoing sync. This hybrid approach is a common pattern in Azure for file-based workloads and is tested in AZ-120.
55
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an SAP production landscape on-premises and an SAP development landscape on Azure. You deploy a network virtual appliance to act as a firewall between the Azure subnets and the on-premises network. You need to ensure that all traffic is routed through the network virtual appliance. Solution: You deploy an Azure Standard Load Balancer. Does this meet the goal? A. Yes B. No
Final Answer: B. No Why the Solution Fails: Traffic Routing: The Standard Load Balancer does not modify routing tables or force traffic from Azure subnets to the on-premises network (or vice versa) to go through the NVA. Without UDRs, traffic might bypass the NVA entirely, depending on Azure’s default routing behavior. Firewall Functionality: The NVA is acting as a firewall, implying it must inspect and filter all traffic. A load balancer doesn’t ensure this; it’s meant to distribute traffic, not enforce a mandatory path through a single choke point. Correct Approach: To achieve the goal, you would typically: Deploy the NVA in a dedicated subnet. Configure UDRs on the Azure subnets to route traffic (e.g., 0.0.0.0/0 or specific CIDR ranges) to the NVA’s internal IP. Optionally, use a Standard Load Balancer if you have multiple NVA instances for HA, but this is secondary to routing.
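A minimal sketch of the UDR approach described above (names, the subnet, and the NVA’s IP address are assumptions):

az network route-table create --resource-group sap-rg --name rt-via-nva
az network route-table route create --resource-group sap-rg \
  --route-table-name rt-via-nva --name default-via-nva \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4
az network vnet subnet update --resource-group sap-rg \
  --vnet-name sap-vnet --name dev-subnet --route-table rt-via-nva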
56
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Statements Yes No Enabling Accelerated Networking on an SAP application server will decrease CPU usage. ( ) ( ) Enabling Accelerated Networking on an SAP application server will increase jitter. ( ) ( ) You can enable Accelerated Networking on any Azure virtual machine. ( ) ( )
Correct Answers: Statement 1: Yes (Enabling Accelerated Networking on an SAP application server will decrease CPU usage. Accelerated Networking uses SR-IOV to bypass the host’s virtual switch, offloading packet processing from the CPU.) Statement 2: No (Enabling Accelerated Networking on an SAP application server will not increase jitter; by shortening the data path it reduces latency and jitter.) Statement 3: No (You cannot enable Accelerated Networking on any Azure virtual machine; it is supported only on specific VM sizes and supported operating system images.)
57
HOTSPOT Your on-premises network contains SAP and non-SAP applications. You have JAVA-based SAP systems that use SPNEGO for single sign-on (SSO) authentication. Your external portal uses multi-factor authentication (MFA) to authenticate users. You plan to extend the on-premises authentication features to Azure and to migrate the SAP applications to Azure. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Statements Yes No Azure Active Directory (Azure AD) pass-through authentication can be used to enable MFA for on-premises users. ( ) ( ) Azure Active Directory (Azure AD) password hash synchronization ensures that users can use their on-premises credentials to authenticate to ABAP-based SAP systems on Azure. ( ) ( ) Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users. ( ) ( )
Final Answers: Statement 1: Yes - Azure AD pass-through authentication can be used to enable MFA for on-premises users. Statement 2: No - Azure AD password hash synchronization does not ensure users can authenticate to ABAP-based SAP systems on Azure with on-premises credentials. Statement 3: Yes - Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users. Statement 1: "Azure Active Directory (Azure AD) pass-through authentication can be used to enable MFA for on-premises users." Answer: Yes Explanation: Azure AD Pass-through Authentication (PTA) allows users to sign in to Azure AD-integrated applications using their on-premises Active Directory (AD) credentials. It works by installing an agent on an on-premises server that validates credentials directly against the on-premises AD, bypassing the need to store password hashes in Azure AD. MFA Integration: Azure AD supports Multi-Factor Authentication (MFA) as an additional security layer for any authentication method, including PTA. When PTA is configured, on-premises users can authenticate to Azure AD (e.g., for cloud-based resources or migrated SAP applications), and MFA can be enforced via Azure AD Conditional Access policies. Context: The statement specifies "on-premises users," meaning users with on-premises AD accounts. PTA enables these users to access Azure resources with their existing credentials, and MFA can be applied in Azure AD, fulfilling the goal of extending authentication features to Azure. Why Yes: PTA supports seamless authentication for on-premises users to Azure, and Azure AD MFA can be layered on top, making this statement true. Statement 2: "Azure Active Directory (Azure AD) password hash synchronization ensures that users can use their on-premise credentials to authenticate to ABAP-based SAP systems on Azure." Answer: No Explanation: Azure AD Password Hash Synchronization (PHS) syncs password hashes from on-premises AD to Azure AD using Azure AD Connect. This allows users to use the same credentials (username and password) for both on-premises and Azure AD-integrated applications. ABAP-based SAP Systems: SAP systems running on the ABAP stack (e.g., SAP ECC, S/4HANA) typically use their own user management (e.g., SU01) or integrate with on-premises AD via LDAP or Kerberos (e.g., SPNEGO for SSO). However, ABAP systems do not natively integrate with Azure AD for direct authentication using synced password hashes. Instead, they rely on protocols like Kerberos or SAML, which require additional configuration (e.g., AD FS or SAP SSO setup). Context: The scenario mentions JAVA-based SAP systems using SPNEGO (Kerberos-based SSO), but this statement refers to ABAP-based systems. PHS enables credential reuse for Azure AD-integrated apps (e.g., Office 365), but ABAP systems on Azure would typically authenticate against on-premises AD or a federated identity provider (e.g., AD FS), not directly against Azure AD password hashes without custom integration. Why No: PHS does not natively enable ABAP-based SAP systems to authenticate users using Azure AD-synced credentials. Additional setup (e.g., federation or SAP-specific SSO configuration) is required, making this statement false. Statement 3: "Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users." Answer: Yes Explanation: Active Directory Federation Services (AD FS) is an on-premises identity provider that federates with Azure AD (or other systems) to provide SSO and authentication services. 
It uses protocols like SAML, WS-Federation, or OpenID Connect to enable users to access cloud resources with their on-premises AD credentials. MFA Integration: AD FS supports MFA natively (e.g., via Azure MFA Adapter or third-party MFA providers). When configured, AD FS can enforce MFA for on-premises users accessing federated applications, including those in Azure. This extends MFA to users with on-premises AD accounts, aligning with the goal of extending authentication features to Azure. Context: The external portal already uses MFA, and AD FS can integrate with Azure AD to provide a consistent MFA experience for on-premises users accessing migrated SAP applications or other Azure resources. Why Yes: AD FS can enforce MFA for on-premises users authenticating to Azure or other federated systems, making this statement true.
58
You have an Azure virtual machine that runs SUSE Linux Enterprise Server (SLES). The virtual machine hosts a highly available deployment of SAP HANA. You need to validate whether Accelerated Networking is operational for the virtual machine. What should you use? A. ethtool B. netsh C. iometer D. fio
Final Answer: A. ethtool Why it’s correct: On a Linux VM such as SLES, ethtool is the standard tool to inspect network interface details. To validate Accelerated Networking, you can run commands such as: ethtool -i eth0 to check the bound driver (expecting hv_netvsc, with a Mellanox virtual function exposed for SR-IOV). ethtool -S eth0 to confirm that the vf_* counters increase under load, which shows traffic is flowing over the accelerated (SR-IOV) path. Microsoft’s Azure documentation for Linux VMs (including SLES) recommends ethtool to confirm Accelerated Networking status, making it the most precise and relevant choice. For SAP HANA’s high-availability setup, ensuring Accelerated Networking is operational is critical for low-latency networking, and ethtool provides the necessary visibility into the NIC configuration. This aligns with AZ-120 exam objectives around VM networking and troubleshooting.
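A minimal validation sketch from inside the VM, assuming the interface is named eth0 (the name may differ on your system):

# Check the driver bound to the synthetic interface (expect hv_netvsc on Azure)
ethtool -i eth0

# With Accelerated Networking enabled, a Mellanox virtual function should be visible
lspci | grep -i mellanox

# vf_* counters that increase under load confirm traffic is using the SR-IOV path
ethtool -S eth0 | grep -i vf_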
59
DRAG DROP You are validating SLES 15 for SAP Applications 15 running SAP HANA on Azure (Large Instances) deployment. You need to ensure that sapconf is installed and the kernel parameters are set appropriately for the active profile. How should you complete the commands? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Values sap-ase sap-bobj sapconf sap-hana sap-netweaver saptune tuned Answer Area osprompt> more /etc/sysconfig/ [ Value ] osprompt> more /usr/lib/tuned/ [ Value ] /tuned.conf
Final Answers: osprompt> more /etc/sysconfig/[Value]: sapconf osprompt> more /usr/lib/tuned/[Value]/tuned.conf: sap-hana Why These Are Correct: sapconf for /etc/sysconfig/: The question explicitly asks to ensure sapconf is installed, and checking /etc/sysconfig/sapconf verifies its configuration file exists, which is a reasonable step in a validation process. While sapconf is legacy in SLES 15, it might still be present or referenced in older deployment guides or mixed environments, and the question’s phrasing suggests it’s part of the scope. sap-hana for /usr/lib/tuned/: For SAP HANA on SLES 15, saptune applies the sap-hana profile, which is stored as /usr/lib/tuned/sap-hana/tuned.conf. This file contains the kernel parameters (e.g., vm.max_map_count, kernel.shmmni) optimized for SAP HANA, meeting the requirement to ensure parameters are set appropriately for the active profile.
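A hedged validation sketch on the SLES 15 host, using the paths from the question (the active profile may differ per deployment, and newer saptune versions no longer rely on tuned):

# Confirm the sapconf package is installed
rpm -q sapconf

# Review the sapconf settings
more /etc/sysconfig/sapconf

# Show the active tuned profile (expect sap-hana on a HANA system)
tuned-adm active

# Inspect the kernel parameters defined by the sap-hana profile
more /usr/lib/tuned/sap-hana/tuned.conf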
60
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Litware, Inc. is an international manufacturing company that has 3,000 employees. Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain. Existing Environment Infrastructure Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL. The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure. Litware already implemented a site-to-site VPN connection between the on-premises network and Azure. SAP Environment Litware currently runs the following SAP products: Enhancement Pack6 for SAP ERP Central Component 6.0 (SAP ECC 6.0) SAP Extended Warehouse Management (SAP EWM) SAP Supply Chain Management (SAP SCM) SAP NetWeaver Process Integration (PI) SAP Business Warehouse (SAP BW) SAP Solution Manager All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers. You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers. Currently, all SAP applications are in the litware.com domain. Problem Statements The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform. Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations. Requirements Business Goals Litware identifies the following business goals: Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server. Move away from the current infrastructure vendor to increase the stability and availability of the SAP services. 
Use the new Environment, Health and Safety (EH&S) in Recipe Management function. Ensure that any migration activities can be completed within a 48-hour period during a weekend. Planned Changes Litware identifies the following planned changes: Migrate SAP to Azure. Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8. Technical Requirements Litware identifies the following technical requirements: Implement automated backups. Support load testing during the migration. Identify opportunities to reduce costs during the migration. Continue to use the litware.com domain for all SAP landscapes. Ensure that all SAP applications and databases are highly available. Establish an automated monitoring solution to avoid unplanned outages. Remove all SAP components from the on-premises network once the migration is complete. Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased. Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure. You are evaluating which migration method Litware can implement based on the current environment and the business goals. Which migration method will cause the least amount of downtime? A. Migrate SAP ECC to SAP Business Suite in HANA, and then migrate SAP to Azure. B. Use Near-Zero Downtime (NZDT) to migrate to SAP HANA and Azure during the same maintenance window. C. Use the Database Migration Option (DMO) to migrate to SAP HANA and Azure during the same maintenance window. D. Migrate SAP to Azure, and then migrate SAP ECC to SAP Business Suite on HANA.
Final Answer C. Use the Database Migration Option (DMO) to migrate to SAP HANA and Azure during the same maintenance window Why DMO to SAP HANA and Azure in One Window Is Correct SAP Database Migration Option (DMO): DMO, part of the Software Update Manager (SUM), combines system updates (e.g., upgrading SAP ECC to Business Suite on HANA EHP8) with database migration (e.g., from SQL Server to SAP HANA). DMO with System Move: Allows migration to a new target environment (Azure) during the process, exporting data from on-premises and importing it into Azure VMs. Single Maintenance Window: DMO can perform the database migration (SQL Server to HANA) and infrastructure migration (on-premises to Azure) in one step, minimizing downtime by streamlining the process. Downtime Minimization: DMO optimizes downtime using techniques like parallel export/import and incremental data transfer, allowing much of the data to be exported while the source system is still running. The final cutover (stopping the source system, final sync, and starting the target) fits within a 48-hour weekend window, especially for SAP ECC (database size not specified but manageable with proper planning). Compared to multi-step migrations, DMO reduces cumulative downtime by combining upgrade and migration tasks. Alignment with Requirements: Performance: Moves ECC to SAP Business Suite on HANA, addressing the 8-hour batch issue. Azure Migration: Completes the move to Azure VMs, replacing the third-party vendor. 48-Hour Window: Feasible with DMO’s efficiency, as validated by SAP and Microsoft for SAP workloads. Cost: Uses existing HANA licenses, avoiding additional purchases. HA and Monitoring: Azure supports HA (e.g., Availability Zones) and monitoring (Azure Monitor), applied post-migration.
61
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an SAP production landscape on-premises and an SAP development landscape on Azure. You deploy a network virtual appliance to act as a firewall between the Azure subnets and the on-premises network. You need to ensure that all traffic is routed through the network virtual appliance. Solution: You configure a user-defined route table. Does this meet the goal? A. Yes B. No
Final Answer: A. Yes Why the Solution Succeeds: Traffic Routing Control: UDRs provide granular control over network traffic, allowing you to mandate that all traffic between Azure subnets and the on-premises network passes through the NVA. For instance, you can set the NVA as the next hop for traffic destined to the on-premises IP range and ensure return traffic is handled appropriately (e.g., via NVA configuration or gateway routing). Firewall Functionality: The NVA acts as a firewall, inspecting and filtering traffic. UDRs ensure that no traffic bypasses the NVA, fulfilling the requirement. Standard Practice: This is a common Azure networking pattern for SAP deployments, as outlined in Microsoft’s SAP on Azure documentation and the AZ-120 exam objectives. For example, the NVA is typically deployed in a dedicated subnet, and UDRs are applied to adjacent subnets to enforce the traffic path.
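A minimal Azure CLI sketch of the pattern, assuming hypothetical names (rg-sap, vnet-sap, subnet-app), an on-premises prefix of 10.0.0.0/16, and an NVA at 10.1.2.4:

# Create the user-defined route table
az network route-table create -g rg-sap -n rt-via-nva

# Send on-premises-bound traffic to the NVA as the next hop
az network route-table route create -g rg-sap --route-table-name rt-via-nva \
  -n to-onprem --address-prefix 10.0.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.2.4

# Associate the route table with the SAP subnet
az network vnet subnet update -g rg-sap --vnet-name vnet-sap -n subnet-app \
  --route-table rt-via-nva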
62
HOTSPOT You have an SAP production landscape on Azure that contains the virtual machines shown in the following table.
Name | Location | Application
HANA1 | East US | SAP HANA 2.0
HANA2 | East US | SAP HANA 2.0
HANA3 | South Central US | SAP HANA 2.0
App1 | East US | SAP Web Dispatcher
App2 | East US | SAP Web Dispatcher
You configure HANA system replication as shown in the following table.
Source | Destination | Mode
HANA1 | HANA2 | Sync
HANA2 | HANA3 | Sync
You configure two load balancers as shown in the following table.
Name | Location | Type | Pool
LB1 | East US | Standard | HANA1, HANA2
LB2 | East US | Basic | App1, App2
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements Yes No HANA2 and HANA3 are in a supported configuration. ( ) ( ) App1 and App2 are in a supported configuration. ( ) ( ) Azure Site Recovery is in a supported configuration for App1 and App2 to fail over to the South Central US Azure region. ( ) ( )
Final Answers for Hotspot: HANA2 and HANA3 are in a supported configuration: No App1 and App2 are in a supported configuration: Yes Azure Site Recovery is in a supported configuration for App1 and App2 to fail over to the South Central US Azure region: Yes Summary of Reasoning: No for HANA2 and HANA3: Sync-mode replication across regions (East US to South Central US) is unsupported due to latency constraints, a key point in AZ-120’s coverage of SAP HANA replication rules. Yes for App1 and App2: Deploying load-balanced Web Dispatchers in the same region behind a Basic SKU load balancer is a supported setup, aligned with Azure SAP architecture. Yes for ASR: ASR supports VM failover across regions for SAP components such as Web Dispatcher, a key DR concept in AZ-120, although additional DR configuration is required.
63
DRAG DROP Your on-premises network contains an Active Directory domain. You are deploying a new SAP environment on Azure. You need to configure SAP Single Sign-On to ensure that users can authenticate to SAP GUI and SAP WebGUI. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions: Configure secure network communication (SNC) by using SNCWIZARD. Change the user profiles for secure network communication (SNC). Create an Azure Key Vault service endpoint. Change and deploy the logon file. Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory. Answer Area:
Final Answer: Answer Area: Deploy Azure Active Directory Domain Services (Azure AD DS) and sync back to Active Directory Configure secure network communication (SNC) by using SNCWIZARD Change the user profiles for secure network communication (SNC) Change and deploy the logon file Why This Sequence is Correct: Step 1: Establishes the Azure AD DS domain, syncing on-premises AD users and enabling Kerberos authentication in Azure. Step 2: Configures the SAP system to use SNC with Azure AD DS, setting up the server-side SSO mechanism. Step 3: Updates SAP user profiles to map to AD users and enable SNC logon, ensuring user-level SSO functionality. Step 4: Updates and deploys the SAP GUI logon file to clients, completing the end-to-end SSO experience.
64
You need direct connectivity from an on-premises network to SAP HANA (Large Instances). The solution must meet the following requirements: Minimize administrative effort. Provide the highest level of resiliency. What should you use? A. ExpressRoute Global Reach B. Linux IPTables C. ExpressRoute D. NGINX as a reverse proxy
Final Answer: C. ExpressRoute Why ExpressRoute is the correct answer: Direct Connectivity: Azure ExpressRoute provides a private, dedicated connection between an on-premises network and Azure data centers. This is ideal for SAP HANA Large Instances, which are deployed in Azure data centers and require low-latency, high-bandwidth, and secure connectivity. Minimized Administrative Effort: ExpressRoute is a managed service by Microsoft and the network provider. Once set up, it requires minimal ongoing administrative overhead compared to manually configuring and maintaining solutions like IPTables or NGINX. Highest Level of Resiliency: ExpressRoute supports high availability through features like dual circuits (active/active or active/passive configurations) and integration with Azure’s backbone network. This ensures redundancy and failover capabilities, meeting the resiliency requirement. SAP HANA Compatibility: Microsoft explicitly recommends ExpressRoute for connecting to SAP HANA Large Instances due to its performance and reliability, as outlined in Azure documentation for SAP workloads.
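For illustration, a hedged Azure CLI sketch of provisioning a circuit; the provider, peering location, and bandwidth are placeholders, and HANA Large Instances additionally require ExpressRoute peering that Microsoft configures on the Large Instance side:

az network express-route create -g rg-network -n er-to-hli \
  --bandwidth 1000 --provider "Equinix" --peering-location "Washington DC" \
  --sku-family MeteredData --sku-tier Standard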
65
HOTSPOT You have an on-premises SAP HANA deployment that uses HANA system replication in synchronous mode. You plan to migrate the on-premises deployment to Azure and use ultra disks as part of the HANA deployment on Azure. You need to configure storage resources and high availability for the HANA deployment on Azure. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area Use the ultra disks as: Data disks only Data disks and the operating system disk Data disks, the operating system disk, and the temporary disk Implement high availability by using: Availability sets Availability zones Multiple regions
Final Answers for Hotspot: Use the ultra disks as: Data disks only Implement high availability by using: Availability sets Summary of Reasoning: Data disks only: Ultra disks are supported only for data disks in Azure VMs and meet SAP HANA’s performance needs for data and log volumes (AZ-120 focus on storage optimization). OS and temporary disks don’t require ultra disks and aren’t supported as such. Availability sets: Provides HA with low latency within a region, enabling Sync mode HANA System Replication, aligning with the on-premises setup and Azure HA best practices for SAP (AZ-120 focus on HA/DR).
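A hedged Azure CLI sketch of attaching an ultra disk as a data disk; names, sizes, and performance targets are illustrative, and the VM must run in a zone that supports ultra disks with ultra-disk compatibility enabled:

# Create a zonal ultra disk sized for a HANA data volume
az disk create -g rg-sap -n hana-data-1 --size-gb 1024 \
  --sku UltraSSD_LRS --zone 1 \
  --disk-iops-read-write 20000 --disk-mbps-read-write 750

# Attach it to the HANA VM as a data disk (the OS and temporary disks cannot be ultra disks)
az vm disk attach -g rg-sap --vm-name vm-hana1 --name hana-data-1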
66
You are planning the Azure network infrastructure to support the disaster recovery requirements. What is the minimum number of virtual networks required for the SAP deployment? A. 1 B. 2 C. 3 D. 4
Final Answer: B. 2 Why B is Correct: Azure DR Best Practices: Microsoft’s SAP on Azure documentation (e.g., “SAP HANA High Availability and Disaster Recovery”) recommends a paired-region approach. For example, a production VNet in East US and a DR VNet in West US, connected via VNet peering or ExpressRoute, support HANA System Replication or Azure Site Recovery. SAP Architecture: A typical SAP landscape (HANA DB, app servers, ASCS/SCS) can reside in one VNet per region, with subnets separating tiers. DR duplicates this in a second region, requiring a second VNet. AZ-120 Context: The exam tests knowledge of Azure networking for SAP, emphasizing efficient, minimal designs. Two VNets align with the simplest DR-capable architecture.
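A minimal sketch of the two-VNet pattern with global VNet peering, using hypothetical names and address spaces:

# Production VNet in the primary region
az network vnet create -g rg-sap -n vnet-prod -l eastus --address-prefix 10.1.0.0/16

# DR VNet in the paired region
az network vnet create -g rg-sap -n vnet-dr -l westus --address-prefix 10.2.0.0/16

# Peer the VNets (create the reverse peering as well for full connectivity)
az network vnet peering create -g rg-sap -n prod-to-dr \
  --vnet-name vnet-prod --remote-vnet vnet-dr --allow-vnet-access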
67
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements The Azure Extension for SAP stores performance data in an Azure Storage account. ( ) Yes ( ) No You can enable the Azure Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVNAMEEExtension cmdlet. ( ) Yes ( ) No You can enable the Azure Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMEExtension cmdlet. ( ) Yes ( ) No
Correct Answers: Statement 1: No (The Azure Extension for SAP does not store performance data in an Azure Storage account.) Statement 2: Yes (You can enable the Azure Extension for SAP on SLES 12 using Set-AzVMAEMExtension, assuming a typo in the cmdlet name.) Statement 3: Yes (You can enable the Azure Extension for SAP on Windows Server 2016 using Set-AzVMAEMExtension, assuming a typo in the cmdlet name.)
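For reference, the correctly spelled cmdlet from the Az PowerShell module would be invoked along these lines; the resource group and VM names are placeholders:

# Enable the Azure Extension for SAP on a SLES 12 VM
Set-AzVMAEMExtension -ResourceGroupName "rg-sap" -VMName "vm-sles12" -OSType Linux

# The same cmdlet works for a Windows Server 2016 VM
Set-AzVMAEMExtension -ResourceGroupName "rg-sap" -VMName "vm-win2016" -OSType Windows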
68
You plan to deploy an SAP production landscape in Azure. You need to recommend an Azure support agreement for the deployment. The solution must meet the following requirements: Receive support for moderate business impact events within four hours. Comply with the SAP support agreement. Minimize costs. Which support level should you recommend? A. Basic B. Professional Direct C. Developer D. Standard
Final Answer: D The Standard support level is the correct choice, balancing the 4-hour response, SAP compliance, and cost minimization for an SAP production landscape on Azure. Correct Answer: D. Standard Why it’s correct: 4-Hour Response: The Standard plan guarantees an initial response within 4 hours for Severity B incidents (moderate business impact), meeting the requirement precisely. SAP Compliance: SAP on Azure production landscapes require at least the Standard support level for 24/7 technical support and integration with SAP’s support ecosystem (e.g., via SAP Solution Manager). Microsoft’s SAP on Azure documentation confirms Standard as the minimum for production. Minimize Costs: At $100/month, Standard is significantly cheaper than Professional Direct ($1,000/month) while meeting all requirements, unlike Basic (free but no support) or Developer (~$29/month but insufficient for production). AZ-120 Relevance: The exam tests understanding of Azure support plans for SAP workloads, and Standard is the baseline for production SAP deployments.
69
You have an SAP production landscape that uses SAP HANA databases on Azure. You need to deploy a disaster recovery solution to the SAP HANA databases. The solution must meet the following requirements: Support failover between Azure regions. Minimize data loss in the event of a failover. What should you deploy? A. HANA system replication that uses asynchronous replication B. Azure Site Recovery C. Always On availability group D. HANA system replication that uses synchronous replication
Final Answer: D. HANA system replication that uses synchronous replication Why D is Correct: Failover Between Regions: HSR in synchronous mode can replicate HANA databases across Azure regions, enabling DR failover. Microsoft’s SAP on Azure documentation (e.g., “SAP HANA disaster recovery”) supports this with region pairs like East US and West US, connected via low-latency networking. Minimize Data Loss: Synchronous replication ensures every transaction is replicated to the secondary system before completion, achieving an RPO of 0. This is critical for production SAP HANA landscapes where data integrity is paramount. SAP HANA Native: HSR is SAP’s built-in replication technology, optimized for HANA databases on Azure (e.g., on M-series VMs or HANA Large Instances). It outperforms generic solutions like ASR for database-level consistency.
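A hedged sketch of the HSR setup commands, run as the <sid>adm user; site names, hosts, and the instance number are placeholders:

# On the primary site: enable system replication
hdbnsutil -sr_enable --name=SITE-PROD

# On the secondary site (after stopping HANA and copying the system PKI files):
# register against the primary in synchronous mode
hdbnsutil -sr_register --remoteHost=hana-prod --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=SITE-DR

# Verify replication status from the primary
hdbnsutil -sr_state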
70
You have an Azure subscription. You plan to deploy a virtual machine named VM1 that will have the following configurations: Data disk size: 4 TB Generation: Generation 2 Data disk type: Ultra disk Data disk encryption type: Double encryption VM1 will host the SAP global transport directory in a volume on the data disk. You need to ensure that you can replicate VM1 by using Azure Site Recovery. Which configuration should you change? A. generation B. data disk type C. data disk encryption type D. data disk size
Correct Answer: B. Data disk type Why This Is Correct: Reasoning: Azure Site Recovery does not support replication of VMs with Ultra disks. To enable replication for VM1, the data disk type must be changed from Ultra disk to a supported type, such as Premium SSD or Standard SSD. Premium SSD is typically recommended for SAP workloads due to its balance of performance and cost, and it aligns with the AZ-120 exam’s focus on disaster recovery planning for SAP systems. Changing the disk type to a supported option ensures ASR can replicate VM1 without compromising the SAP global transport directory’s functionality (which doesn’t demand Ultra disk performance). Why Others Are Incorrect: A. Generation: Generation 2 is supported by ASR, so no change is required. C. Data disk encryption type: Double encryption is compatible with ASR, so it’s not a barrier. D. Data disk size: 4 TB is within ASR’s supported limits, so it’s not an issue.
71
HOTSPOT For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statements When you deploy two standalone SAP Web Dispatchers to separate clustered virtual machines, you must deploy a load balancer to make the solution highly available. ( ) Yes ( ) No When you deploy Primary Application Server (PAS) and Additional Application Server (AAS) instances on separate virtual machines for SAP NetWeaver, you must deploy an Azure load balancer for high availability. ( ) Yes ( ) No When using an availability group listener for SAP application connectivity to Microsoft SQL Server servers in different Azure regions, you must deploy a load balancer in front of the disaster recovery SQL Server virtual machine. ( ) Yes ( ) No
Final Answers for Hotspot: When you deploy two standalone SAP Web Dispatchers to separate clustered virtual machines, you must deploy a load balancer to make the solution highly available: Yes When you deploy Primary Application Server (PAS) and Additional Application Server (AAS) instances on separate virtual machines for SAP NetWeaver, you must deploy an Azure load balancer for high availability: No When using an availability group listener for SAP application connectivity to Microsoft SQL Server servers in different Azure regions, you must deploy a load balancer in front of the disaster recovery SQL Server virtual machine: No Summary of Reasoning: Yes for Web Dispatchers: A load balancer is the standard Azure method to ensure HA for SAP Web Dispatchers, aligning with AZ-120 HA architectures. No for PAS/AAS: SAP NetWeaver’s internal load balancing via the Message Server handles HA for PAS/AAS, not requiring an Azure Load Balancer (AZ-120 focus on SAP-specific HA). No for SQL DR: A cross-region AG listener doesn’t mandate a load balancer in the DR region for basic failover, per Azure SQL Server DR patterns for SAP (AZ-120 DR focus).
72
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure. Solution: You use a third-party tool that uses backint to back up the SAP HANA database to Azure storage. Does this meet the goal? Yes No
Final Answer: Yes Reason: Using a third-party tool that leverages the backint interface to back up the SAP HANA database from Azure (Large Instances) to Azure storage meets the goal. It provides a certified, reliable method to store HANA backups in Azure, consistent with SAP HANA and Azure best practices. Why It Meets the Goal: Functionality: A third-party tool with backint can successfully back up the HANA database, including full, incremental, and log backups, to Azure storage. Azure Integration: The solution leverages Azure storage as the backup target, fulfilling the requirement to back up “to Azure.” SAP Certification: Backint is an SAP-certified interface, ensuring compatibility with HANA on HLI and reliable, application-consistent backups.
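For context, a hedged sketch of the global.ini [backup] parameters that route HANA backups through a backint agent; the parameter file paths depend on the third-party tool, and the SID (HDB here) is a placeholder:

[backup]
data_backup_parameter_file = /usr/sap/HDB/SYS/global/hdb/opt/backint-data.cfg
log_backup_parameter_file = /usr/sap/HDB/SYS/global/hdb/opt/backint-log.cfg
log_backup_using_backint = true
catalog_backup_using_backint = true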
73
You have an on-premises SAP environment hosted on VMware vSphere. You plan to migrate the environment to Azure by using Azure Site Recovery. You need to prepare the environment to support Azure Site Recovery. What should you deploy first? an on-premises data gateway to vSphere Microsoft System Center Virtual Machine Manager (VMM) an Azure Backup server a configuration server to vSphere
Correct Answer: D. A configuration server to vSphere Why This Is Correct: Reasoning: Azure Site Recovery’s migration process for VMware vSphere environments begins with deploying a configuration server on-premises within the vSphere environment. This server is essential for setting up replication of the SAP VMs to Azure. It handles tasks such as registering the VMware infrastructure with ASR, installing the mobility service on source VMs, and managing data replication to the Azure target region. The AZ-120 exam emphasizes practical steps for SAP workload migration, and deploying the configuration server is the foundational step for VMware-to-Azure migrations using ASR.
74
HOTSPOT - You have an on-premises SAP environment. Application servers run on SUSE Linux Enterprise Server (SLES) servers. Databases run on SLES servers that have Oracle installed. You need to recommend a solution to migrate the environment to Azure. The solution must use currently deployed technologies whenever possible and support high availability. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Application server operating system: 🔽 Oracle Linux 🔽 SLES 🔽 Windows Server 2016 Database server operating system: 🔽 Oracle Linux 🔽 SLES 🔽 Windows Server 2016 Database platform: 🔽 Azure SQL Database 🔽 Microsoft SQL Server 🔽 Oracle 🔽 SAP HANA
Final Answers for Hotspot: Application server operating system: SLES Database server operating system: SLES Database platform: Oracle Summary of Reasoning: Application Server OS - SLES: Retains the current OS, supports SAP NetWeaver HA with clustering (e.g., Pacemaker) and load balancing in Azure (AZ-120 focus on OS compatibility). Database Server OS - SLES: Keeps the existing OS, supports Oracle HA with Data Guard on Azure VMs (AZ-120 focus on HA configurations). Database Platform - Oracle: Uses the current database, certified for SAP on Azure, and enables HA with Oracle tools (AZ-120 focus on database migration/support).
75
DRAG DROP - You have an SAP environment on Azure. You use Azure Recovery Services to back up an SAP application server. You need to test the restoration process of a file on the server. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Actions | Answer Area Download and run the mount disk executable From Azure Cloud Shell, run the Get-AzBackupItem cmdlet From Backup dashboard menu, select File Recovery Recover the file and unmount the disk From Azure Cloud Shell, run the Get-AzBackupRecoveryPoint cmdlet
Final Answer: Answer Area: From Backup dashboard menu, select File Recovery Download and run the mount disk executable Recover the file and unmount the disk Why This Sequence is Correct: Step 1: Initiates the file recovery process from the Azure Backup dashboard, selecting the SAP application server and a recovery point. Step 2: Downloads and executes the script to mount the backup as a disk, making the files accessible. Step 3: Recovers the file and cleans up by unmounting the disk, completing the test of the restoration process. Alignment with Azure Backup: Microsoft’s documentation (e.g., “Recover files from Azure VM backup”) outlines this exact workflow: Go to the Recovery Services vault > Backup items > Select VM > File Recovery. Download and run the mount script. Copy files and unmount the disk.
76
This question requires that you evaluate the underlined text to determine if it is correct. When deploying SAP HANA to an Azure virtual machine, you can enable Write Accelerator to reduce the latency between the SAP application servers and the database layer. Instructions: Review the underlined text. If it makes the statement correct, select “No change is needed”. If the statement is incorrect, select the answer choice that makes the statement correct. No change is needed install the Mellanox driver start the NIPING service enable Accelerated Networking
Correct Answer: No change is needed Why This Is Correct: Reasoning: Despite the slightly imprecise wording, the statement is functionally correct in the context of SAP HANA on Azure. Write Accelerator is a valid feature to enable on the SAP HANA VM’s log disk to reduce write latency, which enhances database performance and indirectly benefits the interaction between SAP application servers and the database layer. The AZ-120 exam often tests practical application over pedantic phrasing, and Write Accelerator is a well-documented optimization for SAP HANA deployments (supported on M-series VMs with Premium SSDs). None of the alternative options (Mellanox driver, NIPING, or Accelerated Networking) align with the intent of reducing disk-related latency for SAP HANA. Why Others Are Incorrect: Install the Mellanox driver: Unrelated to disk latency or Write Accelerator. Start the NIPING service: A network diagnostic tool, not a performance enhancer for disks. Enable Accelerated Networking: Addresses network latency, not disk write latency, which is the focus of Write Accelerator.
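A minimal Azure CLI sketch of enabling Write Accelerator on the data disk at LUN 0 of an M-series VM; the resource group and VM names are placeholders:

az vm update -g rg-sap -n vm-hana1 --write-accelerator 0=true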
77
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Litware, Inc. is an international manufacturing company that has 3,000 employees. Litware has two main offices. The offices are located in Miami, FL, and Madrid, Spain. Existing Environment Infrastructure Litware currently uses a third-party provider to host a datacenter in Miami and a disaster recovery datacenter in Chicago, IL. The network contains an Active Directory domain named litware.com. Litware has two third-party applications hosted in Azure. Litware already implemented a site-to-site VPN connection between the on-premises network and Azure. SAP Environment Litware currently runs the following SAP products: - Enhancement Pack6 for SAP ERP Central Component 6.0 (SAP ECC 6.0) - SAP Extended Warehouse Management (SAP EWM) - SAP Supply Chain Management (SAP SCM) - SAP NetWeaver Process Integration (PI) - SAP Business Warehouse (SAP BW) - SAP Solution Manager All servers run on the Windows Server platform. All databases use Microsoft SQL Server. Currently, you have 20 production servers. You have 30 non-production servers including five testing servers, five development servers, five quality assurance (QA) servers, and 15 pre-production servers. Currently, all SAP applications are in the litware.com domain. Problem Statements The current version of SAP ECC has a transaction that, when run in batches overnight, takes eight hours to complete. You confirm that upgrading to SAP Business Suite on HANA will improve performance because of code changes and the SAP HANA database platform. Litware is dissatisfied with the performance of its current hosted infrastructure vendor. Litware experienced several hardware failures and the vendor struggled to adequately support its 24/7 business operations. Requirements Business Goals Litware identifies the following business goals: - Increase the performance of SAP ECC applications by moving to SAP HANA. All other SAP databases will remain on SQL Server. - Move away from the current infrastructure vendor to increase the stability and availability of the SAP services. 
- Use the new Environment, Health and Safety (EH&S) in Recipe Management function. - Ensure that any migration activities can be completed within a 48-hour period during a weekend. Planned Changes Litware identifies the following planned changes: - Migrate SAP to Azure. - Upgrade and migrate SAP ECC to SAP Business Suite on HANA Enhancement Pack 8. Technical Requirements Litware identifies the following technical requirements: - Implement automated backups. - Support load testing during the migration. - Identify opportunities to reduce costs during the migration. - Continue to use the litware.com domain for all SAP landscapes. - Ensure that all SAP applications and databases are highly available. - Establish an automated monitoring solution to avoid unplanned outages. - Remove all SAP components from the on-premises network once the migration is complete. - Minimize the purchase of additional SAP licenses. SAP HANA licenses were already purchased. - Ensure that SAP can provide technical support for all the SAP landscapes deployed to Azure. What should you use to perform load testing as part of the migration plan? JMeter SAP LoadRunner by Micro Focus Azure Application Insights Azure Monitor
Final Answer: B Correct Answer: B. SAP LoadRunner by Micro Focus Why it’s correct: SAP-Specific Testing: LoadRunner is designed for SAP environments, supporting protocols like SAP GUI, RFC, and web traffic, which are critical for testing SAP ECC (soon Business Suite on HANA), EWM, SCM, PI, and BW in Litware’s landscape. It can simulate the 8-hour batch job and validate HANA’s performance improvement. Migration Validation: It meets the technical requirement to “support load testing during the migration” by ensuring Azure infrastructure (e.g., VMs, HANA storage) handles production loads within the 48-hour window. Industry Standard: SAP LoadRunner is a recognized tool for SAP performance testing, often cited in SAP migration guides and compatible with Azure deployments, aligning with AZ-120’s focus on SAP workload planning. HA Support: It can test HA configurations (e.g., load-balanced Web Dispatchers or HANA replication) to ensure stability post-migration.
78
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure. Solution: You use a third-party tool that uses backint to back up the SAP HANA database to Azure storage. Does this meet the goal? Yes No
Final Answer: Yes Reason: Using a third-party tool that leverages the backint interface to back up the SAP HANA database from Azure (Large Instances) to Azure storage meets the goal. It provides a certified, effective method to store HANA backups in Azure, consistent with SAP HANA and Azure best practices for backup solutions. Why It Meets the Goal: Functionality: The third-party tool with backint can back up the HANA database (data, logs, and catalog) reliably, ensuring consistency and recovery capability. Azure Target: Writing backups to Azure storage fulfills the goal of backing up “to Azure.” SAP Support: Backint is an SAP-certified interface, widely used for HANA backups, including on HLI, as per SAP and Microsoft documentation.
79
DRAG DROP You have an SAP environment on Azure. You use Azure Site Recovery to protect an SAP production landscape. You need to validate whether you can recover the landscape in the event of a failure. The solution must minimize the impact on the landscape. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Actions | Answer Area Validate the SAP production landscape Create a virtual network that has the same subnets as the SAP production landscape Create a network security group (NSG) that restricts traffic to the primary region Shut down production virtual machines Select Test failover from the Recovery Plans blade Add a public IP address to a management server in the disaster recovery region
Correct Sequence of Actions Create a virtual network that has the same subnets as the SAP production landscape Create a network security group (NSG) that restricts traffic to the primary region Select Test failover from the Recovery Plans blade Validate the SAP production landscape Explanation of the Sequence and Why It’s Correct Create a virtual network that has the same subnets as the SAP production landscape Why it’s correct: Azure Site Recovery requires a test virtual network to perform a test failover. This network should mimic the production network (including the same subnets) to ensure the recovered environment closely resembles the original setup. Microsoft recommends using an isolated network for test failovers to avoid interference with the production environment. This step must come first because the test failover process needs a target network to bring up the replicated virtual machines (VMs). Minimizing impact: By creating a separate, isolated virtual network, you ensure that the test failover does not disrupt the production landscape. Create a network security group (NSG) that restricts traffic to the primary region Why it’s correct: After setting up the virtual network, you need to ensure that the test environment remains isolated from the production environment. An NSG can restrict traffic, preventing the test VMs from communicating with the primary region’s production resources. This step enhances security and isolation, which is critical for minimizing impact during validation. Minimizing impact: Restricting traffic ensures that the test failover does not accidentally affect production workloads or cause conflicts (e.g., duplicate IP addresses). Select Test failover from the Recovery Plans blade Why it’s correct: Once the virtual network and NSG are in place, you can initiate the test failover. In Azure Site Recovery, the “Test Failover” option in the Recovery Plans blade allows you to simulate the recovery of the SAP landscape. This action brings up the replicated VMs in the isolated test network without affecting the production VMs, which remain running. Minimizing impact: The test failover is designed to be non-disruptive. It does not require shutting down production VMs, as ASR creates a separate instance of the environment for testing. Validate the SAP production landscape Why it’s correct: After the test failover is complete, you validate the recovered SAP landscape in the test environment. This involves checking that the SAP applications, databases, and services are functioning as expected in the simulated disaster recovery scenario. This step confirms the recoverability of the landscape without altering the production environment. Minimizing impact: Validation occurs in the isolated test environment, ensuring no changes or interruptions to the live production landscape.
80
You recently migrated an SAP HANA environment to Azure. You plan to back up SAP HANA databases to disk on the virtual machines, and then move the backup files to Azure Blob storage for retention. Which command should you run to move the backups to the Blob storage? robocopy backint azcopy scp
Final Answer: C azcopy is the correct command to move SAP HANA backup files from VM disk to Azure Blob Storage, aligning with Azure’s storage tools and AZ-120 objectives. Correct Answer: C. azcopy Why it’s correct: Purpose-Built for Azure: AzCopy is Microsoft’s dedicated tool for moving files to Azure Blob Storage, making it the most efficient and supported choice for this task. Compatibility: Works on Linux VMs (e.g., SLES or RHEL for SAP HANA), where backups are stored on disk, and supports direct uploads to Blob Storage. SAP on Azure Alignment: Azure documentation for SAP HANA backups recommends tools like AzCopy for offloading disk-based backups to Blob Storage for retention, a common pattern in AZ-120 (e.g., backup automation and cost optimization). Simplicity: A single command moves files from the VM to Blob Storage, meeting the scenario’s needs without additional complexity.
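A hedged example of the upload, assuming the backups sit under /hana/backups on the VM and using a hypothetical storage account and SAS token:

azcopy copy "/hana/backups/*" \
  "https://sapbackupsacct.blob.core.windows.net/hana-backups?<SAS-token>" \
  --recursive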
81
Case Study This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To start the case study To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question. Overview Contoso, Ltd. is a manufacturing company that has 15,000 employees. The company uses SAP for sales and manufacturing. Contoso has sales offices in New York and London and manufacturing facilities in Boston and Seattle. Existing Environment Active Directory The network contains an on-premises Active Directory domain named ad.contoso.com. User email addresses use a domain name of contoso.com. SAP Environment The current SAP environment contains the following components: - SAP Solution Manager - SAP ERP Central Component (SAP ECC) - SAP Supply Chain Management (SAP SCM) - SAP application servers that run Windows Server 2008 R2 - SAP HANA database servers that run SUSE Linux Enterprise Server 12 (SLES 12) Problem Statements Contoso identifies the following issues in its current environment: - The SAP HANA environment lacks adequate resources. - The Windows servers are nearing the end of support. - The datacenters are at maximum capacity. Requirements Planned Changes Contoso identifies the following planned changes: - Deploy Azure Virtual WAN. - Migrate the application servers to Windows Server 2016. - Deploy ExpressRoute connections to all of the offices and manufacturing facilities. - Deploy SAP landscapes to Azure for development, quality assurance, and production. All resources for the production landscape will be in a resource group named SAP Production. Business goals Contoso identifies the following business goals: - Minimize costs whenever possible. - Migrate SAP to Azure without causing downtime. - Ensure that all SAP deployments to Azure are supported by SAP. - Ensure that all the production databases can withstand the failure of an Azure region. - Ensure that all the production application servers can restore daily backups from the last 21 days. Technical Requirements Contoso identifies the following technical requirements: - Inspect all web queries. - Deploy an SAP HANA cluster to two datacenters. - Minimize the bandwidth used for database synchronization. 
- Use Active Directory accounts to administer Azure resources.
- Ensure that each production application server has four 1-TB data disks.
- Ensure that an application server can be restored from a backup created during the last five days within 15 minutes.
- Implement an approval process to ensure that an SAP administrator is notified before another administrator attempts to make changes to the Azure virtual machines that host SAP.
It is estimated that during the migration, the bandwidth required between Azure and the New York office will be 1 Gbps. After the migration, a traffic burst of up to 3 Gbps will occur.
Proposed Backup Policy
An Azure administrator proposes the backup policy shown in the following exhibit.
Policy name: SapPolicy
Backup schedule: Frequency: Daily; Time: 3:30 AM; Timezone: (UTC) Coordinated Universal Time
Instant Restore: Retain instant recovery snapshot(s) for 5 Day(s)
Retention range:
✅ Retention of daily backup point: At 3:30 AM, for 14 Day(s)
✅ Retention of weekly backup point: On Sunday at 3:30 AM, for 8 Week(s)
✅ Retention of monthly backup point (Week Based): On the First Sunday at 3:30 AM, for 12 Month(s)
✅ Retention of yearly backup point (Week Based): In January, on the First Sunday at 3:30 AM, for 7 Year(s)
An Azure administrator provides you with the Azure Resource Manager template that will be used to provision the production application servers.
{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[parameters('vmname')]",
  "location": "EastUS",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces/', parameters('vmname'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "[parameters('vmSize')]"
    },
    "osProfile": {
      "computerName": "[parameters('vmname')]",
      "adminUsername": "[parameters('adminUsername')]",
      "adminPassword": "[parameters('adminPassword')]"
    },
    "storageProfile": {
      "ImageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2016-datacenter",
        "version": "latest"
      },
      "osDisk": {
        "name": "[concat(parameters('vmname'), '-OS')]",
        "caching": "ReadWrite",
        "createOption": "FromImage",
        "diskSizeGB": 128,
        "managedDisk": {
          "storageAccountType": "[parameters('storageAccountType')]"
        }
      },
      "copy": [
        {
          "name": "DataDisks",
          "count": "[parameters('diskCount')]",
          "input": {
            "caching": "None",
            "diskSizeGB": 1024,
            "lun": "[copyIndex('datadisks')]"
          }
        }
      ]
    }
  }
}
Once the migration completes, to which size should you set the ExpressRoute circuit to the New York office to meet the business goals and technical requirements?
500 Mbps
1,000 Mbps
2,000 Mbps
5,000 Mbps
Final Answer: 5,000 Mbps Why 5,000 Mbps is Correct: Post-Migration Focus: The question asks for the size after migration, where the 3 Gbps burst is the key requirement. The migration’s 1 Gbps need is temporary and irrelevant post-migration. Technical Requirement: A 5,000 Mbps circuit ensures the SAP production landscape in Azure (e.g., SAP ECC, SCM, HANA) can handle peak traffic from the New York office without latency or throttling, aligning with SAP’s performance expectations. Business Goal Balance: While 5,000 Mbps is more expensive than 2,000 Mbps, it’s the minimum size that meets the 3 Gbps burst, avoiding under-provisioning that could lead to costly downtime or performance issues (indirectly increasing costs). AZ-120 Context: The exam tests practical Azure planning for SAP, including sizing ExpressRoute for production workloads. Over-provisioning slightly (5 Gbps vs. 3 Gbps) is safer than under-provisioning for critical systems.
82
You are planning the Azure network infrastructure to support the disaster recovery requirements. What is the minimum number of virtual networks required for the SAP deployment? 1 2 3 4
Final Answer: 2 Why 2 is Correct: Standalone: The simplest SAP DR setup requires two VNets—one per region—to replicate the SAP landscape (e.g., HANA HSR, app server failover). This is a standard Azure pattern for SAP DR, per Microsoft’s documentation (e.g., “SAP HANA disaster recovery on Azure”). Contoso Case: The explicit DR requirement (“withstand the failure of an Azure region”) and HANA clustering across “two datacenters” reinforce the need for two VNets: one for the primary production landscape and one for the DR site. AZ-120 Focus: The exam tests efficient Azure networking for SAP, emphasizing the minimum viable design. Two VNets align with this principle, avoiding unnecessary complexity.
83
HOTSPOT You have an on-premises SAP environment. Backups are performed by using tape backups. There are 50 TB of backups. A Windows file server has BMP images of checks used by SAP Finance. There are 9 TB of images. You need to recommend a method to migrate the images and the tape backups to Azure. The solution must maintain continuous replication of the images. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Tape backups: ▼ AzCopy Azure Data Box Edge Azure Data Box Azure Storage Explorer File server: ▼ AzCopy Azure Data Box Edge Azure Data Box Azure Storage Explorer
Final Answers: Tape backups: Azure Data Box File server: Azure Data Box Edge Why These Are Correct: Tape Backups - Azure Data Box: Reason: 50 TB is a large, static dataset best suited for offline transfer. Azure Data Box allows you to stage the tape backups (e.g., via a server), ship the device to Azure, and upload to Blob Storage efficiently. No continuous replication is needed, as backups are typically point-in-time copies. AZ-120 Fit: The exam tests knowledge of Azure data migration tools, and Data Box is the standard for large offline transfers like tape backups. File Server - Azure Data Box Edge: Reason: 9 TB can be migrated initially with Data Box Edge, and its continuous sync capability (e.g., with Azure Blob Storage or File Storage) ensures ongoing replication of the BMP images as they change on the Windows file server. This meets the “continuous replication” requirement, critical for active SAP Finance data. AZ-120 Fit: Data Box Edge is highlighted in Azure for hybrid scenarios requiring real-time data sync, aligning with SAP’s need for up-to-date financial images.
84
You need direct connectivity from an on-premises network to SAP HANA (Large Instances). The solution must meet the following requirements: - Minimize administrative effort. - Provide the highest level of resiliency. What should you use? ExpressRoute Global Reach Linux IPTables ExpressRoute NGINX as a reverse proxy
Correct Answer: C. ExpressRoute Why This Is Correct: Reasoning: ExpressRoute provides a private, dedicated connection from an on-premises network to Azure, including SAP HANA Large Instances, which are deployed in Azure data centers. It minimizes administrative effort by leveraging a managed service (setup involves configuring a circuit and peering, then it’s maintained by Microsoft and the provider). It offers the highest resiliency through options like dual circuits and Azure’s redundant backbone network, ensuring continuous availability even if a single path fails. This aligns with Microsoft’s best practices for SAP HANA on Azure and the AZ-120 exam’s focus on infrastructure planning for SAP workloads.
85
You have an on-premises SAP environment hosted on VMware vSphere. You plan to migrate the environment to Azure by using Azure Site Recovery. You need to prepare the environment to support Azure Site Recovery. What should you deploy first? an on-premises data gateway to vSphere Microsoft System Center Virtual Machine Manager (VMM) an Azure Backup server a configuration server to vSphere
Correct Answer: A configuration server to vSphere Why It’s Correct: The migration of an on-premises SAP environment hosted on VMware vSphere to Azure using Azure Site Recovery requires setting up replication from the VMware VMs to Azure. The configuration server is the foundational component in this process. It is deployed as a VM in the vSphere environment using an Open Virtualization Appliance (OVA) template provided by Azure. Once deployed, it: Registers with the Azure Recovery Services Vault. Discovers VMs in the vSphere environment (via vCenter or ESXi hosts). Coordinates replication of VM data to Azure storage. Manages the installation of the Mobility Service on source VMs, which handles the actual replication.
86
HOTSPOT You have an on-premises SAP environment. Application servers run on SUSE Linux Enterprise Server (SLES) servers. Databases run on SLES servers that have Oracle installed. You need to recommend a solution to migrate the environment to Azure. The solution must use currently deployed technologies whenever possible and support high availability. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Application server operating system: ▼ Oracle Linux SLES Windows Server 2016 Database server operating system: ▼ Oracle Linux SLES Windows Server 2016 Database platform: ▼ Azure SQL Database Microsoft SQL Server Oracle SAP HANA
Summary of Answers Application server operating system: SLES Database server operating system: SLES Database platform: Oracle Why This Is Correct Alignment with Current Technologies: The on-premises environment uses SLES for both application and database servers and Oracle as the database. Keeping these technologies in Azure minimizes migration complexity and adheres to the question’s constraint. High Availability Support: SLES on Azure supports SAP high-availability scenarios (e.g., Pacemaker for application servers and Oracle Data Guard for the database), and Azure provides additional HA features like availability zones and load balancers. Oracle is also certified for SAP workloads on Azure with HA configurations. AZ-120 Context: The AZ-120 exam focuses on planning and administering SAP workloads on Azure, including migration strategies that leverage existing setups while ensuring HA/DR. This solution reflects a "lift-and-shift" approach, which is a common migration scenario covered in the exam.