test0 Flashcards
(87 cards)
You plan to deploy an SAP landscape on Azure that will use SAP HANA on Azure (Large Instances).
You need to ensure that outbound traffic from the application tier can flow only to the database tier.
What should you use?
A
application security groups
B
network security groups (NSGs)
C
Azure Firewall
D
network virtual appliances (NVAs)
The correct answer is: B. Network Security Groups (NSGs)
Why NSGs Are Correct:
SAP HANA on Azure (Large Instances) Context: In this architecture, the application tier runs on Azure VMs within a VNet, while the HANA Large Instances are connected via ExpressRoute or a similar network link. Microsoft documentation for SAP HANA Large Instances recommends using NSGs to secure traffic between tiers (e.g., application to database). NSGs are applied to the subnet hosting the application VMs to control outbound traffic to the HANA Large Instances subnet or IP range.
Requirement Fit: The question specifies that outbound traffic from the application tier must flow only to the database tier. NSGs allow you to:
Create an outbound rule allowing traffic from the application subnet to the HANA Large Instances IP range (e.g., TCP port 30315 for HANA instance 03).
Add a lower-priority “Deny All” rule to block all other outbound traffic.
AZ-120 Relevance: The exam emphasizes practical Azure networking solutions for SAP workloads. NSGs are a fundamental, cost-effective, and native Azure tool for securing SAP landscapes, making them the best fit here.
Example NSG Configuration:
Rule 1: Allow outbound from application subnet (e.g., 10.0.1.0/24) to HANA subnet (e.g., 10.0.2.0/24) on HANA ports (e.g., 30315). Priority: 100.
Rule 2: Deny all outbound traffic. Priority: 200.
This ensures that application tier traffic is restricted to the database tier only.
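The two rules above follow NSG evaluation semantics: rules are checked in ascending priority order (lower number first) and the first match wins, with an implicit DenyAllOutbound at the end. A minimal Python simulation of that behavior, using the hypothetical subnets and port from the example rather than real deployment values:

```python
import ipaddress

# Illustrative NSG outbound rule set from the example above.
# Format: (priority, action, destination prefix, destination port or None for any).
RULES = [
    (100, "Allow", "10.0.2.0/24", 30315),  # app subnet -> HANA subnet, HANA port
    (200, "Deny",  "0.0.0.0/0",   None),   # deny all other outbound traffic
]

def evaluate_outbound(dest_ip: str, dest_port: int) -> str:
    """Return the action of the first matching rule, lowest priority number first."""
    for _priority, action, prefix, port in sorted(RULES):
        in_prefix = ipaddress.ip_address(dest_ip) in ipaddress.ip_network(prefix)
        port_ok = port is None or port == dest_port
        if in_prefix and port_ok:
            return action
    return "Deny"  # NSGs end with an implicit DenyAllOutbound rule

print(evaluate_outbound("10.0.2.10", 30315))  # Allow: database tier on the HANA port
print(evaluate_outbound("8.8.8.8", 443))      # Deny: any other destination
```

Traffic to the database subnet on the HANA port matches the priority-100 rule; everything else falls through to the priority-200 deny.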
Final Answer:
B. Network Security Groups (NSGs)
Why: NSGs provide the simplest, most direct, and Azure-native method to enforce the traffic restriction, aligning with SAP on Azure best practices and the AZ-120 exam’s focus on practical deployment scenarios.
You have an Azure tenant and an SAP Cloud Platform tenant.
You need to ensure that users sign in automatically by using their Azure AD accounts when they connect to SAP Cloud Platform.
Which four actions should you perform in sequence? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Actions Answer Area
[ ] Configure the SAML settings for the Identifier and Reply URL.
[ ] From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, configure the SAP Cloud Platform Identity app to use the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, download the Federation Metadata XML file.
[ ] From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
Final Answer Area:
Answer Area:
1. From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
2. From the Azure Active Directory admin center, download the Federation Metadata XML file.
3. From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
4. Configure the SAML settings for the Identifier and Reply URL.
Correct Four Actions and Sequence:
The question requires four of the five listed actions. The action “From the Azure Active Directory admin center, configure the SAP Cloud Platform Identity app to use the Federation Metadata XML file” is redundant in this context: the standard SAML SSO setup between Azure AD and SAP Cloud Platform Identity Authentication consists of registering the app, exchanging metadata, and configuring endpoints. The correct sequence is:
From the Azure Active Directory admin center, add the SAP Cloud Platform Identity Authentication enterprise app.
Why: Registering the SAP app in Azure AD is the starting point for SSO configuration.
Order: First, as it creates the app object in Azure AD.
From the Azure Active Directory admin center, download the Federation Metadata XML file.
Why: The metadata file provides Azure AD’s SAML details (e.g., Sign-On URL, certificate) needed by SAP.
Order: Second, after the app is added, you can access and download this file from the SSO settings.
From the SAP Cloud Platform Identity administration console, configure a corporate identity provider to use the Federation Metadata XML file.
Why: Upload the Azure AD metadata to SAP IAS to establish trust, enabling SAP to accept Azure AD authentication.
Order: Third, after obtaining the metadata file, configure SAP to use it.
Configure the SAML settings for the Identifier and Reply URL.
Why: In Azure AD, configure the SAP app’s SAML settings with the Identifier and Reply URL (obtained from SAP IAS) to complete the federation. This ensures SAML assertions are sent to SAP correctly.
Order: Fourth, as this finalizes the Azure AD side after SAP is configured, though in practice, you might need SAP values earlier (iterative process simplified here).
You plan to deploy an SAP production landscape on Azure.
You need to identify which virtual machine series to use for the SAP HANA role and the SAP Central Services (SCS) role. The solution must meet the following requirements:
- Provide 384 GB of memory for the HANA role.
- Support ultra disks for the HANA role.
- Meet SAP certification.
- Minimize costs.
Which virtual machine series should you identify for each role? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
HANA:
[ ▼ ]
A-Series
D-Series
E-Series
SCS:
[ ▼ ]
A-Series
B-Series
E-Series
Final Answer Area:
Answer Area
HANA: [E-Series]
SCS: [B-Series]
Why These Answers Are Correct for AZ-120:
HANA: E-Series:
E48s_v4 (384 GB RAM) matches the exact memory requirement, supports ultra disks, and is SAP-certified for HANA (per SAP HANA on Azure guidelines). It is more cost-effective than the larger M-Series sizes, aligning with “minimize costs” among certified options.
AZ-120 emphasizes selecting VMs from Azure’s SAP-certified list (e.g., SAP Note 2522080).
SCS: B-Series:
B-Series is certified for lightweight SAP components like ASCS/SCS, and its burstable nature keeps costs low for production use. SCS doesn’t need high memory or ultra disks, making B-Series (e.g., B2ms) sufficient and economical.
AZ-120 tests cost-effective sizing for SAP roles beyond HANA.
You have an Azure subscription that contains the resources shown in the following table.
Name Type
RG1 Resource group
VM1 Virtual machine
corpsoftware Azure Storage account
You plan to deploy an SAP production landscape.
You create the following PowerShell Desired State Configuration (DSC) and publish the DSC configuration to corpsoftware.
Configuration JRE {
    Import-DscResource -ModuleName xPSDesiredStateConfiguration
    Package Installer {
        Ensure    = 'Present'
        Name      = "Java 8"
        Path      = "\\File01\Software\JreInstall.exe"
        Arguments = "/x REBOOT=0 SPONSORS=0 REMOVEOUTOFDATEJRES=1 INSTALL_SILENT=1 AUTO_UPDATE=0 EULA=0"
        ProductId = "26242AE4-039D-4CA4-87B4-2F64180101F0"
    }
}
You need to deploy the DSC configuration to VM1.
How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
-ResourceGroupName RG1 -VMName VM1 -ArchiveStorageAccountName corpsoftware -ArchiveBlobName 'JREInstall.ps1.zip'
Import-AzAutomationDscConfiguration
Set-AZAutomationDSCNode
Set-AzVMDscExtension
Set-AzVMExtension
-AutoUpdate -ConfigurationName
Installer
Java 8
JRE
JREInstall
Final Answer Area:
Answer Area
-ResourceGroupName RG1 -VMName VM1 -ArchiveStorageAccountName corpsoftware -ArchiveBlobName 'JREInstall.ps1.zip'
[Set-AzVMDscExtension]
-AutoUpdate -ConfigurationName
[JRE]
Why These Answers Are Correct for AZ-120:
Set-AzVMDscExtension: This cmdlet is the Azure-native way to deploy DSC to a VM, critical for SAP landscapes where components like Java (e.g., for SAP NetWeaver) must be installed consistently. The provided parameters match its syntax, and it’s practical for production deployments without Automation DSC.
JRE: The configuration name must match the DSC script’s definition (Configuration JRE), ensuring the Java 8 installation is applied to VM1. This aligns with DSC conventions and SAP dependency management in Azure.
Your on-premises network is connected to an SAP HANA deployment in the East US Azure region. The deployment uses the Standard SKU of an ExpressRoute gateway.
You need to implement ExpressRoute FastPath. The solution must meet the following requirements:
- Hybrid connectivity must be maintained if a single datacenter fails in the East US region.
- Hybrid connectivity costs must be minimized.
Which ExpressRoute gateway SKU should you use?
A
High Performance
B
ErGw3AZ
C
Ultra Performance
D
ErGw1AZ
Final Answer:
B. ErGw3AZ
Why ErGw3AZ Is Correct:
FastPath Support: ExpressRoute FastPath requires an Ultra Performance or ErGw3AZ gateway SKU. ErGw1AZ and High Performance do not support FastPath, so they cannot meet the requirement.
High Availability: As a zone-redundant (AZ) SKU, ErGw3AZ is deployed across Availability Zones in East US (e.g., Zone 1, Zone 2), maintaining hybrid connectivity if a single datacenter fails. This aligns with SAP HANA’s need for reliable hybrid connectivity in production.
Cost Minimization: Ultra Performance also supports FastPath but is not zone-redundant, so ErGw3AZ is the only listed SKU that satisfies both requirements, making it the lowest-cost valid choice.
You have an on-premises SAP NetWeaver-based ABAP deployment hosted on servers that run Windows Server or Linux.
You plan to migrate the deployment to Azure.
What will invalidate the existing NetWeaver ABAP licenses for each operating system once the servers are migrated to Azure? To answer, drag the appropriate actions to the correct operating systems. Each action may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Actions
- Changing the hostname assigned to the operating system
- Deallocating the Azure virtual machine
- Deleting the Azure virtual machine and recreating a new virtual machine that uses the same disks
- Using the Redeploy option from the Azure portal of the Azure virtual machine
- Replacing the primary NIC
Answer Area
Windows Server:
[Blank space]
[Blank space]
Linux:
[Blank space]
Final Answer Area:
Answer Area
Windows Server:
[Deleting the Azure virtual machine and recreating a new virtual machine that uses the same disks]
[Replacing the primary NIC]
Linux:
[Changing the hostname assigned to the operating system]
Why This Is Correct for AZ-120:
Windows Server:
Deleting and Recreating VM: In Azure, deleting a VM and recreating it generates a new VM ID, invalidating the hardware-bound license. SAP requires a new license key post-recreation, a key consideration in migration planning (AZ-120).
Replacing the primary NIC: While less definitive, SAP’s hardware key can include NIC details. Changing it may trigger a license mismatch, especially in strict configurations. With two slots, this is a reasonable second action per SAP’s conservative licensing model.
Linux:
Changing the hostname: SAP NetWeaver ABAP on Linux uses the hostname as the license anchor (e.g., via SAPSYSTEMNAME or slicense). Changing it post-migration invalidates the license, requiring a new key—a critical AZ-120 concept for SAP on Azure.
You have an Azure subscription that contains two SAP HANA on Azure (Large Instances) deployments named HLI1 and HLI2. HLI1 is deployed to the East US Azure region. HLI2 is deployed to the West US 2 Azure region.
You need to minimize network latency for inter-region communication between HLI1 and HLI2.
What should you implement?
A
a NAT gateway
B
IP routing tables
C
ExpressRoute FastPath
D
ExpressRoute Global Reach
Final Answer:
D. ExpressRoute Global Reach
Why ExpressRoute Global Reach Is Correct:
HANA Large Instances Networking: Each HLI unit is a bare-metal server in an Azure stamp, connected to a customer VNet via a dedicated ExpressRoute circuit (provided by Microsoft as part of the HLI service). Inter-region communication (e.g., East US to West US 2) defaults to Azure’s backbone, but without optimization, it may not be the most direct path.
Global Reach Functionality:
Links the ExpressRoute circuits of East US and West US 2.
Traffic flows over Microsoft’s private, high-speed global network (e.g., via peering locations), reducing hops and latency compared to standard inter-region routing.
Essential for scenarios like HANA System Replication (HSR) between regions, a common SAP HA/DR setup.
Latency Minimization: Global Reach provides the shortest, most predictable path between regions, critical for SAP HANA’s performance-sensitive workloads.
You have an on-premises SAP landscape that uses DB2 databases and contains an SAP Financial Accounting (SAP FIN) deployment. The deployment contains a file share that stores 50 GB of bitmap files.
You plan to migrate the on-premises SAP landscape to SAP HANA on Azure and store the images on Azure Files shares. The solution must meet the following requirements:
- Minimize costs.
- Minimize downtime.
- Minimize administrative effort.
You need to recommend a migration solution.
What should you recommend using to migrate the databases and to check the images? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Migrate the databases:
[Dropdown menu]
- Azure Database Migration Service
- Data Migration Assistant (DMA)
- SAP Database Migration Option (DMO) with System Move
Check the images:
[Dropdown menu]
- AZCopy
- Azure DataBox
- Azure Migrate
Final Answer Area:
Answer Area
Migrate the databases:
[SAP Database Migration Option (DMO) with System Move]
Check the images:
[AZCopy]
Why These Answers Are Correct for AZ-120:
SAP DMO with System Move:
Tailored for SAP migrations (DB2 to HANA), a core AZ-120 focus.
Balances downtime, cost, and effort by leveraging SAP’s native tools and Azure’s infrastructure.
Matches SAP HANA on Azure migration guidelines (e.g., SAP Note 2522080).
AZCopy:
Efficient, cost-effective, and low-effort for small file transfers (50 GB) to Azure Files, a common storage choice for SAP landscapes.
Ensures file integrity post-migration, meeting operational needs with minimal overhead.
Aligns with Azure’s recommended tools for SAP file migrations.
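To “check the images” after the copy, checksums of the source files can be compared with the transferred copies. AzCopy validates an MD5 hash per blob during transfer; the standalone comparison below is an illustrative sketch of the same idea applied to two local directory trees, not an AzCopy feature:

```python
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    """Compute the MD5 of a file in 1-MiB chunks to keep memory use flat."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(source_dir: Path, copied_dir: Path) -> list[str]:
    """Return relative paths whose checksums differ (or are missing) in the copy."""
    mismatches = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            dst = copied_dir / src.relative_to(source_dir)
            if not dst.is_file() or file_md5(src) != file_md5(dst):
                mismatches.append(str(src.relative_to(source_dir)))
    return mismatches
```

An empty result means every bitmap arrived intact; anything listed needs to be re-copied.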
You are planning a small-scale deployment of an SAP HANA on Azure (Large Instances) landscape.
You identify the costs of the virtual machine SKU required to host the HANA Large Instances landscape.
Which additional costs will be incurred?
A
a Linux support contract
B
an ExpressRoute circuit between the HANA Large Instances stamp and Azure
C
a Site-to-Site VPN connection between the HANA Large Instances stamp and Azure
D
an Azure Rapid Response support contract
Final Answer:
A. A Linux support contract
Why “A Linux support contract” Is Correct:
HANA Large Instances Cost Breakdown:
Included: HLI SKU cost (hardware, management, ExpressRoute to VNet).
Customer Responsibility: OS licensing/support (e.g., SUSE or Red Hat for SAP HANA), SAP HANA licensing, and optional Azure support plans.
Linux Support: SAP HANA on Azure (Large Instances) runs on Linux (e.g., SLES or RHEL), and SAP requires an active support contract for the OS in production. This is an additional cost beyond the HLI SKU, as Microsoft doesn’t provide OS licensing or support—it’s the customer’s responsibility.
Requirements Fit:
Small-Scale: Doesn’t eliminate the need for OS support; even a single HLI unit requires it.
Your on-premises network has a 100-Mbps internet connection and contains an SAP production landscape that has 14 TB of data files.
You plan to migrate the on-premises SAP landscape to Azure.
You need to migrate the data files to an Azure Files share. The solution must meet the following requirements:
- Migrate the files within seven days.
- Minimize administrative effort.
- Minimize service outages.
What should you use?
A
Azure Migrate
B
AzCopy
C
Azure Data Box
D
Azure Site Recovery
Final Answer:
C. Azure Data Box
Why Azure Data Box Is Correct:
7-Day Timeline:
Online transfer (e.g., AzCopy) takes ~13 days at 100 Mbps, exceeding the limit.
Data Box timeline (order, copy, ship, upload) fits within 7 days:
Shipping: 2-3 days each way (assume expedited).
Local copy: 14 TB at 100 MB/s (LAN speed, not internet) = ~40 hours.
Azure upload: Fast internal transfer.
Total: ~5-7 days, achievable with planning.
Minimize Administrative Effort:
Azure manages shipping and upload to Azure Files. Customer only copies data locally (e.g., via SMB to Data Box) and returns the device—simpler than managing a multi-day online transfer.
Minimize Service Outages:
Offline transfer via Data Box uses local storage and shipping, avoiding the 100-Mbps internet link. No impact on production SAP traffic, unlike AzCopy.
Azure Files Fit: Data Box supports direct transfer to Azure Files shares, aligning with the target.
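The timeline arithmetic above can be checked directly. This sketch assumes decimal terabytes and ideal, uncontended link utilization; real throughput would be lower, which only strengthens the case for Data Box:

```python
# Back-of-the-envelope comparison: online transfer of 14 TB over the 100-Mbps
# internet link versus the local copy onto a Data Box at ~100 MB/s LAN speed.
TB = 10**12  # decimal terabyte, in bytes

def transfer_days(size_bytes: int, link_bits_per_sec: float) -> float:
    """Ideal transfer time in days, ignoring protocol overhead and contention."""
    return size_bytes * 8 / link_bits_per_sec / 86_400

online_days = transfer_days(14 * TB, 100e6)     # 100-Mbps internet link
local_copy_hours = 14 * TB / 100e6 / 3_600      # ~100 MB/s LAN copy to the device

print(f"Online transfer: {online_days:.1f} days")               # ~13 days, misses the window
print(f"Local copy to Data Box: {local_copy_hours:.0f} hours")  # ~39 hours
```

The online path alone blows the seven-day budget before accounting for production traffic sharing the link, while the local copy completes in under two days.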
You are planning a deployment of SAP on Azure that will use SAP HANA.
You need to ensure that the SAP application servers are in the same datacenter as the HANA nodes.
What should you use?
A
an application group
B
a proximity placement group
C
a resource group
D
a virtual machine scale set
Final Answer:
B. A proximity placement group
Why Proximity Placement Group Is Correct:
SAP HANA Deployment:
If using HANA on Azure VMs (e.g., E-Series, M-Series), a PPG ensures the app servers and HANA VMs are in the same physical datacenter (e.g., same AZ or fault domain).
If using HANA Large Instances (bare-metal), app servers (VMs) in a PPG can be co-located with the HLI stamp’s Azure connectivity point, minimizing latency.
Latency Minimization: PPGs reduce network latency (e.g., <1 ms) by placing resources in the same physical infrastructure, critical for SAP HANA performance (SAP recommends app-to-DB latency <2 ms).
AZ-120 Context: The exam emphasizes optimizing SAP deployments on Azure. PPGs are explicitly recommended in Microsoft’s SAP HANA on Azure documentation (e.g., for HANA VMs or app server proximity to Large Instances) to ensure performance.
Requirements Fit: “Same datacenter” translates to physical proximity in Azure terms, and PPG is the tool designed for this purpose.
You have an Azure subscription.
You need to deploy multiple virtual machines that will host SAP HANA by using an Azure Resource Manager (ARM) template. The solution must meet SAP certification requirements.
How should you complete the template? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
{
"apiVersion": "2017-06-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
[Dropdown menu] true,
"AuxiliaryMode":
"enableAcceleratedNetworking":
"enableIPForwarding":
"ipConfigurations": [ ... ] } }
{
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
"hardwareProfile": {
"vmSize": [Dropdown menu]
- "Standard_DS16_v4"
- "Standard_E16"
- "Standard_M64s"
}
}
}
Final Answer Area:
Answer Area
{
"apiVersion": "2017-06-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
["enableAcceleratedNetworking"] true,
"ipConfigurations": [
…
]
}
}
{
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"properties": {
"hardwareProfile": {
"vmSize": ["Standard_M64s"]
}
}
}
Why These Answers Are Correct for AZ-120:
“enableAcceleratedNetworking”: Ensures low-latency networking, a SAP HANA certification requirement for Azure VMs. It’s a standard ARM property for NICs supporting HANA’s performance needs.
“Standard_M64s”: A certified, high-memory VM size optimized for SAP HANA (per SAP and Azure guidelines), suitable for multi-VM deployments. Standard_E16 offers less memory, and Standard_DS16_v4 is not an SAP HANA-certified size.
Exam Focus: AZ-120 tests SAP-specific configurations in ARM templates, emphasizing certified VM sizes and networking optimizations like accelerated networking for HANA.
You have an on-premises SAP HANA scale-out system with standby node.
You plan to migrate the system to Azure.
You need to configure Azure compute and database resources for the system. The solution must meet the following requirements:
- Support up to 20 TB of memory per node.
- Run on non-shared hardware.
What should you use for each resource? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Compute:
[Dropdown menu]
- An Mv2-series virtual machine
- HANA on Azure (Large Instances) Type I class
- HANA on Azure (Large Instances) Type II class
Database:
[Dropdown menu]
- Azure NetApp Files
- Premium SSD v2 disks
- Ultra disks
Final Answer Area:
Answer Area
Compute:
[HANA on Azure (Large Instances) Type II class]
Database:
[Ultra disks]
Why These Answers Are Correct for AZ-120:
Compute: HANA on Azure (Large Instances) Type II class:
Only Type II (e.g., S960) supports 20 TB memory per node, meeting the scale-out requirement.
Bare-metal ensures non-shared hardware, aligning with SAP HANA certification (SAP Note 2522080).
AZ-120 emphasizes HLI for high-memory SAP deployments.
Database: Ultra disks:
Matches the high-performance storage profile of HLI (e.g., NVMe/SSD used in Type II).
Certified for HANA workloads in Azure, making it the best fit among options, despite HLI’s managed storage.
Reflects AZ-120’s focus on selecting appropriate storage for HANA performance.
You have an Azure subscription.
You plan to deploy an SAP production landscape on Azure.
You need to select a support plan. The solution must meet the following requirements:
Respond to critical impact incidents within one hour.
Minimize costs.
What should you choose?
A. Standard
B. Premier
C. Professional Direct
D. Basic
Final Answer:
A. Standard
Why Standard Is Correct:
1-Hour Response:
The Standard plan includes a 1-hour initial response for Severity A incidents (critical business impact, e.g., an SAP system outage), meeting the requirement.
Minimize Costs:
At roughly $100/month, Standard is far cheaper than Professional Direct (about $1,000/month) or Premier. Basic (free) provides no technical incident support, so Standard is the lowest-cost plan that meets both requirements.
SAP Production Landscape:
Production SAP systems (e.g., HANA, NetWeaver) need Severity A coverage; Standard provides it without paying for Professional Direct’s additional delivery-management features.
You have an on-premises SAP landscape that is hosted on VMware vSphere and contains 50 virtual machines.
You need to perform a lift-and-shift migration to Azure by using Azure Migrate. The solution must minimize administrative effort.
What should you deploy first?
A
an Azure Backup server
B
an Azure VPN gateway
C
an Azure Migrate configuration server
D
an Azure Migrate process server
Final Answer:
C. An Azure Migrate configuration server
Closest Correct Answer:
C. An Azure Migrate configuration server: In agent-based migrations, the configuration server is the first component deployed; it handles discovery and coordinates replication. Modern agentless VMware migration deploys a single Azure Migrate appliance instead, but among the listed options, “configuration server” best describes the initial deployment step.
Why “Configuration Server” Is Chosen:
Minimize Administrative Effort: Agentless migration (via replication appliance) avoids installing agents on 50 VMs, reducing effort compared to agent-based (where config/process servers are separate).
First Step: Azure Migrate requires discovering the VMware environment before replication. The appliance (deployed first) handles this, and “configuration server” aligns with the coordination role in older contexts.
AZ-120 Context: The exam tests Azure Migrate for SAP migrations. For VMware, deploying the appliance (misnamed as “configuration server” here) is step one, followed by connectivity (e.g., VPN) and replication.
Option Limitation: No “replication appliance” option; “configuration server” is the best fit among listed choices.
You plan to deploy an SAP production landscape on Azure.
You need to estimate how many SAP operations will be processed by the landscape per hour. The solution must minimize administrative effort.
What should you use?
A
SAP Quick Sizer
B
SAP HANA hardware and cloud measurement tools
C
SAP S/4HANA Migration Cockpit
D
SAP GUI
Final Answer:
A. SAP Quick Sizer
Why SAP Quick Sizer Is Correct:
Estimating Operations:
Quick Sizer calculates SAPS based on inputs like the number of users, transactions per hour, or business processes (100 SAPS is defined as 2,000 fully business processed order line items per hour). This directly correlates to “SAP operations processed per hour.”
Outputs resource needs (e.g., CPU, memory) for Azure sizing.
Minimize Administrative Effort:
Web-based tool (accessible via SAP Support Portal), no installation or complex setup.
Users answer a questionnaire (e.g., “100 sales orders/hour”), and it generates results—automated and simple.
Contrast with alternatives requiring deployment (HWCCT) or manual monitoring (SAP GUI).
SAP on Azure:
SAP Quick Sizer is integrated with Azure planning (e.g., Microsoft provides mappings of SAPS to Azure VM SKUs like M-Series or HANA Large Instances).
Recommended in AZ-120 and Azure SAP documentation for greenfield or migration sizing.
Production Landscape:
Ideal for planning a new deployment, ensuring resources match expected throughput.
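The SAPS-to-throughput relationship can be expressed as a one-line conversion. This sketches only the benchmark definition (100 SAPS = 2,000 fully business processed order line items per hour); real Quick Sizer output accounts for many more inputs, and the 50,000-item workload below is a made-up example:

```python
# SAPS per (order line item per hour), from the SAP benchmark definition.
SAPS_PER_LINE_ITEM_PER_HOUR = 100 / 2_000  # 0.05

def estimate_saps(order_line_items_per_hour: int) -> float:
    """Rough SAPS estimate from a Quick Sizer-style throughput input."""
    return order_line_items_per_hour * SAPS_PER_LINE_ITEM_PER_HOUR

print(estimate_saps(2_000))   # 100.0 SAPS, by definition
print(estimate_saps(50_000))  # 2500.0 SAPS for a heavier, hypothetical workload
```

The resulting SAPS figure is then mapped to Azure VM SKUs using Microsoft's published SAPS ratings per SAP-certified size.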
You plan to deploy two Azure virtual machines that will host an SAP HANA database for an SAP landscape. The virtual machines will be deployed to the same availability set.
You need to meet the following requirements:
- Ensure that the virtual machines support disk snapshots.
- Ensure that the virtual machine disks provide submillisecond latency for writes.
- Ensure that each virtual machine can be allocated disks from a different storage cluster.
Which type of operating system disk and HANA database disk should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Operating system disk:
[Dropdown menu]
- Azure NetApp Files
- Premium storage
- Ultra disk
HANA database disk:
[Dropdown menu]
- Azure NetApp Files
- Premium storage
- Ultra disk
Final Answer Area:
Answer Area
Operating system disk:
[Premium storage]
HANA database disk:
[Ultra disk]
Why These Answers Are Correct for AZ-120:
OS Disk: Premium storage:
Meets snapshot and storage cluster requirements.
Submillisecond latency isn’t critical for OS disks in SAP HANA deployments; Premium SSD is standard and cost-effective (e.g., for M-Series VMs).
Aligns with Azure’s SAP HANA VM guidelines.
HANA Database Disk: Ultra disk:
Meets all requirements: snapshots, submillisecond latency (<1 ms), and separate storage clusters via availability set fault domains.
Essential for HANA’s write-intensive workloads (e.g., log writes), matching SAP certification standards.
Availability Set: Ensures VMs (and their disks) are in different fault domains, satisfying the “different storage cluster” requirement for both disk types.
AZ-120 Context: Tests storage selection for SAP HANA on Azure VMs, balancing performance (Ultra for DB) and cost (Premium for OS).
You are designing an SAP HANA deployment.
You estimate that the database will be 1.8 TB in three years.
You need to ensure that the deployment supports 60,000 IOPS. The solution must minimize costs and provide the lowest latency possible.
Which type of disk should you use?
A
Standard HDD
B
Standard SSD
C
Ultra disk
D
Premium SSD
Final Answer:
C. Ultra disk
Why Ultra Disk Is Correct:
60,000 IOPS:
Ultra Disk scales IOPS independently (up to 160,000 per disk), meeting 60,000 IOPS with a single 2 TB disk.
Premium SSD requires multiple disks (e.g., 3-4), increasing complexity and cost.
Lowest Latency:
Ultra Disk’s submillisecond latency (<1 ms) beats Premium SSD (~1-2 ms), Standard SSD (~5-10 ms), and Standard HDD (~10-20 ms), satisfying “lowest possible.”
Minimize Costs:
Single Ultra Disk (2 TB, 60,000 IOPS) ≈ $620/month.
Premium SSD requires striping several large disks (e.g., 3 × P80 at 20,000 IOPS each), paying for far more capacity than the 1.8 TB needed, which makes it considerably more expensive.
Ultra is the cheapest option that meets IOPS and latency.
1.8 TB Size: Ultra Disk (e.g., 2 TB) covers this with room for growth.
SAP HANA Fit: Certified for HANA on Azure VMs (e.g., M-Series), recommended for data/log volumes due to high IOPS and low latency (SAP Note 2522080).
AZ-120 Context: Tests storage optimization for HANA, balancing performance (IOPS, latency) and cost. Ultra Disk’s flexibility makes it ideal.
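A quick sketch of the disk-count math behind this comparison, assuming the published per-disk caps of a Premium SSD P80 (20,000 IOPS) and a single Ultra disk (configurable up to 160,000 IOPS); verify current Azure documentation before relying on these figures:

```python
import math

TARGET_IOPS = 60_000  # the requirement from the question

def disks_needed(per_disk_iops: int, target: int = TARGET_IOPS) -> int:
    """Minimum number of disks to stripe to reach the target IOPS."""
    return math.ceil(target / per_disk_iops)

print(disks_needed(20_000))   # 3 Premium SSD P80 disks striped for 60,000 IOPS
print(disks_needed(160_000))  # 1 Ultra disk is enough
```

One Ultra disk covers both the 1.8 TB capacity and the 60,000 IOPS, while the Premium SSD route pays for three 32-TiB disks of mostly unused capacity.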
You are designing the backup solution for an SAP database.
You have an Azure Storage account that is configured as shown in the following exhibit.
Account kind
StorageV2 (general purpose v2)
Performance
● Standard (x) ○ Premium
ℹ This setting cannot be changed after the storage account is created.
Secure transfer required
○ Disabled ● Enabled (x)
Allow Blob public access
● Disabled (x) ○ Enabled
Allow storage account key access
○ Disabled ● Enabled (x)
Allow recommended upper limit for shared access signature (SAS) expiry interval
○ Disabled ● Enabled (x)
Default to Azure Active Directory authorization in the Azure portal
● Disabled (x) ○ Enabled
Minimum TLS version
[Dropdown menu: Version 1.2]
Blob access tier (default)
○ Cool ● Hot (x)
Replication
[Dropdown menu: Geo-redundant storage (GRS)]
Large file shares
● Disabled (x) ○ Enabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Answer Area
Data in the storage account is stored on [answer choice].
[Dropdown menu options:]
- hard disk drives (HDDs)
- premium solid-state drives (SSDs)
- standard solid-state drives (SSDs)
Backups will be replicated [answer choice].
[Dropdown menu options:]
- to a storage cluster in the same datacenter
- to another Azure region
- to another zone within the same Azure region
Final Answers:
Data in the storage account is stored on: Hard disk drives (HDDs)
Backups will be replicated: To another Azure region
Correct Answer: Hard disk drives (HDDs)
Why It’s Correct:
Performance Tier: The storage account is configured with Standard performance, which uses hard disk drives (HDDs) as the underlying storage medium. Standard-tier storage accounts (e.g., StorageV2 with Standard performance) are designed for cost-effective, general-purpose storage and rely on HDDs, unlike Premium-tier accounts, which use SSDs.
Account Kind: StorageV2 (general purpose v2) supports multiple data types (blobs, files, queues, tables) and can be Standard or Premium, but the Standard selection confirms HDD-based storage.
Comparison:
Premium solid-state drives (SSDs): Used in Premium performance tier (e.g., Premium Block Blob or Premium File Shares), not applicable here due to the Standard setting.
Standard solid-state drives (SSDs): Azure offers Standard SSDs as managed disks for VMs (e.g., E-series disks), but storage accounts don’t use this category; Standard storage accounts use HDDs.
SAP Context: For SAP database backups, Standard StorageV2 with HDDs is suitable for cost-effective, long-term storage (e.g., blob backups), though performance-critical workloads (e.g., HANA) typically use SSDs for live data.
Correct Answer: To another Azure region
Why It’s Correct:
Replication Setting: The storage account uses Geo-redundant storage (GRS), which replicates data to a secondary Azure region (e.g., East US data is replicated to West US). GRS provides redundancy across regions for disaster recovery, ensuring backups are available if the primary region fails.
How GRS Works:
Data is first replicated synchronously three times within a single physical location in the primary region (the same local redundancy as LRS).
Then, it’s replicated asynchronously to a secondary region hundreds of miles away.
The statement asks where backups “will be replicated,” and GRS’s defining feature is the secondary region replication.
Comparison:
To a storage cluster in the same datacenter: Applies to Locally Redundant Storage (LRS), which replicates within one data center, not GRS.
To another zone within the same Azure region: Applies to Zone-Redundant Storage (ZRS), which replicates across availability zones in the same region, not GRS.
SAP Context: For SAP production landscapes, GRS is often used for backups to ensure regional resilience, aligning with high-availability and disaster recovery goals.
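The redundancy comparison above can be condensed into a simple lookup. This is a study aid, not Azure behavior: the SKU names are real Azure redundancy options, but the descriptions are simplified (for example, RA-GRS read access is omitted).

```python
# Simplified map of Azure Storage redundancy SKUs to where the copies land.
REPLICATION_SCOPE = {
    "LRS": "three copies in a single datacenter (storage cluster)",
    "ZRS": "three copies across availability zones in the same region",
    "GRS": "LRS in the primary region, then an async copy to the paired secondary region",
    "GZRS": "ZRS in the primary region, then an async copy to the paired secondary region",
}

def backup_destination(sku: str) -> str:
    """Answer 'where will data be replicated?' for a given redundancy SKU."""
    return REPLICATION_SCOPE[sku]

print(backup_destination("GRS"))
```

For the exam scenario, only GRS and GZRS ever leave the primary region, which is why the GRS setting maps to "to another Azure region."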
You have an Azure subscription that contains an SAP HANA on Azure (Large Instances) deployment.
The deployment is forecasted to require an additional 256 GB of storage.
What is the minimum amount of additional storage you can allocate?
A
256 GB
B
512 GB
C
1 TB
D
2 TB
Final Answer:
C. 1 TB
Why 1 TB Is Correct:
Minimum Increment: Microsoft’s HANA Large Instances storage expansion is standardized at 1 TB increments to maintain performance, consistency, and manageability (e.g., RAID configurations, NVMe/SSD allocation).
256 GB Requirement: While the forecast is 256 GB, the smallest additional allocation available is 1 TB. Customers request this via a support ticket, and Microsoft provisions it.
AZ-120 Context: The exam tests understanding of HLI specifics, including storage constraints. Unlike Azure VMs (where Ultra/Premium disks can scale granularly), HLI uses fixed tiers, and 1 TB is the documented minimum for additional storage.
Cost/Practicality: 1 TB is the smallest unit that meets and exceeds 256 GB, avoiding custom provisioning not offered by Microsoft.
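The sizing logic reduces to rounding the forecast up to the next 1 TB increment. A minimal sketch, assuming the 1 TB granularity described above; the function name is illustrative:

```python
import math

TB_GB = 1024  # HANA Large Instances additional storage is provisioned in 1 TB increments

def min_additional_storage_gb(forecast_gb: int) -> int:
    """Round a storage forecast up to the nearest 1 TB increment (minimum one increment)."""
    increments = max(1, math.ceil(forecast_gb / TB_GB))
    return increments * TB_GB

print(min_additional_storage_gb(256))  # a 256 GB forecast still requires 1024 GB (1 TB)
```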
You have an Azure subscription. The subscription contains two virtual machines named SQL1 and SQL2 that host a Microsoft SQL Server 2019 Always On availability group named AOG1.
You plan to deploy an SAP NetWeaver system that will have a database tier hosted on AOG1.
You need to configure networking for SQL1 and SQL2. The solution must meet the following requirements:
- Eliminate the need to create a distributed network name (DNN).
- Minimize costs.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Deploy SQL1 and SQL2 to:
[Dropdown menu options:]
- The same subnet on a virtual network
- Two different subnets on the same virtual network
- Two different virtual networks
Configure IP addressing by:
[Dropdown menu options:]
- Assigning two different IP addresses to the availability group listener
- Assigning two IP addresses to the primary network interface on each virtual machine
- Creating two network interfaces on each virtual machine and assigning a different IP address to each interface
Final Answer Area:
Answer Area
Deploy SQL1 and SQL2 to:
[Two different subnets on the same virtual network]
Configure IP addressing by:
[Assigning two different IP addresses to the availability group listener]
Why These Answers Are Correct for AZ-120:
Two Different Subnets:
Placing SQL1 and SQL2 in separate subnets of the same virtual network enables a multi-subnet availability group. In a multi-subnet configuration, the traditional virtual network name (VNN) listener works without a distributed network name (DNN) and without an Azure Load Balancer, because the listener registers an IP address in each subnet.
Staying within a single virtual network avoids VNet peering, and eliminating the load balancer minimizes cost.
Two IPs to Listener:
A multi-subnet listener is assigned one IP address per subnet. Clients that set MultiSubnetFailover=True in their connection string try all listener IPs in parallel and reach whichever node currently hosts the primary replica.
No extra NICs or secondary per-VM IP addresses are required, keeping the configuration simple and inexpensive.
AZ-120 Context: Tests SQL Server availability group networking for SAP. The multi-subnet listener is Microsoft's documented way to run an Always On availability group on Azure VMs without a DNN or a load balancer, making it the cost-minimizing choice here.
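The multi-subnet listener option in the answer list can be pictured as one listener IP registered per subnet, which is how SQL Server avoids needing either a DNN or a load balancer. A toy model with illustrative addresses (the subnet names and IPs are examples, not values from the scenario):

```python
# Toy view of a multi-subnet availability group listener:
# one IP is registered in each subnet that hosts a replica.
listener_ips = {
    "subnet-sql1": "10.0.1.10",  # listener IP in SQL1's subnet (example)
    "subnet-sql2": "10.0.2.10",  # listener IP in SQL2's subnet (example)
}

def connection_candidates() -> list[str]:
    """IPs a MultiSubnetFailover-aware client attempts in parallel."""
    return sorted(listener_ips.values())

print(connection_candidates())
```

The client driver resolves the listener name to both addresses and connects to whichever one answers, so failover needs no shared frontend resource at all.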
You have an on-premises network and an Azure subscription.
You plan to deploy a standard three-tier SAP architecture to a new Azure virtual network.
You need to configure network isolation for the virtual network. The solution must meet the following requirements:
- Allow client access from the on-premises network to the presentation servers.
- Only allow the application servers to communicate with the database servers.
- Only allow the presentation servers to access the application servers.
- Block all other inbound traffic.
What is the minimum number of network security groups (NSGs) and subnets required? To answer, drag the appropriate number to the correct targets. Each number may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Numbers: 1, 2, 3, 4
Answer Area:
NSGs: [____]
Subnets: [____]
Final Answer Area:
Answer Area
NSGs: [3]
Subnets: [3]
Why This Is Correct for AZ-120:
Subnets: 3:
Matches SAP’s three-tier architecture (presentation, application, database), a standard in Azure deployments (SAP Note 2522080).
Enables isolation and meets requirements (e.g., app-to-DB only) by segmenting tiers.
Minimum number to satisfy the traffic flow rules without overlap.
NSGs: 3:
One NSG per subnet provides precise control:
Presentation: On-premises access only.
Application: Presentation access only, outbound to DB.
Database: Application access only.
Blocks all other inbound traffic with an explicit low-priority deny rule in each NSG (needed because the built-in AllowVnetInBound default rule would otherwise permit traffic between subnets in the same VNet).
Aligns with Azure best practices for SAP network security.
AZ-120 Context: Tests SAP network design on Azure, emphasizing isolation and security with minimal complexity. Three subnets with three NSGs is the standard, efficient solution.
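The three allow rules plus the final deny can be sanity-checked with a toy evaluator. This is not Azure's NSG engine, just a model of the intended inbound flows; the tier labels are illustrative:

```python
# Toy model of per-subnet inbound rules for a three-tier SAP layout.
# Each entry: destination tier -> set of sources allowed inbound.
# Any pair not listed falls through to the deny rule.
INBOUND_ALLOW = {
    "presentation": {"on-premises"},
    "application": {"presentation"},
    "database": {"application"},
}

def is_allowed(source: str, destination: str) -> bool:
    """Return True if inbound traffic from source to destination is permitted."""
    return source in INBOUND_ALLOW.get(destination, set())

# The four requirements from the question:
assert is_allowed("on-premises", "presentation")   # client access to presentation
assert is_allowed("presentation", "application")   # presentation to application only
assert is_allowed("application", "database")       # application to database only
assert not is_allowed("on-premises", "database")   # everything else is blocked
```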
You have an SAP landscape on Azure that contains the virtual machines shown in the following table.
Name Role Azure Availability Zone in East US
SAPAPP1 Application Server Zone 1
SAPAPP2 Application Server Zone 2
You need to ensure that the Application Server role is available if a single Azure datacenter fails.
What should you include in the solution?
A
Azure Virtual WAN
B
Azure Basic Load Balancer
C
Azure Application Gateway v2
D
Azure AD Application Proxy
Final Answer:
C. Azure Application Gateway v2
Why Azure Application Gateway v2 Is Correct:
Zonal HA:
Application Gateway v2 can be deployed as a zone-redundant resource spanning the availability zones of East US, so the entry point itself survives the loss of any single zone (datacenter).
Its backend pool includes SAPAPP1 (Zone 1) and SAPAPP2 (Zone 2). If Zone 1 fails, health probes mark SAPAPP1 unhealthy and traffic flows to SAPAPP2 in Zone 2, keeping the Application Server role available.
Why Not Azure Basic Load Balancer:
The Basic SKU of Azure Load Balancer does not support availability zones; zonal and zone-redundant frontends require the Standard SKU, which is not among the answer choices. A Basic Load Balancer backend pool is also limited to VMs in a single availability set or scale set, so it cannot span SAPAPP1 and SAPAPP2 in different zones.
Why Not the Others:
Azure Virtual WAN is a hub-and-spoke connectivity service, not a load-balancing or HA mechanism for a VM tier.
Azure AD Application Proxy publishes on-premises web applications for remote access and provides no zone-redundant traffic distribution.
AZ-120 Context:
The exam emphasizes HA solutions for SAP tiers across Availability Zones. Of the listed options, only Application Gateway v2 is a zone-redundant service that can distribute traffic across VMs in different zones.
Existing Setup: The VMs are already deployed to different zones, so no guest clustering (e.g., Windows Failover Clustering) is implied; only zone-aware traffic distribution is needed.
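Whichever zone-redundant entry point is used, the failover behavior reduces to routing only to backends in zones that are still healthy. A simplified model (real load balancers probe continuously and distribute across all healthy backends, they do not merely pick one):

```python
# Toy zone-aware routing: route only to backends whose zone is healthy.
backends = [
    {"name": "SAPAPP1", "zone": 1},
    {"name": "SAPAPP2", "zone": 2},
]

def reachable(healthy_zones: set[int]) -> list[str]:
    """Names of application servers still reachable given the set of healthy zones."""
    return [b["name"] for b in backends if b["zone"] in healthy_zones]

print(reachable({1, 2}))  # normal operation: both servers receive traffic
print(reachable({2}))     # Zone 1 datacenter failure: SAPAPP2 keeps the role available
```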
You have an SAP ERP Central Component (SAP ECC) deployment on Azure virtual machines. The virtual machines run Windows Server 2022 and are members of an Active Directory domain named contoso.com.
You install SAP GUI on an Azure virtual machine named VM1 that runs Windows 10.
You need to ensure that contoso.com users can sign in to SAP ECC via SAP GUI on VM1 by using their domain credentials.
What should you do? To answer, drag the appropriate components to the correct tasks. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Components Answer Area
—————————————————————
ABAP Central Services (ASCS) Modify the instance profile for: [_________]
Primary Application Server (PAS) Run the SNC Kerberos Configuration for SAP GUI on: [_________]
SAP Web Dispatcher
VM1 Configure SAP Logon on: [_________]
Final Answer Area:
Answer Area
Modify the instance profile for: [Primary Application Server (PAS)]
Run the SNC Kerberos Configuration for SAP GUI on: [VM1]
Configure SAP Logon on: [VM1]
Why This Is Correct for AZ-120:
PAS for Instance Profile:
SNC/Kerberos SSO requires server-side configuration in the SAP system’s instance profile, specifically on the application server (PAS) handling dialog work processes. This is standard for ABAP systems like ECC (SAP Note 352295).
The ASCS instance profile configures the central services (message server and enqueue server), not the dialog logons that SNC protects, so it is not the profile to modify here.
VM1 for SNC Kerberos Configuration:
Client-side SNC setup occurs on the machine running SAP GUI (VM1). Installing the Kerberos library and configuring it for contoso.com enables SSO, minimizing user effort.
VM1 for SAP Logon:
SAP Logon configuration on VM1 ties the client to the SAP system with SNC enabled, completing the SSO chain from AD to ECC.
AZ-120 Context: The exam tests SAP integration with Azure AD domains for SSO. This solution leverages Kerberos (common for Windows-based SAP ECC) and aligns with Azure SAP deployment best practices.
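On the server side, the change amounts to adding SNC parameters to the PAS instance profile. The parameter names below are standard SAP SNC profile parameters, but every value is an example only; the identity, library path, and rollout flags depend entirely on your environment:

```python
# Illustrative SNC entries for a PAS instance profile (all values are examples).
snc_profile = {
    "snc/enable": "1",
    "snc/identity/as": "p:CN=SAPServiceECC@CONTOSO.COM",  # example service identity
    "snc/gssapi_lib": "C:\\SAP\\sncgss\\gsskrb5.dll",     # example Kerberos GSS-API library
    "snc/accept_insecure_gui": "1",  # still allow non-SNC SAP GUI logons during rollout
    "snc/accept_insecure_rfc": "1",  # still allow non-SNC RFC connections during rollout
}

def profile_lines(params: dict[str, str]) -> list[str]:
    """Render parameters in the SAP instance-profile 'name = value' form."""
    return [f"{name} = {value}" for name, value in params.items()]

for line in profile_lines(snc_profile):
    print(line)
```

The matching client-side piece is the Kerberos/SNC configuration and the SNC name in SAP Logon on VM1, which together complete the SSO chain described above.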