test3 Flashcards

https://www.secexams.com/exams/Microsoft/az-120/view/ (88 cards)

1
Q

You are migrating SAP to Azure. The ASCS application servers are in one Azure zone, and the SAP database server is in a different Azure zone. ASCS/ERS is configured for high availability.
During performance testing, you discover increased response times in Azure, even though the Azure environment has better compute and memory configurations than the on-premises environment.
During the initial analysis, you discover an increased wait time for Enqueue.
What are three possible causes of the increased wait time? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A
a missing Enqueue profile
B
disk I/O during Enqueue backup operations
C
misconfigured load balancer rules and health check probes for Enqueue and ASCS
D
active Enqueue replication
E
network latency between the database server and the SAP application servers

A

Correct Answers:
C. Misconfigured load balancer rules and health check probes for Enqueue and ASCS
D. Active Enqueue replication
E. Network latency between the database server and the SAP application servers
Why These Are Correct:
C - Misconfigured Load Balancer:
In Azure HA for ASCS/ERS, the load balancer is critical. Misconfigured rules or probes can delay or misroute Enqueue traffic, directly increasing wait times. This is a frequent migration challenge tested in AZ-120, reflecting Azure-specific HA nuances.
D - Active Enqueue Replication:
Replication to ERS is inherent in HA. If not optimized (e.g., synchronous or network-constrained across zones), it slows Enqueue processing, a realistic cause in Azure SAP deployments and a key performance consideration.
E - Network Latency:
Cross-zone latency in Azure (higher than on-premises) impacts Enqueue wait times when database interactions are involved. This is a common bottleneck in distributed SAP setups, making it a focal point for AZ-120 troubleshooting.
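For orientation, here is a minimal Az PowerShell sketch (the exhibit language used elsewhere in this deck) of a correctly configured internal Standard Load Balancer for ASCS; every name, the VIP, and the subnet ID are placeholders, and the 620<instance no.> probe-port convention follows Microsoft's SLES Pacemaker guidance:

# All names and addresses below are placeholders.
$subnetId = '/subscriptions/<sub-id>/resourceGroups/sap-rg/providers/Microsoft.Network/virtualNetworks/sap-vnet/subnets/sap-app'

# The health probe must target the port the Pacemaker cluster answers on
# for ASCS (convention: 620<instance no.>, e.g., 62000 for instance 00).
$probe = New-AzLoadBalancerProbeConfig -Name 'ascs-hp' -Protocol Tcp -Port 62000 -IntervalInSeconds 5 -ProbeCount 2

$fe = New-AzLoadBalancerFrontendIpConfig -Name 'ascs-fe' -SubnetId $subnetId -PrivateIpAddress '10.0.0.10'   # cluster VIP
$bp = New-AzLoadBalancerBackendAddressPoolConfig -Name 'ascs-bp'

# HA-ports rule with floating IP; omitting -EnableFloatingIP or probing
# the wrong port are classic misconfigurations behind Enqueue waits.
$rule = New-AzLoadBalancerRuleConfig -Name 'ascs-rule' -Protocol All -FrontendPort 0 -BackendPort 0 -FrontendIpConfiguration $fe -BackendAddressPool $bp -Probe $probe -EnableFloatingIP -IdleTimeoutInMinutes 30

New-AzLoadBalancer -ResourceGroupName 'sap-rg' -Name 'ascs-ilb' -Location 'westeurope' -Sku 'Standard' -FrontendIpConfiguration $fe -BackendAddressPool $bp -Probe $probe -LoadBalancingRule $rule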

2
Q

You have an on-premises SAP environment that uses AIX servers and IBM DB2 as the database platform.
You plan to migrate SAP to Azure. In Azure, the SAP workloads will use Windows Server and Microsoft SQL Server as the database platform.
What should you use to export from DB2 and import the data to SQL Server?
A
R3load
B
Azure SQL Data Warehouse
C
SQL Server Management Studio (SSMS)
D
R3trans

A

Final Answer:
A. R3load

Why Correct?
SAP Migration Fit: R3load is specifically designed for SAP database migrations, including heterogeneous scenarios like moving from IBM DB2 on AIX to Microsoft SQL Server on Windows Server. It handles the export from DB2 and import into SQL Server seamlessly.
End-to-End Solution: It provides a complete process—exporting DB2 data into files and importing them into SQL Server—while preserving SAP data structures and integrity.
Minimized Data Loss: R3load ensures all data is transferred accurately, critical for SAP production systems.
AZ-120 Context: The exam focuses on SAP-specific tools and processes for Azure migrations. R3load is part of SAP’s Migration Tools and is recommended in Azure’s SAP workload documentation for such migrations (e.g., classical migration or DMO).
Industry Standard: Widely used in SAP migrations to Azure when changing database platforms, aligning with best practices.

3
Q

HOTSPOT -
You are designing the backup for an SAP database.
You have an Azure Storage account that is configured as shown in the following exhibit.
The cost of your storage account depends on the usage and the options you choose below.


Account kind
StorageV2 (general purpose v2)

Performance
[ Standard ] Premium

  • Secure transfer required
    Disabled [ Enabled ]

Access tier (default)
[ Cool ] Hot

Replication
[ Geo-redundant storage (GRS) ▼ ]

Azure Active Directory authentication for Azure Files (Preview)
[ Disabled ] Enabled

Data Lake Storage Gen2
Hierarchical namespace
[ Disabled ] Enabled
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Data in the storage account is stored on
[answer choice].

- hard disk drives (HDDs)
- premium solid-state drives (SSDs)
- standard solid-state drives (SSDs)

Backups will be replicated
[answer choice].

- to a storage cluster in the same datacenter
- to another Azure region
- to another zone within the same Azure region

A

Final Answers:
Data in the storage account is stored on: Standard solid-state drives (SSDs)
Correct because the Standard performance tier in a general-purpose v2 account uses Standard SSDs, especially in the context of SAP workloads.
Backups will be replicated: To another Azure region
Correct because GRS replication involves copying data to a secondary Azure region for geographic redundancy.

Answer: Standard solid-state drives (SSDs)

Reasoning:

The “Performance” setting in the exhibit is set to “Standard.” Although Azure’s Standard tier historically used HDD-backed hardware for blob storage, in the context of SAP workloads and the AZ-120 exam, a general-purpose v2 account with Standard performance is treated as being backed by standard solid-state drives (SSDs). Standard SSDs offer cost-effective performance, which fits the scenario of designing a backup target for an SAP database.
“Premium solid-state drives (SSDs)” would apply only if the Performance tier were set to “Premium,” which is not the case here.
“Hard disk drives (HDDs)” correspond to the older Standard HDD offering; for SAP backup scenarios, Microsoft recommends at least Standard SSD-class storage for consistency and performance.

Answer: To another Azure region

Reasoning:

The “Replication” setting in the exhibit is set to “Geo-redundant storage (GRS).” GRS in Azure means that data is replicated synchronously three times within the primary region (using locally redundant storage, LRS) and then replicated asynchronously to a secondary Azure region, which is typically hundreds of miles away. This ensures geographic redundancy for disaster recovery purposes, a key consideration for SAP database backups.
“To a storage cluster in the same datacenter” aligns with Locally Redundant Storage (LRS), which is not selected here.
“To another zone within the same Azure region” aligns with Zone-Redundant Storage (ZRS), which is also not selected.
Therefore, “to another Azure region” is the correct choice, as it matches the GRS replication behavior.
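As a sketch, the exhibit's configuration could be reproduced with Az PowerShell as follows (account, resource group, and location are hypothetical):

# Hypothetical names; mirrors the exhibit: GPv2, Standard performance,
# Cool default access tier, GRS replication, secure transfer enforced.
New-AzStorageAccount -ResourceGroupName 'sap-backup-rg' -Name 'sapdbbackupsa' -Location 'westeurope' -SkuName 'Standard_GRS' -Kind 'StorageV2' -AccessTier 'Cool' -EnableHttpsTrafficOnly $true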

4
Q

DRAG DROP -
You migrate SAP ERP Central Component (SAP ECC) production and non-production landscapes to Azure.
You are licensed for SAP Landscape Management (LaMa).
You need to refresh from the production landscape to the non-production landscape.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions:
  • From the Azure portal, create a service principal
  • From the Cloud Managers tab in LaMa, add an adapter
  • From SAP Solution Manager, deploy the LaMa adapter
  • Add permissions to the service principal
  • Install and configure LaMa on an SAP NetWeaver instance

A

Correct Sequence:
1. Install and configure LaMa on an SAP NetWeaver instance
2. From the Azure portal, create a service principal
3. Add permissions to the service principal
4. From the Cloud Managers tab in LaMa, add an adapter
Why It’s Correct:
Step 1: Install and configure LaMa on an SAP NetWeaver instance:
LaMa must be operational before any Azure integration or refresh can occur. It’s deployed on a NetWeaver instance (e.g., in Azure or on-premises), a prerequisite for all subsequent steps.
Step 2: From the Azure portal, create a service principal:
LaMa needs a service principal to authenticate with Azure, created after LaMa is ready to be configured for cloud operations.
Step 3: Add permissions to the service principal:
Permissions (e.g., Contributor role) are assigned to the service principal to allow LaMa to manage Azure VMs, a necessary step before connecting LaMa to Azure.
Step 4: From the Cloud Managers tab in LaMa, add an adapter:
The Azure adapter links LaMa to Azure using the service principal, enabling the refresh operation (e.g., cloning production to non-production). This is the final setup step before executing the refresh.
Alignment with Case Study:
Enables LaMa to refresh the ECC landscape in Azure, meeting the migration context and LaMa licensing.
AZ-120 Relevance:
Tests knowledge of LaMa setup and Azure integration for SAP operations, a key skill in managing SAP landscapes on Azure.
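A hedged sketch of steps 2 and 3 in Az PowerShell; the display name, subscription scope, and Contributor role are assumptions (scope the role as narrowly as the landscape allows):

# Step 2: create the service principal LaMa will authenticate with.
# (The AppId property name assumes a recent Az.Resources version.)
$sp = New-AzADServicePrincipal -DisplayName 'lama-azure-adapter'

# Step 3: grant it permissions; Contributor at subscription scope is an
# assumption here, not a LaMa requirement.
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName 'Contributor' -Scope '/subscriptions/<subscription-id>'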

5
Q

HOTSPOT -
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Oracle Real Application Clusters (RAC) can be used to
provide high availability of SAP databases on Azure.
[ ] Yes [ ] No

You can host SAP databases on Azure by using Oracle on a
virtual machine that runs Windows Server 2016.
[ ] Yes [ ] No

You can host SAP databases on Azure by using Oracle on a
virtual machine that runs SUSE Linux Enterprise Server 12
(SLES 12).
[ ] Yes [ ] No

A

Final Answers:
Oracle Real Application Clusters (RAC) can be used to provide high availability of SAP databases on Azure: No
RAC is not supported for SAP on Azure; Oracle Data Guard is the recommended HA solution.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016: Yes
Windows Server 2016 is a supported OS for Oracle databases in SAP deployments on Azure.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12): Yes
SLES 12 is a certified and supported OS for Oracle-based SAP databases on Azure.

Statement 1:
“Oracle Real Application Clusters (RAC) can be used to provide high availability of SAP databases on Azure.”

Options: [ ] Yes [ ] No

Answer: No

Reasoning:

Oracle Real Application Clusters (RAC) is a clustered database solution that provides high availability and scalability by allowing multiple nodes to access a single database. However, as of the latest Azure documentation for SAP workloads, Oracle RAC is not supported on Azure for SAP-certified deployments.
Microsoft Azure relies on alternative high-availability (HA) solutions for SAP databases, such as Oracle Data Guard, which is supported for Oracle databases on Azure. Oracle Data Guard provides HA and disaster recovery through standby databases, which aligns with Azure’s infrastructure and SAP certification requirements.
RAC requires shared storage and low-latency clustering, which is challenging to implement in Azure’s virtualized environment due to the lack of native support for Oracle RAC-specific features like shared disk access across VMs. Azure instead recommends using Azure Availability Zones, VM-level HA, or Oracle Data Guard for SAP Oracle deployments.
For the AZ-120 exam, you need to know that RAC is not a supported or recommended HA option for SAP on Azure, making “No” the correct answer.
Statement 2:
“You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016.”

Options: [ ] Yes [ ] No

Answer: Yes

Reasoning:

Azure supports running Oracle databases on virtual machines (VMs) for SAP workloads, and Windows Server 2016 is a supported operating system for hosting Oracle databases in this context.
Microsoft’s SAP on Azure documentation confirms that Oracle databases (e.g., Oracle 12c, 19c) can be deployed on Azure VMs running supported Windows Server versions, including Windows Server 2016. This is part of Azure’s flexibility in supporting SAP systems, including SAP NetWeaver and S/4HANA, with Oracle as the backend database.
The VM must meet SAP and Oracle’s sizing and certification requirements (e.g., certified Azure VM types like M-series or E-series), but there’s no restriction against using Windows Server 2016 for this purpose.
For the AZ-120 exam, this is a straightforward validation of Azure’s support for Oracle on Windows VMs, making “Yes” the correct answer.
Statement 3:
“You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12).”

Options: [ ] Yes [ ] No

Answer: Yes

Reasoning:

SUSE Linux Enterprise Server 12 (SLES 12) is a fully supported operating system for running SAP workloads on Azure, including SAP databases like Oracle.
Microsoft and SAP certify specific Linux distributions for SAP deployments on Azure, and SLES 12 is explicitly listed as a supported OS for Oracle databases in SAP environments. This includes support for SAP NetWeaver and S/4HANA systems using Oracle as the database layer.
Azure documentation and SAP Notes (e.g., SAP Note 2039619) confirm that Oracle on SLES 12 is a valid configuration, provided the VM is properly sized and configured (e.g., using Azure M-series VMs) and the Oracle version (e.g., 12c, 19c) is SAP-certified.
For the AZ-120 exam, this reflects Azure’s broad support for Linux distributions in SAP scenarios, making “Yes” the correct answer.

6
Q

You have an SAP environment that is managed by using VMware vCenter.
You plan to migrate the SAP environment to Azure.
You need to gather information to identify which compute resources are required in Azure.
What should you use to gather the information?
A
Azure Migrate and SAP EarlyWatch Alert reports
B
Azure Site Recovery and SAP Quick Sizer
C
SAP Quick Sizer and SAP HANA system replication
D
Azure Site Recovery Deployment Planner and SAP HANA Cockpit

A

Correct Answer: A. Azure Migrate and SAP EarlyWatch Alert reports
Why It’s Correct:
Azure Migrate:
Assesses VMware VMs hosting the SAP environment, collecting real-time performance data (CPU, memory, disk) via integration with vCenter. It recommends Azure VM sizes (e.g., M-series for HANA, E-series for app servers) tailored to SAP workloads when configured with SAP-specific settings. It’s the primary Microsoft tool for migration assessment in Azure, directly addressing compute resource identification.
SAP EarlyWatch Alert reports:
Provides SAP-specific utilization data (e.g., peak CPU, memory usage) from the current environment, complementing Azure Migrate’s VM-level insights with application-level detail. This ensures the Azure compute resources (e.g., vCPUs, RAM) match SAP workload demands, critical for production systems.

7
Q

You plan to migrate an SAP environment to Azure.
You need to recommend a solution to migrate the SAP application servers to Azure. The solution must minimize downtime and changes to the environments.
What should you include in the recommendation?
A
Azure Storage Explorer
B
Azure Import/Export service
C
AzCopy
D
Azure Site Recovery

A

Correct Answer: D. Azure Site Recovery
Why It’s Correct:
Minimize Downtime:
ASR replicates SAP application server VMs to Azure in the background while the source remains operational. The planned failover cuts over to Azure with minimal disruption (e.g., minutes), far less than offline methods like AzCopy or Import/Export.
Minimize Changes:
ASR performs a lift-and-shift migration, preserving the VM’s OS, SAP application configuration, and settings. No reinstallation or major reconfiguration is needed, unlike file-based tools requiring redeployment.
Comparison:
Azure Storage Explorer, AzCopy: File transfer tools requiring manual redeployment, causing hours/days of downtime and significant changes.
Azure Import/Export: Offline disk shipping, with days of downtime and full environment rebuild.
ASR: Live replication with a short cutover, maintaining the original setup.
AZ-120 Relevance:
The AZ-120 exam emphasizes migration strategies for SAP on Azure. ASR is a standard, SAP-supported tool for migrating application servers (Windows or Linux) with minimal downtime and changes, making it the best fit for this scenario.

8
Q

You plan to migrate an on-premises SAP development system to Azure.
Before the migration, you need to check the usage of the source system hardware, such as CPU, memory, network, etc.
Which transaction should you run from SAP GUI?
A
SM51
B
DB01
C
DB12
D
OS07N

A

Final Answer:
D. OS07N

Why OS07N Is Correct:
Purpose: OS07N (Operating System Monitor) provides a comprehensive view of the underlying hardware performance of the SAP system’s host, including:
CPU: Utilization percentage and load.
Memory: Physical and virtual memory usage.
Network: Network I/O and throughput.
Disk: I/O operations and storage performance.
Relevance to Migration: For an Azure migration, you need to assess the source system’s resource utilization to right-size Azure VMs (e.g., E-series, M-series) and ensure they meet the SAP system’s demands. OS07N delivers the exact data required for this purpose.
AZ-120 Context: The exam tests your ability to use SAP tools for migration planning. OS07N aligns with this objective by enabling hardware usage analysis, which complements tools like the SAP EarlyWatch Alert report for sizing decisions.

9
Q

Your company has an SAP environment that contains the following components:
✑ SAP systems based on SAP HANA and SAP Adaptive Server Enterprise (SAP ASE) that run on SUSE Linux Enterprise Server 12 (SLES 12)
✑ Multiple SAP applications
The company plans to migrate all the applications to Azure.
You need to get a comprehensive list of all the applications that are part of the SAP environment.
What should you use?
A
the SAP license information
B
the SAP Solution Manager
C
the data volume management report
D
the network inventory and locations

A

Correct Answer: B. The SAP Solution Manager
Why It’s Correct:
SAP Solution Manager:
SolMan’s landscape management capabilities (e.g., SLD, LMDB) provide a comprehensive, real-time list of all SAP applications in the environment, including systems on HANA and ASE (e.g., ECC, BW, custom apps). It tracks system IDs, versions, and dependencies, making it ideal for migration planning to Azure.
It’s a standard SAP tool for managing and documenting the entire SAP landscape, directly addressing the need for a “comprehensive list of all the applications.”

10
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to migrate an SAP HANA instance to Azure.
You need to gather CPU metrics from the last 24 hours from the instance.
Solution: You query views from SAP HANA Studio.
Does this meet the goal?
A
Yes
B
No

A

Final Answer:
A. Yes

Why “Yes” Is Correct:
Functionality: SAP HANA Studio provides access to system views that store CPU metrics, such as M_HOST_RESOURCE_UTILIZATION, which can cover the last 24 hours if data collection is enabled (a reasonable assumption for a production or development system).
AZ-120 Relevance: The exam expects you to leverage SAP HANA tools like Studio for performance monitoring and migration planning. Querying views is a practical and supported approach to extract the required metrics.
Goal Alignment: The solution delivers CPU metrics, which are essential for sizing Azure VMs (e.g., ensuring the VM’s vCPUs match the HANA instance’s workload).
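For illustration, the same data can be pulled with SQL rather than the Studio UI. A minimal sketch using hdbsql (host, port, user, and password are placeholders, and the statistics-service table and column names vary by HANA revision, so verify them on your system first):

# Placeholders throughout; run from any host with the SAP HANA client installed.
$sql = @"
SELECT HOST, SERVER_TIMESTAMP,
       TOTAL_CPU_USER_TIME, TOTAL_CPU_SYSTEM_TIME, TOTAL_CPU_IDLE_TIME
FROM   _SYS_STATISTICS.HOST_RESOURCE_UTILIZATION_STATISTICS
WHERE  SERVER_TIMESTAMP > ADD_SECONDS(CURRENT_TIMESTAMP, -86400)
ORDER  BY SERVER_TIMESTAMP
"@
hdbsql -n '<hana-host>:30015' -u MONITORING_USER -p '<password>' $sql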

11
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a complex SAP environment that has both ABAP- and Java-based systems. The current on-premises landscapes are based on SAP NetWeaver 7.0
(Unicode and Non-Unicode) running on Windows Server and Microsoft SQL Server.
You need to migrate the SAP environment to a HANA-certified Azure environment.
Solution: You deploy a new environment to Azure that uses SAP NetWeaver 7.4. You export the databases from the on-premises environment, and then you import the databases into the Azure environment.
Does this meet the goal?
A
Yes
B
No

A

Correct Answer: B. No
Why It’s Correct:
HANA-Certified Azure Environment:
Requires SAP HANA as the database, not SQL Server. The solution’s export/import process implies a homogeneous migration (SQL Server to SQL Server) unless a HANA conversion is specified, which it isn’t.
Deploying NetWeaver 7.4 in Azure is fine, but keeping SQL Server misses the HANA requirement.
Ambiguity:
If the intent was to migrate to HANA, tools like SAP DMO or SWPM (with R3load) for a heterogeneous system copy would be needed, but these aren’t mentioned. The solution lacks the necessary database transformation step.
AZ-120 Relevance:
The AZ-120 exam tests precise understanding of SAP migrations to Azure, especially HANA-based environments. A HANA-certified environment explicitly means HANA as the DB, and this solution doesn’t ensure that transition from SQL Server.

12
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a complex SAP environment that has both ABAP- and Java-based systems. The current on-premises landscapes are based on SAP NetWeaver 7.0
(Unicode and Non-Unicode) running on Windows Server and Microsoft SQL Server.
You need to migrate the SAP environment to a HANA-certified Azure environment.
Solution: You upgrade to SAP NetWeaver 7.4, and then you migrate SAP to Azure by using Azure Site Recovery.
Does this meet the goal?
A
Yes
B
No

A

Correct Answer:
B. No

Why This Answer Is Correct for AZ-120:
HANA Requirement: The AZ-120 exam emphasizes understanding SAP HANA deployments on Azure. A HANA-certified environment requires SAP HANA as the database, not SQL Server. ASR performs a lift-and-shift, preserving the existing SQL Server database, which fails the goal.
Migration Process: Upgrading to NetWeaver 7.4 is a reasonable preparatory step (HANA requires at least NetWeaver 7.0 SP12 or higher, with 7.4 being fully optimized), but the migration to Azure must include a database conversion to HANA. Tools like SAP Database Migration Option (DMO) or System Copy (export/import) are typically used for this, not ASR alone.
ASR Limitations: Microsoft documentation for SAP on Azure specifies ASR for VM-level migration (e.g., for HA/DR or non-database changes), but database migrations to HANA require SAP-specific tools. The solution omits this critical step.
Exam Context: AZ-120 tests knowledge of SAP migration strategies. A correct solution might involve upgrading to 7.4, then using DMO to migrate to HANA on Azure, or performing a heterogeneous migration with SAP tools, not just ASR.

13
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a complex SAP environment that has both ABAP- and Java-based systems. The current on-premises landscapes are based on SAP NetWeaver 7.0
(Unicode and Non-Unicode) running on Windows Server and Microsoft SQL Server.
You need to migrate the SAP environment to a HANA-certified Azure environment.
Solution: You migrate SAP to Azure by using Azure Site Recovery, and then you upgrade to SAP NetWeaver 7.4.
Does this meet the goal?
A
Yes
B
No

A

Final Answer
Does this meet the goal?

B. No

Why “No” is Correct?
Database Mismatch: The goal requires SAP HANA, but the solution keeps SQL Server, failing the HANA-certified criterion.
Incomplete Process: ASR and a NetWeaver upgrade alone don’t achieve a HANA-based system; a database conversion step is essential but absent.
AZ-120 Context: The exam often tests your understanding of SAP HANA migration nuances, and this scenario highlights the limitation of lift-and-shift tools like ASR for database platform changes.

14
Q

HOTSPOT -
A company named Contoso, Ltd. has users across the globe. Contoso is evaluating whether to migrate SAP to Azure.
The SAP environment runs on SUSE Linux Enterprise Server (SLES) servers and SAP HANA databases. The Suite on HANA database is 4 TB.
You need to recommend a migration solution to migrate SAP application servers and the SAP HANA databases. The solution must minimize downtime.
Which migration solutions should you recommend? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

SAP application servers:
[Dropdown options]
- AzCopy
- Azure Site Recovery
- SAP HANA system replication
- System Copy for SAP Systems

SAP HANA databases:
[Dropdown options]
- AzCopy
- Azure Site Recovery
- SAP HANA system replication
- System Copy for SAP Systems

A

Correct Answers:
SAP application servers: Azure Site Recovery
SAP HANA databases: SAP HANA system replication

Why These Answers Are Correct for AZ-120:
SAP Application Servers: Azure Site Recovery
Why Correct: ASR replicates the SLES VMs hosting SAP NetWeaver to Azure with continuous sync, allowing a cutover with minimal downtime (minutes). It’s a lift-and-shift approach supported by Microsoft for SAP application server migrations (e.g., SAP Note 2529073).
Downtime: Near-zero, as replication happens in the background, and only the final failover disrupts service briefly.
Exam Context: AZ-120 emphasizes ASR for VM-level SAP migrations, especially for application servers on supported OS like SLES.
SAP HANA Databases: SAP HANA System Replication
Why Correct: HANA System Replication synchronizes the 4 TB database to a secondary HANA instance in Azure in real-time. After full sync, a brief cutover (seconds to minutes) completes the migration, meeting the downtime requirement. It’s SAP’s recommended method for large HANA databases on Azure.
Downtime: Minimal, as data is replicated live, and only the switchover causes a brief outage.
Exam Context: AZ-120 tests knowledge of HANA-specific migration tools. Microsoft and SAP documentation highlight HANA System Replication for minimal-downtime migrations (e.g., Azure SAP workload guides).

15
Q

You have an on-premises SAP environment hosted on VMware vSphere that uses Microsoft SQL Server as the database platform.
You plan to migrate the environment to Azure. The database platform will remain the same.
You need to gather information to size the target Azure environment for the migration.
What should you use?
A
the SAP EarlyWatch Alert report
B
Azure Advisor
C
the SAP HANA sizing report
D
Azure Stack Edge

A

Final Answer
What should you use?

A. the SAP EarlyWatch Alert report

Why “SAP EarlyWatch Alert Report” is Correct?
SAP-Specific Insights: EarlyWatch provides detailed, SAP-centric performance data (e.g., SAPS, database I/O) tailored to the current environment (NetWeaver on SQL Server), which is critical for sizing Azure VMs and storage.
Pre-Migration Applicability: It runs on the on-premises system, making it ideal for gathering data before the migration, unlike Azure-native tools that require an Azure deployment.
SQL Server Compatibility: Since the database stays SQL Server, EarlyWatch’s metrics align with the target Azure environment (e.g., SQL Server on Azure VMs), ensuring accurate sizing.
AZ-120 Relevance: The exam often highlights EarlyWatch as the go-to tool for SAP migration planning, especially when retaining the same database platform.

16
Q

You have an existing SAP production landscape that uses SAP HANA databases.
You plan to migrate the landscape to Azure.
Which Azure virtual machine series is supported for production SAP HANA database deployments?
A
F-Series
B
A-Series
C
M-Series
D
N-Series

A

Correct Answer:
C. M-Series

Why This Answer Is Correct for AZ-120:
SAP HANA Certification: The AZ-120 exam requires knowledge of Azure VM types supported for SAP workloads. SAP HANA in production has strict requirements for memory (in-memory processing), CPU, and storage performance (e.g., submillisecond latency for logs). The M-Series VMs (including Mv2-Series) are explicitly certified by SAP and Microsoft for production HANA deployments on Azure, as per SAP Note 2529073 and the Azure SAP workload planning guide.
Production Support: M-Series VMs support features critical for HANA, such as:
Large memory configurations (e.g., 1 TB to 5.7 TB) for HANA’s in-memory needs.
Premium SSD and Ultra Disk for high IOPS and low latency (e.g., Write Accelerator for /hana/log).
Availability in HANA-certified configurations (e.g., scale-up and scale-out).
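As a quick aid, the M-Series sizes offered in a given region can be listed with Az PowerShell; actual HANA certification must still be confirmed against the SAP HANA certified IaaS directory. The region name below is an example:

# Lists M-family sizes available in a region; cross-check against the
# SAP HANA certified IaaS directory before selecting one for production.
Get-AzVMSize -Location 'westeurope' |
    Where-Object { $_.Name -like 'Standard_M*' } |
    Select-Object Name, NumberOfCores, MemoryInMB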

17
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a complex SAP environment that has both ABAP- and Java-based systems. The current on-premises landscapes are based on SAP NetWeaver 7.0
(Unicode and Non-Unicode) running on Windows Server and Microsoft SQL Server.
You need to migrate the SAP environment to an Azure environment.
Solution: You migrate the SAP environment as is to Azure by using Azure Site Recovery.
Does this meet the goal?
A
Yes
B
No

A

Final Answer
B. No

Why is this correct?
The goal is to migrate a complex SAP environment to Azure, ensuring it operates effectively in the Azure environment. While Azure Site Recovery (ASR) is a powerful tool for disaster recovery and migration, it is not the appropriate solution for migrating an SAP environment like the one described, for the following reasons:

SAP NetWeaver 7.0 is not supported on Azure:
SAP NetWeaver 7.0 is an older release, and Microsoft Azure has specific supportability requirements for SAP workloads. According to SAP and Microsoft documentation, Azure supports newer versions of SAP NetWeaver (e.g., 7.4 or higher) for most workloads. SAP Note 1928533 (SAP Applications on Azure: Supported Products and Azure VM types) indicates that older versions like NetWeaver 7.0 may not be certified for Azure, particularly for production environments.
Migrating the environment “as is” using Azure Site Recovery would replicate the unsupported NetWeaver 7.0 system, which does not meet SAP’s certification requirements for Azure, potentially leading to operational issues or lack of support.
Unicode and Non-Unicode considerations:
The environment includes both Unicode and Non-Unicode systems. SAP recommends (and in many cases requires) Unicode for modern SAP systems, especially for cloud deployments. Migrating a Non-Unicode system to Azure without conversion may cause compatibility issues with newer SAP components or Azure services, as Non-Unicode is deprecated in newer SAP releases. Azure Site Recovery does not address this requirement, as it performs a lift-and-shift replication without transforming the system.
Azure Site Recovery is not optimized for SAP migrations:
Azure Site Recovery is designed for replicating VMs for disaster recovery or basic lift-and-shift migrations. It does not handle the specific complexities of SAP migrations, such as:
Database optimizations for Microsoft SQL Server on Azure (e.g., ensuring the correct VM types, storage configurations, or high-availability setups).
SAP-specific configurations, such as adjusting the SAP system to use Azure-certified VM types (e.g., M-series for SQL Server with SAP HANA or E-series for other workloads).
Handling ABAP and Java stack dependencies, which may require reconfiguration or upgrades during migration.
For SAP workloads, Microsoft recommends using tools and methodologies like SAP Migration Factory, Azure Database Migration Service, or manual migration processes with SAP tools (e.g., Software Provisioning Manager or Database Migration Option) to ensure compatibility and optimization.
Complex SAP environment requirements:
A “complex SAP environment” with ABAP and Java stacks implies multiple components (e.g., SAP Application Servers, Central Services, database servers). These require careful planning for Azure, including:
Selecting appropriate Azure VM types certified for SAP (e.g., E-series, M-series).
Configuring high availability (e.g., using Availability Zones or Sets).
Optimizing network latency (e.g., using Proximity Placement Groups).
Ensuring database compatibility (e.g., Microsoft SQL Server versions supported on Azure).
Azure Site Recovery does not address these SAP-specific configurations, as it focuses on replicating VMs without modifying the application or database layer to meet Azure’s best practices for SAP.
Lack of optimization for Azure:
Migrating “as is” does not leverage Azure’s capabilities, such as Azure Premium SSDs, Accelerated Networking, or SAP-certified configurations. SAP migrations to Azure typically involve rearchitecting or optimizing the environment to align with cloud best practices, which Azure Site Recovery cannot facilitate.

18
Q

HOTSPOT -
You have an on-premises deployment of SAP HANA.
You plan to migrate the deployment to Azure.
You need to identify the following from the last six months:
✑ The number of active users
✑ The database performance
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

From:
[Dropdown options]
- SAP GUI
- SAP Solution Manager
- A SAP Solution Manager work center

Run the:
[Dropdown options]
- SAP Quick Sizer
- Transaction ST06
- SAP EarlyWatch report

A

Correct Answers:
From: SAP Solution Manager
Run the: SAP EarlyWatch report

Why These Answers Are Correct for AZ-120:
From: SAP Solution Manager
Why Correct: SolMan is the central tool for managing and monitoring SAP systems, storing historical data over extended periods (e.g., six months). It’s the platform where EWA reports are generated and accessed, making it the logical source for this task.
Why Not SAP GUI? Limited to real-time or short-term data, not six-month history.
Why Not Work Center? Too specific; SolMan as a whole provides the reporting framework.
Exam Context: AZ-120 emphasizes SolMan for SAP landscape analysis and migration planning.

19
Q

You have an SAP environment on Azure that uses multiple subscriptions.
To meet GDPR requirements, you need to ensure that virtual machines are deployed only to the West Europe and North Europe Azure regions.
Which Azure components should you use?
A
Azure resource locks and the Compliance admin center
B
Azure resource groups and role-based access control (RBAC)
C
Azure management groups and Azure Policy
D
Azure Security Center and Azure Active Directory (Azure AD) groups

A

Final Answer
Which Azure components should you use?

C. Azure management groups and Azure Policy

Why “Azure management groups and Azure Policy” is Correct?
Policy Enforcement: Azure Policy directly addresses the requirement by restricting VM locations to West Europe and North Europe, using a deny effect for non-compliant regions.
Scalability: Management groups ensure the policy applies across all subscriptions in the SAP environment, critical for a multi-subscription setup.
GDPR Compliance: Limiting deployments to EU regions supports GDPR data residency, a common SAP-on-Azure scenario in AZ-120.
Practicality: This is a standard Azure governance approach, aligning with Microsoft’s recommendations for SAP workloads.
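A sketch of that assignment in Az PowerShell, using the built-in "Allowed locations" policy; the management group name is hypothetical, and property flattening differs across Az.Resources versions:

# Built-in "Allowed locations" definition (in newer Az.Resources versions
# the property may be $_.DisplayName instead of $_.Properties.DisplayName).
$def = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }

# 'sap-mg' is a hypothetical management group covering all SAP subscriptions.
New-AzPolicyAssignment -Name 'sap-allowed-locations' -Scope '/providers/Microsoft.Management/managementGroups/sap-mg' -PolicyDefinition $def -PolicyParameterObject @{ listOfAllowedLocations = @('westeurope','northeurope') }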

20
Q

HOTSPOT -
You have an Azure Availability Set that is configured as shown in the following exhibit.
PS Azure:> get-azavailabilityset | Select Sku, PlatformFaultDomainCount, PlatformUpdateDomainCount, name, type | FL

Sku : Aligned
PlatformFaultDomainCount : 2
PlatformUpdateDomainCount : 4
Name : SAP-Databases-AS
Type : Microsoft.Compute/availabilitySets
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Virtual machines that share [answer choice] will be susceptible to a storage outage.

- aligned SKUs
- the same fault domain
- the same update domain

Virtual machines in the Azure Availability Set can support [answer choice].

- datacenter outages
- managed disks
- regional outages

A

Correct Answers:
Virtual machines that share the same fault domain will be susceptible to a storage outage.
Virtual machines in the Azure Availability Set can support managed disks.

Why These Answers Are Correct for AZ-120:
Fault Domain (Statement 1):
Why Correct: The AZ-120 exam tests understanding of high availability for SAP workloads. Fault domains isolate hardware failures (e.g., storage, power), and VMs sharing a fault domain (out of the 2 specified) are vulnerable to the same outage. Microsoft documentation for Availability Sets confirms this.
Exam Context: Critical for SAP HANA deployments, where storage resilience is key.
Managed Disks (Statement 2):
Why Correct: The “Aligned” SKU explicitly supports managed disks, a feature VMs in this Availability Set can leverage. Managed disks are standard for SAP HANA on Azure (e.g., Ultra Disk, Premium SSD), aligning with AZ-120’s focus on storage options.
Exam Context: AZ-120 emphasizes configuring VMs with managed disks for SAP workloads, especially in Availability Sets.
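For reference, an Availability Set matching the exhibit could be created as follows (resource group and location are assumptions; the Aligned SKU is what enables managed disks):

# Reproduces the exhibit: Aligned SKU, 2 fault domains, 4 update domains.
New-AzAvailabilitySet -ResourceGroupName 'sap-rg' -Name 'SAP-Databases-AS' -Location 'westeurope' -Sku 'Aligned' -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 4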

21
Q

You plan to deploy an SAP environment on Azure that will use Azure Availability Zones.
Which load balancing solution supports the deployment?
A
Azure Basic Load Balancer
B
Azure Standard Load Balancer
C
Azure Application Gateway v1 SKU

A

Correct Answer:
B. Azure Standard Load Balancer

Why This Answer Is Correct for AZ-120:
Availability Zones Requirement: The AZ-120 exam focuses on high availability and disaster recovery for SAP workloads on Azure. Availability Zones provide fault isolation across physically separate datacenters within a region. The Standard Load Balancer supports zone-redundant configurations, ensuring traffic is distributed to SAP VMs (e.g., HANA or application servers) across zones, maintaining uptime if a zone fails.
SAP Deployment Fit: For SAP environments (e.g., HANA scale-up or NetWeaver clusters), Microsoft recommends the Standard Load Balancer for zone-redundant load balancing, as per Azure SAP workload guides and SAP Note 2529073. It supports the TCP/UDP protocols used by SAP components.

22
Q

You have an Azure subscription.
Your company has an SAP environment that runs on SUSE Linux Enterprise Server (SLES) servers and SAP HANA. The environment has a primary site and a disaster recovery site. Disaster recovery is based on SAP HANA system replication. The SAP ERP environment is 4 TB and has a projected growth of 5% per month.
The company has an uptime Service Level Agreement (SLA) of 99.99%, a maximum recovery time objective (RTO) of four hours, and a recovery point objective
(RPO) of 10 minutes.
You plan to migrate to Azure.
You need to design an SAP landscape for the company.
Which options meet the company’s requirements?
A.
✑ Azure virtual machines and SLES for SAP application servers
✑ SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for high availability and disaster recovery
B.
✑ ASCS/ERS and SLES clustering that uses the Pacemaker fence agent
✑ SAP application servers deployed to an Azure Availability Zone
✑ SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery
C.
✑ SAP application instances deployed to an Azure Availability Set
✑ SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery
D.
✑ ASCS/ERS and SLES clustering that uses the Azure fence agent
✑ SAP application servers deployed to an Azure Availability Set
✑ SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery

A

Final Answer
Which options meet the company’s requirements?

B.
✑ ASCS/ERS and SLES clustering that uses the Pacemaker fence agent
✑ SAP application servers deployed to an Azure Availability Zone
✑ SAP HANA on Azure (Large Instances) that uses SAP HANA system replication for database high availability and disaster recovery

Why “B” is Correct?
99.99% Uptime:
ASCS/ERS clustering with Pacemaker (~99.99%+ with fast failover).
App servers in Availability Zones (99.99% SLA, better than Sets’ 99.95%).
HANA system replication (sync HA = near-zero downtime).
RTO ≤ 4 hours: Pacemaker (~minutes) and HANA failover (~minutes to hours) fit.
RPO ≤ 10 min: HANA sync (0 min) for HA, async (~minutes) for DR.
SAP Best Practices: Matches Azure’s SAP HA/DR architecture (Pacemaker for SLES, Zones for app servers, HANA Large Instances for DB).
AZ-120 Relevance: Reflects a complete SAP landscape design with HA (within region) and DR (across regions).

23
Q

DRAG DROP -
Your on-premises network contains an Active Directory domain.
You have an SAP environment on Azure that runs on SUSE Linux Enterprise Server (SLES) servers.
You configure the SLES servers to use domain controllers as their NTP servers and their DNS servers.
You need to join the SLES servers to the Active Directory domain.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions

  • Add realm details to /etc/krb5.conf and /etc/samba/smb.conf
  • Shut down the following services: smbd, nmbd, and winbindd
  • Run net ads join -U administrator
  • Run net rpc join -U administrator
  • Install the samba-winbind package


A

Answer Area:
1. Install the samba-winbind package
2. Add realm details to /etc/krb5.conf and /etc/samba/smb.conf
3. Run net ads join -U administrator
Why This Sequence Is Correct for AZ-120:
Logical Flow:
Install: Ensures tools are available (prerequisite).
Configure: Sets up Kerberos and Samba for AD integration.
Join: Executes the domain join using the configured settings.
SLES-AD Integration: Standard process on SLES involves installing Samba/Winbind, configuring Kerberos and Samba, then joining via net ads join (preferred over RPC for modern AD with Kerberos).
Why Not the Others?
Shut down services: Optional and not typically required before joining; services may not even be running yet if freshly installed.
Run net rpc join: Older method, less secure (NTLM vs. Kerberos), and not the default for SLES-AD integration.
Exam Context: AZ-120 tests hybrid scenarios, and joining Linux VMs to AD is common for SAP environments on Azure. Microsoft and SUSE documentation align with this sequence.

24
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You deploy SAP HANA on Azure (Large Instances).
You need to back up the SAP HANA database to Azure.
Solution: You configure DB13 to back up directly to a local disk.
Does this meet the goal?
A
Yes
B
No

A

Final Answer
Does this meet the goal?

B. No

Why “No” is Correct?
Goal Mismatch: The goal is to back up the HANA database to Azure (implying cloud storage), but the solution only uses local disk storage on the Large Instance, which isn’t the same as Azure Blob Storage or a resilient, off-instance solution.
Resilience Gap: Local disk backups don’t provide the durability or DR benefits expected in Azure, failing to leverage Azure’s backup infrastructure.
AZ-120 Context: The exam often tests your understanding of Azure-native backup integration for SAP HANA (e.g., Blob Storage with snapshots), not just local storage options. This solution falls short of that standard.

25
Q

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to migrate an SAP HANA instance to Azure.
You need to gather CPU metrics from the last 24 hours from the instance.
Solution: You use Monitoring from the SAP HANA Cockpit.
Does this meet the goal?
A
Yes
B
No
A

Final Answer:
A. Yes

Why the Solution Meets the Goal:
SAP HANA Cockpit Overview: SAP HANA Cockpit is a web-based administration and monitoring tool for SAP HANA databases. It provides a centralized interface to monitor system performance, including CPU usage, memory, disk I/O, and more, for both real-time and historical data.
Monitoring Capabilities: The Monitoring section (or tiles such as Performance Monitor and System Monitor) in SAP HANA Cockpit allows administrators to view historical performance metrics, such as CPU utilization, over a specified time range. By default, SAP HANA collects and stores short-term performance data (e.g., in the _SYS_STATISTICS schema) for at least 24 hours, depending on configuration. You can access CPU metrics via the Load Monitor or Resource Utilization views, selecting a 24-hour timeframe.
On-Premises Applicability: Since the SAP HANA instance is still on-premises (pre-migration), SAP HANA Cockpit is already available as part of the HANA installation (assuming proper setup). It doesn’t rely on Azure-specific tools, making it a valid solution for the current environment.
Meeting the Goal: The Cockpit’s monitoring features directly provide CPU metrics for the last 24 hours, fulfilling the stated requirement without additional tools or configuration beyond what’s standard for SAP HANA.
26
Q

HOTSPOT -
You have SAP ERP on Azure. For SAP high availability, you plan to deploy ASCS/ERS instances across Azure Availability Zones and to use failover clusters.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

To create a failover solution, you can use an Azure Basic Load Balancer for Azure virtual machines deployed across the Azure Availability Zones.
[ ] Yes [ ] No

You can deploy Azure Availability Sets within an Azure Availability Zone.
[ ] Yes [ ] No

The solution must use Azure managed disks.
[ ] Yes [ ] No
A

Correct Answers:
To create a failover solution, you can use an Azure Basic Load Balancer for Azure virtual machines deployed across the Azure Availability Zones: No
You can deploy Azure Availability Sets within an Azure Availability Zone: Yes
The solution must use Azure managed disks: Yes

Why These Answers Are Correct for AZ-120:
Basic Load Balancer (No): AZ-120 tests knowledge of HA networking for SAP. The Basic Load Balancer lacks zone support, while the Standard Load Balancer is required for Availability Zones (SAP HA guide).
Availability Sets in a Zone (Yes): Reflects understanding of Azure HA options. Availability Sets can exist within a zone, though the scenario prioritizes zones for ASCS/ERS HA.
Managed Disks (Yes): AZ-120 emphasizes modern Azure infrastructure for SAP. Managed disks are a requirement for VMs in Availability Zones and clusters, ensuring reliability and performance (SAP Note 2529073).
27
Q

HOTSPOT -
You are deploying an SAP environment across Azure Availability Zones. The environment has the following components:
✑ ASCS/ERS instances that use a failover cluster
✑ SAP application servers across the Azure Availability Zones
✑ Database high availability by using a native database solution
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Network latency is a limiting factor when deploying DBMS instances that use synchronous replication across the Azure Availability Zones.
[ ] Yes [ ] No

The performance of SAP systems can be validated by using ABAPMeter.
[ ] Yes [ ] No

To help identify the best Azure Availability Zones for deploying the SAP components, you can use NIPING to verify network latency between the zones.
[ ] Yes [ ] No
A

Final Answers:
Network latency is a limiting factor when deploying DBMS instances that use synchronous replication across the Azure Availability Zones: Yes
The performance of SAP systems can be validated by using ABAPMeter: Yes
To help identify the best Azure Availability Zones for deploying the SAP components, you can use NIPING to verify network latency between the zones: Yes

Why Correct?
Statement 1 (Yes): Latency’s impact on synchronous replication is a well-documented constraint in Azure HA designs for SAP, aligning with AZ-120’s focus on performance optimization.
Statement 2 (Yes): ABAPMeter’s role in validating ABAP performance is standard SAP practice, applicable in Azure, and relevant to the exam’s SAP tooling knowledge.
Statement 3 (Yes): NIPING’s utility for latency testing across zones is a practical planning step for SAP deployments, a frequent AZ-120 topic.
28
Q

HOTSPOT -
You are planning the Azure network infrastructure for an SAP environment.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

You can segregate the SAP application layer and the DBMS layer into different virtual networks that are peered by using Global VNet peering.
[ ] Yes [ ] No

You can segregate the SAP application layer and the DBMS layer into different subnets in the same virtual network.
[ ] Yes [ ] No

If you segregate the SAP application layer and the DBMS layer into different peered virtual networks, you will incur costs for the data transferred between the virtual networks.
[ ] Yes [ ] No
A

Final Answers:
You can segregate the SAP application layer and the DBMS layer into different virtual networks that are peered by using Global VNet peering: Yes
You can segregate the SAP application layer and the DBMS layer into different subnets in the same virtual network: Yes
If you segregate the SAP application layer and the DBMS layer into different peered virtual networks, you will incur costs for the data transferred between the virtual networks: Yes

Why These Selections Are Correct for AZ-120:
For the AZ-120 exam, designing Azure network infrastructure for SAP workloads involves balancing isolation, performance, and cost. The answers reflect:
Flexibility: Both multi-VNet (peered) and single-VNet (subnets) designs are valid for SAP, addressing the first two statements.
Cost Awareness: The third statement correctly identifies the cost impact of VNet peering, a key consideration in SAP deployments where application-DBMS traffic is high.
These answers align with Microsoft’s SAP on Azure architecture guidelines:
Yes (1): Global VNet peering supports cross-region or isolated VNet designs.
Yes (2): A single VNet with subnets is a cost-effective, low-latency option.
Yes (3): Peered VNets incur data transfer costs, relevant for SAP’s high-bandwidth needs.
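A sketch of the peering from the first statement in Az PowerShell (VNet and resource group names are hypothetical); the same cmdlets create Global VNet peering when the two VNets are in different regions:

$app = Get-AzVirtualNetwork -Name 'sap-app-vnet' -ResourceGroupName 'sap-rg'
$db  = Get-AzVirtualNetwork -Name 'sap-db-vnet'  -ResourceGroupName 'sap-rg'

# Peering must be created in both directions.
Add-AzVirtualNetworkPeering -Name 'app-to-db' -VirtualNetwork $app -RemoteVirtualNetworkId $db.Id
Add-AzVirtualNetworkPeering -Name 'db-to-app' -VirtualNetwork $db  -RemoteVirtualNetworkId $app.Id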
29
Q

You deploy an SAP environment on Azure. You need to validate the load distribution to the application servers.
What should you use?
A
SAPControl
B
SAP Solution Manager
C
Azure Monitor
D
SAP Web Dispatcher
A

Correct Answer: D. SAP Web Dispatcher

Why This Answer Is Correct for AZ-120:
Purpose Fit: The AZ-120 exam focuses on administering SAP workloads on Azure, including validating load distribution to application servers. SAP Web Dispatcher is the SAP-provided tool for distributing requests to dialog instances (e.g., in NetWeaver), and it offers real-time insights into load balancing via its monitoring interface or logs (e.g., dev_webdisp).
Validation Capability: Unlike monitoring tools, Web Dispatcher is the load balancer itself, allowing direct validation of distribution (e.g., requests per server, server load). This aligns with the question’s intent to “validate” rather than just monitor.
SAP on Azure Context: Microsoft’s SAP on Azure architecture often uses SAP Web Dispatcher for HTTP/HTTPS load balancing (e.g., for SAP GUI or Fiori), complementing Azure Load Balancer for non-web traffic. It’s the most direct way to check SAP-level distribution.
Why Not the Others?
SAPControl: Instance management, not load distribution analysis.
SAP Solution Manager: Long-term monitoring, not real-time validation.
Azure Monitor: Useful for VM or infrastructure metrics, but less SAP-specific and requires additional setup (e.g., AMS) to validate application-level distribution.
D. SAP Web Dispatcher is the correct and exam-relevant choice, as it is the primary SAP tool for managing and validating load distribution to application servers, per SAP and Azure best practices (SAP Note 2529073).
30
Q

HOTSPOT -
You plan to deploy a highly available ASCS instance to SUSE Linux Enterprise Server (SLES) virtual machines in Azure. You are configuring an internal Azure Standard Load Balancer for the ASCS instance.
How should you configure the internal Standard Load Balancer? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer Area

Session persistence:
- Client IP
- Client IP and Protocol
- None

Floating IP (direct server return):
- Disabled
- Enabled
A

Final Answer:
Session persistence: None
Floating IP (direct server return): Enabled

Why Correct?
Session Persistence (None): ASCS HA uses a single active node, and the cluster manages failover. Stickiness isn’t needed, as all traffic targets the VIP, matching SAP HA designs in Azure.
Floating IP (Enabled): Essential for Pacemaker clustering, ensuring the ASCS VIP moves between nodes seamlessly, a critical requirement for HA in SAP-on-Azure deployments.
AZ-120 Alignment: Reflects standard configurations from Microsoft’s SAP HA documentation (e.g., ILB for ASCS with Pacemaker on SLES), a frequent exam topic.
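Expressed in Az PowerShell (names are placeholders; $fe, $bp, and $probe are assumed to be previously created frontend, backend pool, and probe configs, as in the ASCS load balancer sketch on card 1), the two portal choices map onto the rule like this:

# Floating IP enabled; omitting -LoadDistribution leaves it at 'Default'
# (5-tuple hash), which the portal displays as session persistence "None".
$rule = New-AzLoadBalancerRuleConfig -Name 'ascs-rule' -FrontendIpConfiguration $fe -BackendAddressPool $bp -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP -IdleTimeoutInMinutes 30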
30
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure.
Solution: You create a Recovery Services vault and a backup policy. Does this meet the goal?
A. Yes
B. No
Final Answer B. No Why the Solution Does Not Meet the Goal Azure Large Instances Overview: SAP HANA on Azure (Large Instances) is a certified bare-metal solution where HANA runs directly on physical hardware, not as a VM in the Azure IaaS model. HLI servers are connected to Azure VNets but are managed differently from typical Azure VMs, with specific storage and backup mechanisms. Azure Backup Limitations: A Recovery Services vault is an Azure resource used by Azure Backup to store backup data and manage backup policies. A backup policy defines the schedule and retention for backups. Azure Backup supports backing up SAP HANA databases running on Azure VMs (via the Azure Backup agent and HANA backup scripts), but it does not natively support SAP HANA on Azure Large Instances. The backup process for HLI relies on storage-level snapshots and integration with HANA-specific tools, not the standard Azure Backup service. HLI Backup Process: For SAP HANA on Azure Large Instances, Microsoft provides a backup solution based on storage snapshots (e.g., using Azure NetApp Files or dedicated storage volumes). These snapshots are managed through the Azure portal or CLI, coordinated with SAP HANA’s backup catalog (e.g., via hdbsql or HANA Cockpit). The process involves: Creating a snapshot of the HANA data volume. Registering the snapshot in the HANA backup catalog. Storing snapshots in Azure Blob Storage (manually or via scripts). This is distinct from Azure Backup’s Recovery Services vault approach, which is VM-centric and not designed for HLI’s bare-metal architecture. Mismatch: Creating a Recovery Services vault and backup policy is a valid step for backing up HANA on Azure VMs, but it doesn’t apply to HLI. The solution doesn’t leverage the HLI-specific backup mechanism, failing to meet the goal for this deployment type.
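For illustration, the snapshot-based approach coordinates with the HANA backup catalog using HANA SQL statements along these lines (a minimal sketch; the <id> values come from the snapshot workflow, and the Microsoft-provided HLI tooling scripts these steps):
-- Prepare the data area for a storage snapshot and record it in the backup catalog
BACKUP DATA CREATE SNAPSHOT COMMENT 'HLI storage snapshot';
-- After the storage-level snapshot succeeds, confirm it so it becomes a valid recovery point
BACKUP DATA CLOSE SNAPSHOT BACKUP_ID <id> SUCCESSFUL '<external-snapshot-id>';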
31
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Statements Yes No SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume. ( ) ( ) SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume. ( ) ( ) To enable Write Accelerator, you must use Azure Premium managed disks. ( ) ( )
Correct Answers: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume: No SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume: Yes To enable Write Accelerator, you must use Azure Premium managed disks: Yes Statement 1: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/data volume. Analysis: SAP HANA Certification: For M-Series VMs (memory-optimized for HANA), SAP certification ensures performance KPIs (e.g., latency, IOPS) are met. The /hana/data volume stores HANA data files, requiring high IOPS and low latency, but submillisecond write latency is not mandatory. Write Accelerator: This feature caches write operations to reduce latency, primarily for transaction logs (redo logs) in /hana/log. It’s not required for /hana/data, which can use Premium SSD or Ultra Disk without Write Accelerator for certification. Azure Guidance: Microsoft documentation specifies Write Accelerator for /hana/log in production HANA setups on M-Series, not /hana/data. Answer: No Why Correct: Certification doesn’t mandate Write Accelerator for /hana/data. It’s optional and typically not used there, as Ultra Disk or Premium SSD suffice. Statement 2: SAP HANA certification for M-Series Azure virtual machines requires that Write Accelerator be enabled on the /hana/log volume. Analysis: SAP HANA Certification: The /hana/log volume holds transaction logs, critical for HANA’s consistency and recovery. SAP requires submillisecond write latency for logs in production (SAP Note 2529073). Write Accelerator: On M-Series VMs, Write Accelerator (exclusive to Premium SSD) ensures submillisecond latency for writes to /hana/log. Microsoft and SAP mandate it for production HANA certification on M-Series when using Premium SSD (not Ultra Disk, which meets latency natively). Condition: If Ultra Disk is used for /hana/log, Write Accelerator isn’t needed (it’s not supported on Ultra Disk), but the question implies a Premium SSD context, where it’s required. Answer: Yes Why Correct: For M-Series with Premium SSD, Write Accelerator is required for /hana/log to meet SAP HANA certification KPIs in production, per Azure SAP guides. Statement 3: To enable Write Accelerator, you must use Azure Premium managed disks. Analysis: Write Accelerator: A feature available only for M-Series and Mv2-Series VMs, it enhances write performance by caching operations. It’s supported exclusively on Azure Premium managed disks (SSD). Disk Types: Not available for Standard HDD, Standard SSD, or Ultra Disk. Premium SSD is the only compatible type. Azure Docs: Microsoft confirms Write Accelerator requires Premium managed disks, a key requirement for SAP HANA log volumes on M-Series. Answer: Yes Why Correct: Write Accelerator is a Premium SSD-specific feature, making this a technical necessity for enabling it.
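As a sketch of how Write Accelerator is enabled on an existing M-series VM's log disk with the Azure CLI (assuming the /hana/log Premium SSD disk is attached at LUN 0; the resource names are hypothetical):
# Enable Write Accelerator on the Premium SSD disk at LUN 0
az vm update --resource-group SAPRG --name hana-m64s --write-accelerator 0=true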
32
You have an Azure subscription. You deploy Active Directory domain controllers to Azure virtual machines. You plan to deploy Azure for SAP workloads. You plan to segregate the domain controllers from the SAP systems by using different virtual networks. You need to recommend a solution to connect the virtual networks. The solution must minimize costs. What should you recommend?
A. a site-to-site VPN
B. virtual network peering
C. user-defined routing
D. ExpressRoute
Final Answer
B. Virtual network peering
Why "Virtual network peering" Is Correct
Cost Efficiency: Peering requires no gateways or circuits (unlike a site-to-site VPN or ExpressRoute) and carries only a small per-GB data transfer charge, meeting the "minimize costs" criterion.
Connectivity: Enables secure, low-latency communication between VNets, allowing SAP systems to authenticate with domain controllers (e.g., via LDAP, Kerberos).
SAP on Azure Fit: A standard approach for SAP deployments in Azure, where segregation into multiple VNets (e.g., for AD, SAP app servers, DB) is common, and peering ensures integration.
AZ-120 Relevance: The exam emphasizes cost-effective, Azure-native solutions for SAP networking, and VNet peering is the recommended method for intra-Azure VNet connectivity.
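A minimal sketch of creating the peering with the Azure CLI (VNet and resource-group names are hypothetical; a matching peering must also be created in the reverse direction):
# Peer the AD VNet to the SAP VNet and allow traffic between them
az network vnet peering create --resource-group SAPRG \
  --name ad-to-sap --vnet-name AD-VNet \
  --remote-vnet SAP-VNet --allow-vnet-access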
33
You deploy an SAP environment on Azure. Your company has a Service Level Agreement (SLA) of 99.99% for SAP. You implement Azure Availability Zones that have the following components:
✑ Redundant SAP application servers
✑ ASCS/ERS instances that use a failover cluster
✑ Database high availability that has a primary instance and a secondary instance
You need to validate the high availability configuration of the ASCS/ERS cluster. What should you use?
A. SAP Web Dispatcher
B. Azure Traffic Manager
C. SAPControl
D. SAP Solution Manager
Final Answer C. SAPControl Why SAPControl Is Correct SAPControl Overview: SAPControl is a command-line tool (and web service) provided by SAP to manage and monitor SAP instances, including starting, stopping, and checking the status of components like ASCS and ERS. It interacts with the SAP host agent (sapstartsrv) on each server. Validation Capabilities: Cluster Status: SAPControl can check the status of ASCS and ERS instances (e.g., using commands like sapcontrol -nr -function GetProcessList to verify running processes or GetSystemInstanceList to see instance distribution). Failover Testing: You can simulate a failover (e.g., by stopping the ASCS instance with sapcontrol -nr -function Stop) and confirm that the cluster moves the enqueue service to the ERS instance or another node, validating HA functionality. Health Check: It provides insights into instance availability and cluster role assignments, ensuring the failover cluster is correctly configured across Availability Zones. Azure Context: For SAP on Azure, SAPControl is used to verify that the ASCS/ERS cluster (e.g., integrated with Azure Load Balancer and Availability Zones) operates as expected, meeting the HA requirements for a 99.99% SLA. Practicality: It’s a native SAP tool, requiring no additional setup beyond what’s already present in the SAP deployment, making it ideal for validation.
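For illustration, typical validation calls look like this (instance number 00 is assumed; HAGetFailoverConfig and HACheckConfig are standard sapcontrol web service functions used for HA checks):
sapcontrol -nr 00 -function GetProcessList        # verify the ASCS processes are running
sapcontrol -nr 00 -function HAGetFailoverConfig   # show the HA solution and failover nodes
sapcontrol -nr 00 -function HACheckConfig         # check HA configuration consistency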
34
DRAG DROP - You are validating an SAP HANA on Azure (Large Instances) deployment. You need to ensure that sapconf is installed and the kernel parameters are set appropriately for the active profile. How should you complete the commands? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Values sap-ase sap-bobj sapconf sap-hana sap-netweaver saptune tuned Answer Area osprompt> more /etc/sysconfig/ [ Value ] osprompt> more /usr/lib/tuned/ [ Value ] /tuned.conf
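Assuming the standard SLES layout (sapconf configuration under /etc/sysconfig, tuned profiles under /usr/lib/tuned, and sap-hana as the active profile for a HANA system), the completed commands would most likely be:
osprompt> more /etc/sysconfig/sapconf
osprompt> more /usr/lib/tuned/sap-hana/tuned.conf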
35
You are deploying an SAP environment on Azure that will use an SAP HANA database server. You provision an Azure virtual machine for SAP HANA by using the M64s virtual machine SKU. You need to set the swap space by using the Microsoft Azure Linux Agent (waagent) configuration file. Which two settings should you configure? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. ResourceDisk.EnableSwapEncryption=n
B. AutoUpdate.Enabled=n
C. ResourceDisk.SwapSizeMB=229376
D. ResourceDisk.EnableSwap=y
Correct Answers:
C. ResourceDisk.SwapSizeMB=229376
D. ResourceDisk.EnableSwap=y
Why These Answers Are Correct for AZ-120:
ResourceDisk.EnableSwap=y
Why Correct: This is the required setting to enable swap space on the Azure VM; without it, the size setting is ignored. The Azure Linux Agent uses this setting to create a swap file on the resource disk.
SAP Context: SAP HANA systems require configured swap space, and enabling it via waagent is the standard step for Linux VMs on Azure (SAP Note 1999997, Azure SAP guide).
Exam Relevance: AZ-120 tests VM configuration for SAP HANA, including enabling swap.
ResourceDisk.SwapSizeMB=229376
Why Correct: This specifies the swap size (229376 MB = 224 GB), completing the configuration. SAP's general guidance calls for a comparatively small swap file on large-memory HANA systems, but Azure deployments sometimes use larger sizes based on RAM or workload, and the question supplies this value as the intended setting.
SAP Context: Swap space keeps the OS stable under memory pressure, and waagent allows precise control of its size.
Exam Relevance: AZ-120 includes configuring storage and memory settings for SAP HANA VMs.
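Combined, the relevant lines in the Azure Linux Agent configuration file (/etc/waagent.conf) would look like this (values taken from the question; the agent must be restarted for the change to take effect):
# /etc/waagent.conf
ResourceDisk.EnableSwap=y        # create a swap file on the local resource disk
ResourceDisk.SwapSizeMB=229376   # swap size in MB (224 GB)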
36
HOTSPOT - You have the following Azure Resource Manager template.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "apiVersion": "2016-01-01",
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[concat(copyIndex(), 'storage', uniqueString(resourceGroup().id))]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Premium_LRS" },
      "kind": "Storage",
      "properties": {},
      "copy": {
        "name": "storagecopy",
        "count": 6,
        "mode": "Serial",
        "batchSize": 1
      }
    }
  ]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Hot Area: Answer Area
Statements (Yes/No):
Six storage accounts will be created. [ ] [ ]
The storage accounts will be created in parallel. [ ] [ ]
The storage accounts will be replicated to multiple regions. [ ] [ ]
Final Answer Statements: Six storage accounts will be created: Yes The storage accounts will be created in parallel: No The storage accounts will be replicated to multiple regions: No Six storage accounts will be created: Yes Why True: The copy section specifies "count": 6, meaning the template will iterate 6 times to create storage accounts. Each iteration uses copyIndex() (0 to 5) in the name, resulting in 6 unique storage accounts (e.g., 0storage, 1storage, ..., 5storage). Details: The copy feature in ARM templates is used to deploy multiple instances of a resource, and here it explicitly sets 6 instances. AZ-120 Relevance: Understanding resource iteration in ARM templates is key for SAP deployments (e.g., provisioning multiple storage accounts for SAP HANA or app servers). The storage accounts will be created in parallel: No Why False: The copy section includes "mode": "Serial", which forces the storage accounts to be created sequentially (one after the other), not in parallel. "batchSize": 1 further reinforces this, limiting deployment to one account at a time. In contrast, "mode": "Parallel" (the default if unspecified) would create them simultaneously, but that’s not the case here. Details: Serial mode ensures each account is fully deployed before the next begins, useful for dependencies or naming constraints, though it slows deployment. AZ-120 Relevance: The exam tests ARM template mechanics, including deployment modes, which impact SAP environment provisioning timelines. The storage accounts will be replicated to multiple regions: No Why False: The sku specifies "name": "Premium_LRS", where LRS stands for Locally Redundant Storage. LRS replicates data three times within a single data center in one region (e.g., West Europe), not across multiple regions. Multi-region replication requires Geo-Redundant Storage (GRS) or Zone-Redundant Storage (ZRS), neither of which is configured here. Details: Premium_LRS uses SSDs for high performance (suitable for SAP workloads), but it’s limited to local redundancy within the resource group’s region ([resourceGroup().location]). AZ-120 Relevance: Understanding storage replication options is critical for SAP HA/DR designs, and this tests your grasp of LRS vs. GRS/ZRS.
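For contrast, a minimal sketch of the same copy block with parallel deployment (the default mode when "mode" is omitted):
"copy": {
  "name": "storagecopy",
  "count": 6,
  "mode": "Parallel"
}
With "mode": "Parallel", Resource Manager may create all six storage accounts simultaneously, which is faster but offers no ordering guarantees.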
37
You plan to deploy an SAP environment on Azure. You plan to store all SAP connection strings securely in Azure Key Vault without storing credentials on the Azure virtual machines that host SAP. What should you configure to allow the virtual machines to access the key vault?
A. Azure Active Directory (Azure AD) Privileged Identity Management (PIM)
B. role-based access control (RBAC)
C. a Managed Service Identity (MSI)
D. the Custom Script Extension
Final Answer C. a Managed Service Identity (MSI) Why MSI Is Correct Managed Service Identity (MSI) (now called Managed Identity in Azure): Overview: MSI is an Azure AD feature that provides an automatically managed identity for Azure resources (e.g., VMs). It eliminates the need to store credentials on the VM by assigning an identity that Azure manages. How It Works: You enable a system-assigned managed identity on the VM (or use a user-assigned managed identity). The VM’s identity is registered in Azure AD, and Azure provides a token-based authentication mechanism. The VM uses this identity to request an access token from Azure AD, which it then presents to Key Vault to retrieve secrets (e.g., SAP connection strings). Security: No credentials are stored on the VM; the identity is tied to the VM’s lifecycle (for system-assigned) or managed separately (for user-assigned), meeting the requirement. Access Control: MSI must be paired with RBAC permissions (e.g., “Key Vault Secrets User” role) to allow the VM to read secrets, but MSI itself is the authentication mechanism.
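A minimal sketch with the Azure CLI, assuming hypothetical resource names (SAPRG, SAPVM1, sap-kv); the identity is enabled on the VM and then granted read access to secrets:
# Enable a system-assigned managed identity on the VM (the output includes its principalId)
az vm identity assign --resource-group SAPRG --name SAPVM1
# Allow that identity to read secrets (the SAP connection strings) from the vault
az keyvault set-policy --name sap-kv --object-id <principalId> \
  --secret-permissions get list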
38
HOTSPOT - You deploy SAP HANA by using SAP HANA on Azure (Large Instances). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area Statements -------------------------------------------------------------------------------------------- Yes [ ] No [ ] You can use SAP HANA Studio to monitor CPU, memory, network, and storage usage for SAP HANA on Azure (Large Instances). [ ] [ ] Azure Extension for SAP is required to monitor the performance of SAP HANA on Azure (Large Instances). [ ] [ ] You can use the SAP HANA HW Configuration Check Tool (HWCCT) to monitor SAP HANA running on SAP HANA on Azure (Large Instances). [ ] [ ]
Correct Answers: You can use SAP HANA Studio to monitor CPU, memory, network, and storage usage for SAP HANA on Azure (Large Instances): Yes Azure Extension for SAP is required to monitor the performance of SAP HANA on Azure (Large Instances): No You can use the SAP HANA HW Configuration Check Tool (HWCCT) to monitor SAP HANA running on SAP HANA on Azure (Large Instances): No Statement 1: You can use SAP HANA Studio to monitor CPU, memory, network, and storage usage for SAP HANA on Azure (Large Instances). Analysis: SAP HANA Studio: A graphical administration tool for SAP HANA, offering monitoring capabilities like CPU, memory, disk, and network usage via the Administration Console (e.g., “Performance” and “System Information” tabs). Large Instances Context: SAP HANA on Azure (Large Instances) is a bare-metal deployment, but it runs HANA software identically to other platforms. HANA Studio connects to the HANA database regardless of the underlying infrastructure, providing OS-level metrics if the HANA system is configured to expose them (e.g., via hdbsql or system views). Azure Nuance: While Azure-specific metrics (e.g., host-level details) might require additional tools, HANA Studio can monitor HANA’s perspective of CPU, memory, network, and storage usage. Answer: Yes Why Correct: HANA Studio is a standard tool for monitoring HANA performance metrics, applicable to Large Instances, as per SAP documentation (SAP Note 1999997). Statement 2: Azure Extension for SAP is required to monitor the performance of SAP HANA on Azure (Large Instances). Analysis: Azure Extension for SAP: The Azure Enhanced Monitoring (AEM) Extension is a VM agent that collects OS and application metrics (e.g., CPU, memory) and feeds them into Azure Monitor or SAP systems. It’s typically used on Azure VMs. Large Instances Context: SAP HANA on Azure (Large Instances) is a bare-metal offering managed by Microsoft, not a VM. The AEM Extension is designed for Azure VMs (e.g., M-Series), not Large Instances. Microsoft provides built-in monitoring for Large Instances via the Azure portal and SAP tools, without requiring this extension. Monitoring: Performance monitoring for Large Instances uses SAP tools (e.g., HANA Studio, Cockpit) and Azure-provided telemetry, not the AEM Extension. Answer: No Why Correct: The AEM Extension isn’t applicable or required for Large Instances, per Microsoft’s SAP HANA on Azure (Large Instances) documentation. Statement 3: You can use the SAP HANA HW Configuration Check Tool (HWCCT) to monitor SAP HANA running on SAP HANA on Azure (Large Instances). Analysis: HWCCT: The SAP HANA Hardware Configuration Check Tool validates hardware performance (e.g., CPU, memory, storage IOPS) against SAP HANA certification requirements. It’s a benchmarking tool, not a real-time monitoring solution. Large Instances Context: HWCCT is used during deployment or validation to ensure the bare-metal server meets HANA standards, not for ongoing monitoring. For Large Instances, Microsoft pre-certifies the hardware, but customers can run HWCCT to verify. Monitoring Misalignment: “Monitor” implies continuous observation (e.g., CPU usage over time), which HWCCT doesn’t do—it’s a one-time test. Answer: No Why Correct: HWCCT is for validation, not monitoring, per SAP Note 1943937 and Azure Large Instances guidance.
39
You plan to deploy SAP application servers that run Windows Server 2016. You need to use PowerShell Desired State Configuration (DSC) to configure the SAP application servers once the servers are deployed. Which Azure virtual machine extension should you install on the servers?
A. the Azure DSC VM Extension
B. the Azure virtual machine extension
C. the Azure Chef extension
D. the Azure Enhanced Monitoring Extension for SAP
Final Answer
A. the Azure DSC VM Extension
Why "Azure DSC VM Extension" Is Correct
Direct DSC Support: It's purpose-built to execute PowerShell DSC configurations on Azure VMs, matching the requirement exactly.
SAP Applicability: Enables automated, repeatable configuration of SAP app servers on Windows Server 2016 (e.g., installing SAP software, setting environment variables), a practical need in SAP-on-Azure deployments.
Cost and Simplicity: Native to Azure, free to use (beyond VM costs), and integrates seamlessly with PowerShell, a common admin tool.
AZ-120 Relevance: Reflects the exam's focus on Azure extensions for SAP management, with DSC being a key configuration tool for Windows-based SAP systems.
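A hedged example of deploying the extension with Azure PowerShell (the storage account, archive, and configuration names are hypothetical; the DSC script is packaged into a .zip in blob storage beforehand, e.g., with Publish-AzVMDscConfiguration):
# Apply the SapAppServerConfig DSC configuration to the VM via the DSC extension
Set-AzVMDscExtension -ResourceGroupName "SAPRG" -VMName "SAPAPP1" `
  -ArchiveStorageAccountName "sapdscconfigs" -ArchiveBlobName "SapAppServer.ps1.zip" `
  -ConfigurationName "SapAppServerConfig" -Version "2.80"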
40
You deploy an SAP environment on Azure by following the SAP workload on Azure planning and deployment checklist. You need to verify whether Azure Diagnostics is enabled. Which cmdlet should you run?
A. Get-AzureVMAvailableExtension
B. Get-AzVmDiagnosticsExtension
C. Test-AzDeployment
D. Test-VMConfigForSAP
Final Answer
B. Get-AzVmDiagnosticsExtension
Why Get-AzVmDiagnosticsExtension Is Correct
Functionality: Get-AzVmDiagnosticsExtension is a PowerShell cmdlet in the Az module (Azure Resource Manager model) that retrieves the configuration and status of the Azure Diagnostics extension on a specified Azure VM. It returns details such as whether the extension is installed, enabled, and properly configured (e.g., with a storage account or metrics sink).
Usage: You run it with parameters such as -ResourceGroupName and -VMName to target the SAP VMs, for example:
Get-AzVmDiagnosticsExtension -ResourceGroupName "SAPRG" -VMName "SAPVM1"
This confirms whether the extension is active, meeting the goal of verification.
SAP on Azure: The checklist emphasizes enabling diagnostics for monitoring, and this cmdlet directly validates that step post-deployment.
Modern Azure: The Az module is the current standard for Azure PowerShell (replacing AzureRM), aligning with contemporary SAP deployments on Azure.
41
DRAG DROP - You need to connect SAP HANA on Azure (Large Instances) to an Azure Log Analytics workspace. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Actions
- Install the Azure Enhanced Monitoring Extension for SAP on SAP HANA on Azure (Large Instances).
- On the gateway, run Import-Module OMSGateway and Add-OMSGatewayAllowedHost.
- Configure a Log Analytics gateway on the virtual network that has connectivity to the SAP HANA on Azure (Large Instances) instance.
- Install the Log Analytics agent on the SAP HANA on Azure (Large Instances) instance.
- Configure a Log Analytics gateway server as a proxy for the Log Analytics agent on SAP HANA on Azure (Large Instances).
Answer Area
Answer Area:
1. Configure a Log Analytics gateway on the virtual network that has connectivity to the SAP HANA on Azure (Large Instances) instance.
2. Install the Log Analytics agent on the SAP HANA on Azure (Large Instances) instance.
3. Configure a Log Analytics gateway server as a proxy for the Log Analytics agent on SAP HANA on Azure (Large Instances).
4. On the gateway, run Import-Module OMSGateway and Add-OMSGatewayAllowedHost.
Correct Sequence:
Configure a Log Analytics gateway on the virtual network that has connectivity to the SAP HANA on Azure (Large Instances) instance.
Why First: The gateway must be established first in the Azure VNet (connected via ExpressRoute) to enable communication between the isolated Large Instances and Log Analytics. This involves deploying a VM with the Log Analytics Gateway role.
Details: Requires VNet connectivity to the Large Instances stamp, typically via a subnet with ExpressRoute.
Install the Log Analytics agent on the SAP HANA on Azure (Large Instances) instance.
Why Second: The agent (MMA) must be installed on the HANA server to collect logs and metrics. This step comes after the gateway is in place, as the agent will need to communicate through it.
Details: Download and install the Linux MMA on the SLES/RHEL OS of the Large Instances.
Configure a Log Analytics gateway server as a proxy for the Log Analytics agent on SAP HANA on Azure (Large Instances).
Why Third: After installing the agent, configure it to use the gateway as its proxy. This involves setting the gateway's IP/FQDN in the agent's configuration (e.g., /etc/opt/microsoft/omsagent/proxy.conf).
Details: Ensures the agent sends data to the gateway instead of directly to Log Analytics.
On the gateway, run Import-Module OMSGateway and Add-OMSGatewayAllowedHost.
Why Fourth: Finalize the gateway setup by running PowerShell commands on the gateway VM to allow the Large Instances host, ensuring secure data flow to Log Analytics.
Details: Imports the OMS Gateway module and adds the HANA server's hostname/IP as an allowed host.
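For the final step, the gateway-side commands would look roughly like this (the HANA host FQDN is hypothetical, and the trailing service restart is an assumption based on typical OMS Gateway setup):
# Run on the Log Analytics gateway server
Import-Module OMSGateway
Add-OMSGatewayAllowedHost -Host hli-hana1.contoso.com -Force
# Restart the gateway service so the new allowed host takes effect
Restart-Service OMSGatewayService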
42
DRAG DROP - You plan to deploy multiple SAP HANA virtual machines to Azure by using an Azure Resource Manager template. How should you configure Accelerated Networking and Write Accelerator in the template? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Values -------- "false" "none" "true" Answer Area ------------ { "apiVersion": "2017-06-01", "type": "Microsoft.Network/networkInterfaces", "name": "[concat(parameters('vmName'), '-static')]", "location": "[resourceGroup().location]", "properties": { "enableAcceleratedNetworking": Value, "ipConfigurations": [ { "name": "ipconfig1", "properties": { "privateIPAllocationMethod": "Static", "privateIPAddress": "[parameters('StaticIP')]", "subnet": { "id": "[variables('subnetRef')]" } } } ] } }, { "apiVersion": "2014-12-01", "type": "Microsoft.Compute/virtualMachines", "name": "[parameters('vmName')]", "location": "[resourceGroup().location]", "dependsOn": [ "[resourceId('Microsoft.Compute/availabilitySets',parameters('AvailSetName'))]" ], "properties": { "availabilitySet": { "id": "[resourceId('Microsoft.Compute/availabilitySets',parameters('AvailSetName'))]" }, "hardwareProfile": { "vmSize": "Standard_M64ms" }, "osProfile": { "computerName": "[parameters('vmName')]", "adminUsername": "[parameters('vmUserName')]", "adminPassword": "[parameters('vmPassword')]" }, "storageProfile": { "imageReference": { "publisher": "RedHat", "offer": "RHEL-SAP-HANA", "sku": "7.2", "version": "latest" }, "osDisk": { "createOption": "FromImage" }, "dataDisks": [ { "lun": 7, "name": "[concat(parameters('vmName'), '-log')]", "createOption": "Empty", "writeAcceleratorEnabled": Value, "diskSizeGB": 2048, "managedDisk": { "storageAccountType": "Premium_LRS" } } ] }, "networkProfile": { "networkInterfaces": [ { "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(parameters('vmName'), '-static'))]" } ] } } }
Final Answer Accelerated Networking: "enableAcceleratedNetworking": "true" Write Accelerator: "writeAcceleratorEnabled": "true" Accelerated Networking Location in Template: Microsoft.Network/networkInterfaces → "enableAcceleratedNetworking" Options: "true", "false", "none" Why "true" is Correct: SAP HANA Requirement: SAP HANA workloads on Azure benefit significantly from low-latency networking, especially for communication between app servers and the database. Accelerated Networking is recommended by Microsoft for HANA VMs to minimize network latency and jitter. VM Support: The Standard_M64ms (M-series) supports Accelerated Networking, as do most HANA-certified VMs. OS Support: RHEL for SAP HANA (specified in imageReference) is compatible with Accelerated Networking. AZ-120 Relevance: Enabling Accelerated Networking is a best practice for SAP HANA deployments in Azure, often tested in the exam for performance optimization. Write Accelerator Location in Template: Microsoft.Compute/virtualMachines → storageProfile → dataDisks → "writeAcceleratorEnabled" Options: "true", "false", "none" Why "true" is Correct: SAP HANA Log Disk: The disk is named -log (LUN 7, 2048 GB, Premium_LRS), indicating it’s for HANA transaction logs. Write Accelerator is specifically designed to optimize write latency for log volumes in HANA deployments. VM and Disk Support: Write Accelerator is supported on M-series VMs (e.g., Standard_M64ms) with Premium SSDs, matching this configuration. Performance Benefit: Reduces latency for log writes, ensuring HANA’s durability and performance requirements (e.g., fast log commits) are met. AZ-120 Relevance: Enabling Write Accelerator for HANA log disks is a standard recommendation in Microsoft’s SAP-on-Azure documentation, frequently tested in the exam.
43
This question requires that you evaluate the underlined text to determine if it is correct. You have an Azure resource group that contains the virtual machines for an SAP environment. You must be assigned the Contributor role to grant permissions to the resource group.
Instructions: Review the underlined text. If it makes the statement correct, select "No change is needed". If the statement is incorrect, select the answer choice that makes the statement correct.
A. No change is needed
B. User Access Administrator
C. Managed Identity Contributor
D. Security Admin
Final Answer
B. User Access Administrator
Why "User Access Administrator" Is Correct
User Access Administrator Role: This role grants permissions to manage access to Azure resources at a specified scope (e.g., resource group, subscription). It includes the Microsoft.Authorization/* actions, allowing the user to assign RBAC roles (e.g., Contributor, Reader) to others.
Scope: If assigned at the resource group level, it enables you to grant permissions to that resource group, meeting the statement's intent.
Corrected Statement: "You must be assigned the User Access Administrator role to grant permissions to the resource group" makes the statement accurate.
SAP Relevance: In an SAP on Azure deployment, managing access to resource groups (e.g., for admins or service principals) is a common task, and User Access Administrator is the appropriate role for this purpose.
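As an illustration, assigning the role at resource-group scope with the Azure CLI (the user and resource-group names are hypothetical):
# Grant User Access Administrator at the scope of the SAP resource group
az role assignment create --assignee "sapadmin@contoso.com" \
  --role "User Access Administrator" --resource-group SAPRG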
44
HOTSPOT - Your on-premises network contains SAP and non-SAP applications. You have JAVA-based SAP systems that use SPNEGO for single-sign on (SSO) authentication. Your external portal uses multi-factor authentication (MFA) to authenticate users. You plan to extend the on-premises authentication features to Azure and to migrate the SAP applications to Azure. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area ------------ Statements 1. Azure Active Directory (Azure AD) pass-through authentication can be used to enable MFA for on-premises users. [ ] Yes [ ] No 2. Azure Active Directory (Azure AD) password hash synchronization ensures that users can use their on-premise credentials to authenticate to ABAP-based SAP systems on Azure. [ ] Yes [ ] No 3. Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users. [ ] Yes [ ] No
Correct Answers: Azure Active Directory (Azure AD) pass-through authentication can be used to enable MFA for on-premises users: Yes Azure Active Directory (Azure AD) password hash synchronization ensures that users can use their on-premise credentials to authenticate to ABAP-based SAP systems on Azure: No Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users: Yes Statement 1: Azure Active Directory (Azure AD) pass-through authentication can be used to enable MFA for on-premises users. Analysis: Azure AD Pass-Through Authentication (PTA): PTA allows users to sign in to Azure AD-integrated apps using their on-premises AD credentials. Authentication requests are passed to on-premises domain controllers via an agent, without storing password hashes in Azure AD. MFA Capability: Azure AD supports MFA (e.g., via Microsoft Authenticator) for cloud-based authentication. With PTA, MFA can be enforced in Azure AD for users authenticating to Azure resources, even if their credentials are validated on-premises. On-Premises Users Context: The statement implies enabling MFA for on-premises users accessing Azure resources post-migration. PTA supports this by integrating on-premises credentials with Azure AD’s MFA policies. SAP Nuance: JAVA-based SAP systems with SPNEGO (Kerberos-based SSO) don’t directly use PTA, but the external portal (with MFA) could leverage it. Answer: Yes Why Correct: PTA enables on-premises credentials to work with Azure AD, where MFA can be applied for users accessing Azure-hosted apps, aligning with extending authentication features. Statement 2: Azure Active Directory (Azure AD) password hash synchronization ensures that users can use their on-premise credentials to authenticate to ABAP-based SAP systems on Azure. Analysis: Azure AD Password Hash Synchronization (PHS): PHS syncs hashed passwords from on-premises AD to Azure AD via Azure AD Connect, allowing users to use the same credentials for cloud apps. ABAP-Based SAP Systems: ABAP systems typically use SAP GUI or web interfaces and support SSO via Kerberos (SPNEGO) or SAML, not direct password-based authentication against Azure AD. PHS enables credential sync to Azure AD, but ABAP systems on Azure usually integrate with AD via domain joining or SAML, not Azure AD’s password auth directly. Post-Migration: Migrated ABAP systems would likely use on-premises AD (via ExpressRoute) or SAML with Azure AD, not PHS alone. PHS doesn’t “ensure” ABAP authentication, as it’s not the primary mechanism. Answer: No Why Correct: PHS supports cloud app auth, but ABAP systems rely on AD or SAML SSO, not Azure AD password auth, making this statement inaccurate for SAP context. Statement 3: Active Directory Federation Services (AD FS) can be used to enable MFA for on-premises users. Analysis: AD FS: ADFS provides federated SSO between on-premises AD and Azure AD, allowing users to authenticate with on-premises credentials. It supports MFA natively (e.g., via certificates, Azure MFA integration). On-Premises Users Context: ADFS can enforce MFA for on-premises users accessing federated apps (e.g., Azure resources, external portal) by adding MFA rules at the ADFS level. SAP and Portal Fit: Post-migration, ADFS could enable MFA for the external portal (already using MFA) and potentially for JAVA-based SAP systems (via SAML), extending on-premises auth features. Answer: Yes Why Correct: ADFS supports MFA for on-premises users accessing federated services, a valid approach for extending authentication to Azure.
45
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area ------------ Statements 1. Azure AD Connect is required to sign into Linux virtual machines hosted in Azure. [ ] Yes [ ] No 2. An SAP application server that runs on a Linux virtual machine in Azure must be joined to Active Directory. [ ] Yes [ ] No 3. Before you can sign into an SAP application server that runs on a Linux virtual machine in Azure, you must create a Managed Service Identity (MSI). [ ] Yes [ ] No
Final Answer Statements: Azure AD Connect is required to sign into Linux virtual machines hosted in Azure: No An SAP application server that runs on a Linux virtual machine in Azure must be joined to Active Directory: No Before you can sign into an SAP application server that runs on a Linux virtual machine in Azure, you must create a Managed Service Identity (MSI): No 1. Azure AD Connect is required to sign into Linux virtual machines hosted in Azure: No What is Azure AD Connect? Azure AD Connect is a tool that synchronizes on-premises Active Directory (AD) identities to Azure Active Directory (Azure AD) for hybrid identity management, enabling single sign-on (SSO) to Azure services. Context: Signing into a Linux VM in Azure typically involves SSH (Secure Shell) using a username/password or SSH key pair, configured during VM creation. Why No: Authentication Mechanism: Azure VMs (Linux or Windows) don’t require Azure AD Connect for direct sign-in. SSH access to Linux VMs uses local credentials or keys, not Azure AD credentials by default. Azure AD Role: While Azure AD can enable SSO for applications or integrate with Azure AD Domain Services (AAD DS) for domain-joined VMs, Azure AD Connect isn’t a prerequisite for basic VM login. Linux Specifics: Linux VMs can integrate with AD (e.g., via SSSD or Kerberos), but this is optional and unrelated to Azure AD Connect, which syncs identities to the cloud, not to VM login directly. AZ-120 Relevance: The exam tests Azure AD integration with SAP, but VM sign-in is a distinct process, and Azure AD Connect isn’t required for Linux VM access. 2. An SAP application server that runs on a Linux virtual machine in Azure must be joined to Active Directory: No Context: SAP application servers (e.g., NetWeaver ABAP/Java) on Linux VMs handle SAP workloads and may need user authentication for SAP clients or admin access. Active Directory Integration: Joining a Linux VM to AD (e.g., using tools like SSSD, Samba, or Realm) allows centralized user management and authentication (e.g., LDAP, Kerberos). Why No: Not Mandatory: SAP application servers don’t inherently require AD integration to function. They can operate with local users, SAP-specific authentication (e.g., SAP Logon), or alternative identity providers (e.g., SAML with Azure AD). SAP Authentication: SAP systems typically use their own user management (e.g., SU01) or integrate with external IdPs (e.g., Azure AD via SAML) without needing the VM to be domain-joined. Flexibility: While AD integration is common for enterprise SAP environments (e.g., for SSO with Windows-based SAP GUI clients), it’s not a strict requirement—other methods (e.g., local auth, SAP HANA DB auth) suffice. AZ-120 Relevance: The exam focuses on SAP flexibility in Azure; AD joining is optional, not mandatory, for Linux-based SAP app servers. 3. Before you can sign into an SAP application server that runs on a Linux virtual machine in Azure, you must create a Managed Service Identity (MSI): No What is Managed Service Identity (MSI)? MSI (now called Managed Identities) is an Azure feature that provides an automatically managed identity in Azure AD for Azure resources (e.g., VMs) to authenticate to Azure services (e.g., Key Vault, Storage) without credentials. Context: “Sign into” here likely refers to admin access to the VM (e.g., SSH) or SAP application access (e.g., SAP GUI, web). Why No: VM Sign-In: SSH access to a Linux VM uses local credentials or keys, not MSI. 
MSI is for resource-to-resource authentication (e.g., VM accessing Blob Storage), not human login to the VM. SAP Application Sign-In: SAP app server access (e.g., via SAP Logon) uses SAP user credentials or SSO (e.g., via Azure AD), not MSI. MSI isn’t involved in user authentication to SAP. Purpose Mismatch: MSI enables the VM to authenticate to Azure services, not users signing into the VM or SAP application. It’s optional and unrelated to the sign-in process. AZ-120 Relevance: The exam tests Managed Identities for SAP integration with Azure services (e.g., backups, monitoring), but not for VM or SAP app login.
46
HOTSPOT - You are integrating SAP HANA and Azure Active Directory (Azure AD). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area ------------ Statements 1. SAP HANA supports SAML authentication for single-sign on (SSO). [ ] Yes [ ] No 2. SAP HANA supports OAuth2 authentication for single-sign on (SSO). [ ] Yes [ ] No 3. You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA. [ ] Yes [ ] No
Final Answer 1. SAP HANA supports SAML authentication for single-sign on (SSO): Yes 2. SAP HANA supports OAuth2 authentication for single-sign on (SSO): No 3. You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA: No 1. SAP HANA supports SAML authentication for single-sign on (SSO) Answer: Yes Why it’s correct: SAML Support: SAP HANA supports SAML (Security Assertion Markup Language) for single sign-on (SSO). Since SAP HANA 1.0 SPS 10, it has included SAML 2.0 capabilities, allowing it to act as a service provider (SP) that integrates with an identity provider (IdP) like Azure AD. Mechanism: SAML enables users to authenticate to SAP HANA using credentials managed by Azure AD, eliminating the need for separate logins. This is configured via SAP HANA’s security settings (e.g., in HANA Cockpit or XS Admin tools) by exchanging metadata with Azure AD (set up as an enterprise application). Azure AD Context: Azure AD supports SAML 2.0, making it a compatible IdP for SAP HANA SSO, a common integration scenario for SAP workloads on Azure. AZ-120 Relevance: SAML-based SSO is a standard and supported method for integrating SAP HANA with Azure AD, making this statement true. 2. SAP HANA supports OAuth2 authentication for single-sign on (SSO) Answer: No Why it’s incorrect: OAuth2 Scope: OAuth2 is an authorization framework primarily used for token-based access to APIs and resources, not for traditional SSO to a database like SAP HANA. While SAP HANA supports OAuth for specific use cases (e.g., securing REST APIs or XS applications in SAP HANA Extended Application Services), this is not equivalent to SSO for user authentication to the HANA database itself (e.g., via HANA Studio, JDBC, or ODBC). SAP HANA SSO: For SSO, SAP HANA relies on protocols like SAML, Kerberos, or X.509 certificates, not OAuth2. OAuth2 tokens are application-centric and not designed for direct database user logins. Azure AD Context: Although Azure AD supports OAuth2 for many applications, SAP HANA’s SSO integration with Azure AD uses SAML, not OAuth2. AZ-120 Relevance: OAuth2 isn’t a native SSO method for SAP HANA database access, making this statement false. 3. You can use Azure role-based access control (RBAC) to provide users with the ability to sign in to SAP HANA Answer: No Why it’s incorrect: RBAC Purpose: Azure RBAC (Role-Based Access Control) is an Azure authorization system that manages permissions to Azure resources (e.g., VMs, Key Vaults) at the management plane. It assigns roles like “Contributor” or “Reader” to control what users or services can do with Azure resources. SAP HANA Authentication: Signing in to SAP HANA requires database-level authentication (e.g., via SAML, Kerberos, or username/password), managed within SAP HANA’s own security model (e.g., HANA users and roles). RBAC operates at the Azure infrastructure level and does not integrate with SAP HANA’s authentication system to enable sign-ins. Distinction: While RBAC can grant a user permission to manage the Azure VM hosting SAP HANA (e.g., start/stop), it doesn’t provide the ability to log in to the HANA database itself. That requires Azure AD SSO (e.g., via SAML) or HANA-specific credentials, not RBAC. AZ-120 Relevance: RBAC is critical for Azure resource management in SAP deployments, but it doesn’t control SAP HANA sign-ins, making this statement false.
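A minimal sketch of the HANA side of SAML SSO, using hypothetical names; the identity provider values (subject, issuer) come from the Azure AD enterprise application's federation metadata:
-- Register Azure AD as a SAML identity provider in SAP HANA
CREATE SAML PROVIDER AZURE_AD WITH SUBJECT 'CN=AzureAD'
  ISSUER 'https://sts.windows.net/<tenant-id>/';
-- Map a HANA database user to the external Azure AD identity
CREATE USER SAPUSER1 WITH IDENTITY 'user@contoso.com' FOR SAML PROVIDER AZURE_AD;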
47
HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Answer Area ------------ Statements 1. The Azure Enhanced Monitoring Extension for SAP stores performance data in an Azure Storage account. [ ] Yes [ ] No 2. You can enable the Azure Enhanced Monitoring Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet. [ ] Yes [ ] No 3. You can enable the Azure Enhanced Monitoring Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet. [ ] Yes [ ] No
Correct Answers:
The Azure Enhanced Monitoring Extension for SAP stores performance data in an Azure Storage account: Yes
You can enable the Azure Enhanced Monitoring Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet: Yes
You can enable the Azure Enhanced Monitoring Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet: Yes
Statement 1: The Azure Enhanced Monitoring Extension for SAP stores performance data in an Azure Storage account.
Analysis:
Azure Enhanced Monitoring Extension (AEM): This extension enhances monitoring for SAP workloads on Azure VMs by collecting OS and SAP-specific metrics (e.g., CPU, memory, network). It integrates with the Azure Diagnostics Extension (WAD/LAD) on VMs.
Storage Mechanism: The AEM Extension feeds data to SAP systems (e.g., via the SAP OS collector saposcol under /usr/sap/hostctrl/exe), but the underlying Diagnostics Extension stores collected performance data in an Azure Storage account (specified during configuration).
SAP on Azure: Microsoft documentation confirms that VM diagnostics data, including AEM metrics, is persisted to Azure Storage for analysis or integration with tools like Azure Monitor.
Answer: Yes
Why Correct: AEM leverages the Diagnostics Extension, which writes performance data to an Azure Storage account, a standard behavior for VM monitoring on Azure.
Statement 2: You can enable the Azure Enhanced Monitoring Extension for SAP on a SUSE Linux Enterprise Server 12 (SLES 12) server by running the Set-AzVMAEMExtension cmdlet.
Analysis:
AEM Extension on SLES: SLES 12 is a supported OS for SAP workloads on Azure (SAP Note 1984787). The AEM Extension is compatible with Linux, including SLES, to provide SAP-specific monitoring.
Set-AzVMAEMExtension Cmdlet: This PowerShell cmdlet deploys and configures the AEM Extension on Azure VMs. It supports both Windows and Linux VMs by specifying the -OSType parameter (e.g., Linux for SLES).
Execution: Running Set-AzVMAEMExtension on a SLES 12 VM installs the extension (Linux version: AzureEnhancedMonitoring) and enables monitoring, assuming prerequisites (e.g., the VM agent) are met.
Answer: Yes
Why Correct: The cmdlet is designed to enable AEM on supported Linux OSes like SLES 12, a key configuration step for SAP on Azure VMs.
Statement 3: You can enable the Azure Enhanced Monitoring Extension for SAP on a server that runs Windows Server 2016 by running the Set-AzVMAEMExtension cmdlet.
Analysis:
AEM Extension on Windows: Windows Server 2016 is supported for SAP workloads (SAP Note 1928533), and the AEM Extension works on Windows VMs to collect SAP metrics.
Set-AzVMAEMExtension Cmdlet: The cmdlet supports Windows VMs (e.g., -OSType Windows), installing the Windows version of the AEM Extension (AzureEnhancedMonitoringforSAP).
Execution: Running this on a Windows Server 2016 VM enables the extension, integrating with the SAP Host Agent and Azure diagnostics.
Answer: Yes
Why Correct: The cmdlet is compatible with Windows Server 2016, enabling AEM for SAP monitoring, as per Azure SAP documentation.
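A hedged example of enabling the extension with the cmdlet named in the statements (resource and storage account names are hypothetical; -OSType selects the Linux or Windows variant):
# SLES 12 VM
Set-AzVMAEMExtension -ResourceGroupName "SAPRG" -VMName "sles12-hana1" `
  -WADStorageAccountName "sapdiagstore" -OSType Linux
# Windows Server 2016 VM
Set-AzVMAEMExtension -ResourceGroupName "SAPRG" -VMName "win2016-app1" `
  -WADStorageAccountName "sapdiagstore" -OSType Windows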
48
You have an on-premises SAP landscape that contains an IBM DB2 database. You need to recommend a solution to migrate the landscape to Azure and the database to SAP HANA. The solution must meet the following requirements:
* Be supported by SAP.
* Minimize downtime.
What should you include in the recommendation?
A. SAP Database Migration Option (DMO) with System Move
B. Azure Database Migration Service
C. Azure Import/Export service
D. Azure Data Box Gateway
Final Answer What should you include in the recommendation? A. SAP Database Migration Option (DMO) with System Move A. SAP Database Migration Option (DMO) with System Move What it is: DMO is a feature of SAP’s Software Update Manager (SUM) that migrates and upgrades SAP systems, including converting the database to SAP HANA. “System Move” extends DMO by allowing the SAP system to be moved to a different host (e.g., from on-premises to Azure) during the migration. How it Works: Performs a combined database migration (DB2 to HANA) and system move to Azure. Uses export/import of database content with optimization techniques (e.g., downtime-optimized DMO) to minimize outage. Typically involves staging the HANA database in Azure, exporting DB2 data, transferring it, and importing it into HANA. Why Correct: SAP Support: DMO is an SAP-provided, certified tool for database migrations to HANA, fully supported for DB2-to-HANA conversions. Minimize Downtime: Offers options like downtime-optimized DMO, which pre-processes data during uptime, reducing the offline window to hours or less (e.g., by using R3load parallelism and delta replication). Azure Fit: System Move supports moving the SAP system to Azure VMs or HANA Large Instances, aligning with SAP-on-Azure deployments.
49
You have an on-premises SAP landscape that contains a 20-TB IBM DB2 database. The database contains large tables that are optimized for read operations via secondary indexes. You plan to migrate the database platform to SQL Server on Azure virtual machines. You need to recommend a database migration approach that minimizes the time of the export stage. What should you recommend?
A. log shipping
B. deleting secondary indexes
C. SAP Database Migration Option (DMO) in parallel transfer mode
D. table splitting
Final Answer C. SAP Database Migration Option (DMO) in parallel transfer mode Why SAP DMO in Parallel Transfer Mode Is Correct SAP Database Migration Option (DMO): DMO is an SAP tool integrated into the Software Update Manager (SUM) that facilitates database migrations, including heterogeneous migrations (e.g., DB2 to SQL Server). It automates the export of the source database and import into the target, often with downtime minimization features. Parallel Transfer Mode: DMO supports parallel processing, where multiple R3load processes run concurrently to export data from the source database. This parallelism significantly reduces the export time by distributing the workload across multiple threads or processes. Impact on Export Stage: For a 20-TB database with large tables, parallel transfer mode leverages available CPU and I/O resources to extract data faster. The secondary indexes, while optimized for reads, don’t directly hinder the export, but parallel processing ensures large tables are handled efficiently. DMO generates export files (e.g., in a compressed format) that are then transferred to Azure and imported into SQL Server, with the export stage optimized by parallelism. SAP on Azure: DMO is a recommended tool by SAP and Microsoft for migrating SAP databases to Azure, especially for large datasets, as it balances speed and reliability. Time Minimization: Compared to sequential exports, parallel mode can cut export time significantly (e.g., by a factor of the number of parallel processes), directly meeting the goal.
50
You have an on-premises third-party enterprise resource planning (ERP) system that uses Microsoft SQL Server 2016. You plan to migrate the ERP system to SAP Business Suite on SAP HANA on Azure virtual machines. You need to identify the appropriate sizing for Business Suite on HANA. What should you use?
A. SAP Quick Sizer for HANA Cloud
B. HANA Cockpit
C. SAP Quick Sizer for HANA
D. SAP Cloud Platform Cockpit
Correct Answer: C. SAP Quick Sizer for HANA Why This Answer Is Correct for AZ-120: Sizing Purpose: The AZ-120 exam tests planning SAP workloads on Azure, including sizing for HANA-based systems. SAP Quick Sizer for HANA is the standard tool for estimating resource needs (e.g., RAM for HANA’s in-memory processing, storage for data/logs) when migrating to Business Suite on HANA, regardless of the source system (here, a third-party ERP on SQL Server). Migration Context: Moving from a non-SAP ERP to SAP Business Suite on HANA requires defining a new system’s workload (e.g., users, transactions). Quick Sizer takes inputs like expected usage and data volume (e.g., inferred from the SQL Server system) to recommend Azure VM SKUs (e.g., M-Series). Azure Integration: SAP and Microsoft endorse Quick Sizer for Azure deployments, mapping results to certified VM types (SAP Note 2529073, Azure SAP guides). Why Not the Others? HANA Cloud (A): For SAP’s managed HANA service, not Azure VMs. HANA Cockpit (B): For post-deployment management, not sizing. SAP Cloud Platform Cockpit (D): For SAP BTP, not Azure HANA sizing.
51
You plan to migrate an on-premises SAP development system to Azure. Before the migration, you need to check the usage of the source system hardware, such as CPU, memory, and network. Which transaction should you run from SAP GUI?
A. SM51
B. DB01
C. DB12
D. ST06
Final Answer
D. ST06
Why Correct:
Hardware Metrics: ST06 (the operating system monitor) provides exactly what's needed: CPU load (e.g., % usage), physical/virtual memory usage, network throughput (e.g., bytes in/out), and more.
Pre-Migration Use: Essential for assessing the source system's resource utilization to size the target Azure environment (e.g., VM type such as M-series, network bandwidth).
Granularity: Offers current and historical data (e.g., the last 24 hours), aiding capacity planning.
SAP Standard: A widely used SAP transaction for system monitoring, applicable to any SAP system (development, production, etc.).
52
HOTSPOT - You plan to deploy a scale-out SAP HANA deployment on Azure virtual machine that will contain a standby node. You need to recommend a storage solution for the deployment. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area ------------ Global transport directory: [ ▼ ] Azure shared disks Azure NetApp Files Azure Premium Files HANA database and log: [ ▼ ] Azure shared disks Azure NetApp Files Azure Premium Files
Final Answer Global transport directory: Azure NetApp Files HANA database and log: Azure NetApp Files Global transport directory: Azure NetApp Files Why it’s correct: Requirement: The global transport directory (/hana/shared) must be a shared file system accessible by all HANA nodes, including the standby, in a scale-out deployment. It stores executables, configuration files, and trace files, requiring high availability and low latency. Azure NetApp Files (ANF): ANF is an enterprise-grade, NFS-based file storage service in Azure, fully supported for SAP HANA deployments. It provides a shared NFS mount point (e.g., NFS 4.1) that all VMs can access, making it ideal for /hana/shared in scale-out scenarios. Certified by SAP for HANA, ANF meets the performance and reliability needs (e.g., low latency, high throughput) for shared directories. In scale-out with a standby node, ANF ensures the standby can seamlessly access the shared directory during failover. HANA database and log: Azure NetApp Files Why it’s correct: Requirement: The HANA database (/hana/data) and log (/hana/log) volumes require high-performance storage with low latency and high IOPS, especially for write-intensive logs and read/write data operations. In a scale-out deployment, each node typically has its own local data and log volumes, but the standby node must support rapid failover. Azure NetApp Files (ANF): ANF provides NFS-based storage with ultra-low latency and high throughput, meeting SAP HANA’s stringent performance requirements (e.g., 250 MB/s write throughput for logs). Certified by SAP for /hana/data and /hana/log on Azure VMs, ANF supports the IOPS and bandwidth needed for HANA workloads. In scale-out, each active node uses ANF volumes for its local data and log, while the standby node can leverage ANF’s snapshot and replication features for HA, ensuring quick recovery.
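As an illustration, an ANF volume is consumed over NFS; a sketch of the /etc/fstab entry for the shared directory (the IP and export path are hypothetical, and the mount options follow common ANF guidance for HANA):
# Azure NetApp Files NFSv4.1 mount for /hana/shared
10.0.8.4:/hana-shared  /hana/shared  nfs  rw,vers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime  0  0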
53
Your company has an on-premises SAP environment. Recently, the company split into two companies named Litware, Inc. and Contoso, Ltd. Litware retained the SAP environment. Litware plans to export data that is relevant only to Contoso. The export will be 1.5 TB. Contoso builds a new SAP environment on Azure.
You need to recommend a solution for Litware to make the data available to Contoso in Azure. The solution must meet the following requirements:
* Minimize the impact on the network.
* Minimize the administrative effort for Litware.
What should you include in the recommendation?
A
Azure Import/Export service
B
Azure Migrate
C
Azure Data Box
D
Azure Site Recovery
Correct Answer: C. Azure Data Box
Why This Answer Is Correct for AZ-120:
Minimize Network Impact: Azure Data Box transfers the 1.5 TB offline via a physical device, avoiding network congestion or slow uploads (e.g., an internet transfer at 100 Mbps would take roughly 34 hours). This meets the primary requirement.
Minimize Administrative Effort: Microsoft provides the Data Box Disk, handles shipping, and uploads the data to Azure Blob Storage. Litware's effort is limited to copying data to the device (e.g., using rsync or a GUI tool), far less than managing network transfers or replication.
SAP Context: Exporting 1.5 TB from an SAP system (e.g., via database export or file extract) to Azure Blob Storage is practical, and Contoso can then import it into its SAP environment (e.g., via HANA tools).
Why Not the Others?
Azure Import/Export (A): Similar but less streamlined; Litware must source and prepare its own disks, increasing effort versus Data Box's turnkey approach.
Azure Migrate (B): For workload migration, not data export; high network and effort impact.
Azure Site Recovery (D): Network-intensive and effort-heavy; meant for DR, not data sharing.
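As a rough sketch of Litware's copy step, assuming the Data Box share is already mounted at a hypothetical path such as /mnt/databox and the export sits in /sapexport:

# -a preserves permissions and timestamps; --progress is useful for a 1.5-TB copy.
rsync -a --progress /sapexport/ /mnt/databox/sap-export/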
54
HOTSPOT -
You have an existing on-premises SAP landscape that is hosted on VMware vSphere. You plan to migrate the landscape to Azure. You configure the Azure Site Recovery replication policy shown in the following exhibit.
Default Policy
Replication settings
Source type: VMware/Physical machines
Target type: Azure
RPO threshold: 60 Minutes
Recovery point retention: 24 Hours
App consistent snapshot frequency: 120 Minutes
Associated Configuration Servers
Name: Config01 (Association status: Associated)
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Answer Area
During the migration, you can fail over to a recovery point taken up to [Dropdown] ago.
60 minutes / 120 minutes / 24 hours / 0 minutes
After a planned failover, up to the last [Dropdown] of SAP data might be lost.
60 minutes / 120 minutes / 24 hours / 0 minutes
Final Answer
During the migration, you can fail over to a recovery point taken up to: 24 hours
After a planned failover, up to the last: 60 minutes of SAP data might be lost

1. During the migration, you can fail over to a recovery point taken up to [Dropdown] ago
Answer: 24 hours
Why it's correct:
Recovery Point Retention: The policy specifies a "Recovery point retention" of 24 hours. This setting determines how far back in time you can select a recovery point for failover. ASR retains recovery points (snapshots of replicated data) for the specified duration, allowing you to fail over to any point within that window.
Failover Context: During migration, ASR replicates the VMware-based SAP landscape to Azure. If you perform a failover (planned or unplanned), you can choose a recovery point from up to 24 hours ago, assuming replication has been running long enough to accumulate that history.
SAP Relevance: For SAP, this means you can roll back to a consistent state (crash-consistent or app-consistent) within the last 24 hours, depending on the snapshot type available.
Why the other options are incorrect:
60 minutes: This is the RPO threshold, not the retention period for recovery points.
120 minutes: This is the app-consistent snapshot frequency, not the maximum retention time.
0 minutes: Implies no recovery points are retained, which contradicts the 24-hour setting.

2. After a planned failover, up to the last [Dropdown] of SAP data might be lost
Answer: 60 minutes
Why it's correct:
RPO Threshold: The policy specifies an "RPO threshold" of 60 minutes. RPO (Recovery Point Objective) indicates the maximum potential data loss in time between the last replication and a failure (or failover). In ASR, data is replicated continuously for VMware sources, but the RPO threshold is the acceptable lag between source and target.
For a planned failover, you typically stop the source VMs, perform a final synchronization, and then fail over. If fully synchronized, data loss can be zero. However, the question implies the worst-case scenario under the policy's RPO threshold, which is 60 minutes if replication lags (e.g., due to network issues or an incomplete final sync).
SAP Context: For SAP data (e.g., database transactions), up to 60 minutes of changes could be lost if the last replication occurred 60 minutes before the failover and the final sync does not complete, a conservative interpretation.
ASR Behavior: While planned failovers aim for minimal data loss, the RPO threshold defines the policy's guarantee, making 60 minutes the correct answer in this context.
Why the other options are incorrect:
120 minutes: This is the app-consistent snapshot frequency, which determines how often application-consistent snapshots are taken (e.g., with SAP HANA quiesced), not the potential data-loss window.
24 hours: This is the recovery point retention, not the RPO or data-loss metric.
0 minutes: Possible in an ideal planned failover with a full final sync, but the RPO threshold of 60 minutes sets the policy's maximum loss expectation.
55
You have an on-premises deployment of SAP on DB2. You plan to migrate the deployment to Azure and Microsoft SQL Server 2017.
What should you use to migrate the deployment?
A
db2haicu
B
SQL Server Migration Assistant (SSMA)
C
DSN1COPY
D
Azure SQL Data Sync
Final Answer
B. SQL Server Migration Assistant (SSMA)
Why Correct: SSMA for DB2 automates the assessment and conversion of DB2 schemas and the migration of data to SQL Server, making it the appropriate Microsoft tool among the listed options for this platform change.
56
You have an on-premises SAP production landscape. You plan to migrate to SAP on Azure. You need to generate an SAP Early Watch Alert report.
What should you use?
A
Azure Advisor
B
SAP HANA Cockpit
C
SAP Software Provisioning Manager
D
SAP Solution Manager
Correct Answer: D. SAP Solution Manager
Why This Answer Is Correct for AZ-120:
EWA Report Purpose: The SAP EarlyWatch Alert report provides a comprehensive health check of an SAP system (e.g., performance, resource usage, potential issues), critical for pre-migration planning to Azure. It is generated from data collected by the SAP system and processed in Solution Manager.
SAP Solution Manager Role: SolMan is the only tool listed that generates EWA reports. It requires configuration (e.g., connecting the production system to SolMan via RFC) and regular execution (e.g., via transaction SOLMAN_WORKCENTER or SDCCN).
Migration Context: For AZ-120, generating an EWA report pre-migration helps size the Azure environment (e.g., VM type, storage) by analyzing the on-premises landscape's workload, a best practice per SAP Note 1928533 and the Azure SAP guides.
Why Not the Others?
Azure Advisor (A): An Azure-specific, post-deployment tool; irrelevant for on-premises SAP.
SAP HANA Cockpit (B): HANA-specific management, not EWA reporting.
SWPM (C): An installation tool, not for monitoring or reports.
Exam Relevance: AZ-120 tests SAP tools for migration planning, and SolMan with EWA is a standard approach for assessing on-premises SAP systems.
57
HOTSPOT -
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Answer Area
Statements (Yes / No)
Oracle Data Guard can be used to provide high availability of SAP databases on Azure.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016.
You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12).
Final Answer
Oracle Data Guard can be used to provide high availability of SAP databases on Azure: Yes
You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016: No
You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12): Yes

1. Oracle Data Guard can be used to provide high availability of SAP databases on Azure
Answer: Yes
Why it's correct:
Oracle Data Guard: This is Oracle's high-availability (HA) and disaster recovery (DR) solution, which provides data replication between a primary database and one or more standby databases (physical or logical). It supports features like real-time apply and automatic failover.
SAP on Azure: SAP supports Oracle as a database platform, and Oracle Data Guard is a certified HA solution for SAP workloads. On Azure, you can deploy Oracle Data Guard across virtual machines (VMs) in different Availability Zones or regions to ensure HA (e.g., a primary VM in one zone and a standby in another).
Azure Context: Microsoft's SAP on Azure documentation confirms that Oracle Data Guard is a viable HA strategy, leveraging Azure's infrastructure (e.g., VNets, Availability Zones) to meet SAP's uptime requirements.
AZ-120 Relevance: Using Oracle Data Guard for HA is a standard practice for SAP databases on Azure, making this statement true.

2. You can host SAP databases on Azure by using Oracle on a virtual machine that runs Windows Server 2016
Answer: No
Why it's incorrect:
SAP Certification: SAP certifies specific operating systems for running Oracle databases with SAP workloads. Historically, SAP supports Oracle on Unix-based systems (e.g., Solaris, AIX) and certain Linux distributions (e.g., SUSE, Red Hat), but Windows Server is not a certified OS for Oracle in SAP environments.
Oracle on Windows: While Oracle Database can technically run on Windows Server 2016 for non-SAP workloads, SAP's certification matrix (e.g., SAP Note 1565179) and Microsoft's SAP on Azure guidelines specify Linux (e.g., SLES, RHEL) as the supported OS for Oracle-based SAP databases on Azure VMs.
Azure Context: Azure supports Oracle VMs, but for SAP, the OS must align with SAP's support matrix, which excludes Windows Server 2016 for Oracle.
AZ-120 Relevance: Windows Server 2016 isn't a supported OS for Oracle in SAP deployments on Azure, making this statement false.

3. You can host SAP databases on Azure by using Oracle on a virtual machine that runs SUSE Linux Enterprise Server 12 (SLES 12)
Answer: Yes
Why it's correct:
SAP Certification: SUSE Linux Enterprise Server 12 (SLES 12) is a fully supported and certified operating system for running Oracle databases with SAP workloads, as per SAP's certification matrix and SUSE's partnership with SAP.
Azure Support: Azure offers SLES 12 images in the Marketplace, and Microsoft certifies SLES 12 for SAP HANA and other SAP databases (including Oracle) on Azure VMs. This includes support for HA configurations like Oracle Data Guard.
Practicality: Many SAP customers deploy Oracle on SLES 12 (or later versions like SLES 15) on Azure due to its stability, performance, and SAP-specific optimizations.
AZ-120 Relevance: SLES 12 is a standard and supported OS for Oracle-based SAP databases on Azure, making this statement true.
58
You migrate an on-premises instance of SAP HANA that runs SUSE Linux Enterprise Server (SLES) to an Azure virtual machine. You project that in two years, you will replace the virtual machine with a larger virtual machine within the same flexibility group.
You need to recommend solutions to minimize HANA deployment costs during the next three years. The solutions must not affect the availability SLAs.
Which two solutions should you recommend? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A
Azure Spot instance
B
a three-year reservation that has instance size flexibility
C
a one-year reservation that has capacity priority
D
Azure Hybrid Benefit
E
a one-year reservation that has instance size flexibility
Final Answer
B. A three-year reservation that has instance size flexibility
D. Azure Hybrid Benefit
Why "B" and "D" Are Correct:
B. Three-year reservation with instance size flexibility:
Complete Solution: Covers all three years with significant discounts (e.g., 60-72% off M-series VMs), supports the size change in year two (e.g., M64s to M128s) because both sizes fall within the same instance size flexibility group, and ensures capacity for SLAs.
SAP HANA Fit: Ideal for long-term SAP HANA deployments on Azure VMs (e.g., M-series), a common AZ-120 scenario.
D. Azure Hybrid Benefit:
Complete Solution: Reduces costs continuously over three years by leveraging existing SLES subscriptions (common for SAP HANA), stacking with reservations for maximum savings.
SAP HANA Fit: Supported for SLES-based HANA VMs, maintaining SLAs with no operational impact.
Combined Benefit: Together, they maximize savings (reservation discount + hybrid benefit) without affecting availability, addressing both the initial VM and its replacement.
AZ-120 Relevance: Reflects the exam's focus on cost-optimization strategies (reservations, hybrid benefits) for SAP workloads while ensuring HA/DR compliance.
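A minimal sketch of applying Azure Hybrid Benefit for Linux to an existing SLES VM with the Azure CLI, assuming the VM is covered by an eligible SLES subscription (the resource group and VM name are placeholders):

# Switch the VM's billing to bring-your-own SLES subscription.
az vm update --resource-group sap-rg --name hana-vm1 --license-type SLES_BYOS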
59
DRAG DROP -
You have an on-premises SAP landscape that uses a DB2 database and contains an SAP Financial Accounting (SAP FIN) deployment. The deployment contains a file share that stores 50 TB of bitmap files.
You plan to migrate the on-premises SAP landscape to SAP HANA on Azure (Large Instances) and Azure Files shares. The solution must meet the following requirements:
* Minimize downtime.
* Minimize administrative effort.
You need to recommend a migration solution. What should you recommend for each resource? To answer, drag the appropriate services to the correct resources. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Services
Azure Data Box Gateway
Azure Database Migration Service
Azure Migrate
Data Migration Assistant
SAP Database Migration Option (DMO) with System Move
Answer Area
Database: [________]
File share: [________]
Recommended Solution:
Database: SAP Database Migration Option (DMO) with System Move
File share: Azure Data Box Gateway
60
You have an on-premises SAP NetWeaver application server and SAP HANA database deployment. You plan to migrate the on-premises deployment to Azure. You provision new Azure virtual machines to host the application server and database roles. You need to initiate SAP Database Migration Option (DMO) with System Move.
On which server should you start Software Update Manager (SUM)?
A
the virtual machine that will host the application server
B
the virtual machine that will host the database
C
the on-premises database server
D
the on-premises application server
Final Answer
D. the on-premises application server
Why the On-Premises Application Server Is Correct:
SUM Execution: SUM is started on the Primary Application Server (PAS) of the SAP system because it manages the SAP instance and coordinates the migration process, including the export of application data and database content. In an on-premises SAP NetWeaver deployment, the application server (PAS) hosts the SAP instance (e.g., the ABAP stack) and is where SUM is installed and executed for DMO.
DMO with System Move Process: You start SUM on the on-premises application server to export the SAP system (including the database schema and content from the on-premises HANA server). The export files are then transferred to Azure (e.g., via Azure Data Box or network transfer). After the move, SUM is resumed on the target Azure application server to import the data into the new HANA database on Azure.
SAP Guidance: SAP documentation (e.g., the DMO Guide) and Microsoft's SAP on Azure migration guides specify that DMO with System Move begins on the source application server to orchestrate the export from the existing landscape.
61
You have an on-premises SAP NetWeaver deployment. The deployment has a DB2 data store that contains a 5-TB SAP database. You plan to migrate the deployment to SQL Server on an Azure virtual machine.
You need to optimize the performance of transaction log write operations during the migration. The solution must NOT affect the I/O quota of the virtual machine.
What should you do?
A
Place the transaction logs on the temporary disk.
B
Place the transaction logs on a striped volume of Premium SSD disks.
C
Place the transaction logs on an Ultra disk.
D
Enable the write cache for the disk that hosts the transaction logs.
Final Answer
C. Place the transaction logs on an Ultra disk
Why "Place the transaction logs on an Ultra disk" Is Correct:
Performance Optimization: Ultra Disks deliver low-latency, high-IOPS write performance, critical for SQL Server transaction logs during migration (e.g., bulk data imports).
I/O Quota Compliance: Their IOPS and throughput are provisioned independently per disk (e.g., 20,000 IOPS configured separately) rather than drawn from the VM's base quota, meeting the "must not affect" requirement.
SQL Server Fit: Provides durable, persistent storage, aligning with SQL Server's log requirements, unlike temporary disks or caching.
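A sketch of provisioning and attaching such a disk with the Azure CLI; all names, the zone, and the IOPS/throughput figures are illustrative and would come from your own log-volume sizing:

# Create an Ultra disk with explicitly provisioned IOPS and throughput.
az disk create \
  --resource-group sap-rg --name sqllog-ultra \
  --size-gb 512 --sku UltraSSD_LRS --zone 1 \
  --disk-iops-read-write 20000 --disk-mbps-read-write 1000
# The VM needs Ultra Disk compatibility enabled (may require deallocation first).
az vm update --resource-group sap-rg --name sql-vm1 --ultra-ssd-enabled true
az vm disk attach --resource-group sap-rg --vm-name sql-vm1 --name sqllog-ultra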
62
You have an SAP landscape on Azure that contains the virtual machines shown in the following table. You need to ensure that the Application Server role is available if a single Azure datacenter fails.

Name     Role                Azure Availability Zone in East US
SAPAPP1  Application Server  Zone 1
SAPAPP2  Application Server  Zone 2

What should you include in the solution?
A
a local network gateway
B
Azure Load Balancer Standard
C
Azure Virtual WAN
D
Azure Active Directory (Azure AD) Application Proxy
Correct Answer: B. Azure Load Balancer Standard
Why This Answer Is Correct for AZ-120:
Zone-Redundant HA: The Azure Load Balancer Standard supports Availability Zones, allowing it to distribute traffic to SAPAPP1 (Zone 1) and SAPAPP2 (Zone 2). If Zone 1 fails, it automatically redirects traffic to Zone 2, ensuring the Application Server role remains available. This is configured with a zone-redundant frontend IP, a backend pool containing the VMs in both zones, and health probes.
SAP Context: For SAP application servers (e.g., NetWeaver dialog instances), Microsoft recommends the Standard Load Balancer for HA across zones, complementing SAP Web Dispatcher for HTTP traffic (Azure SAP workload guide).
Minimize Downtime: The load balancer's failover capability ensures continuous service, a key requirement for SAP production landscapes.
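A minimal sketch of such a configuration with the Azure CLI. All resource names are placeholders, and the port is illustrative (SAP dialog instances conventionally listen on 32<instance number>, e.g., 3200 for instance 00); creating the load balancer with a VNet/subnet makes it an internal one:

az network lb create \
  --resource-group sap-rg --name sapapp-ilb --sku Standard \
  --vnet-name sap-vnet --subnet app-subnet \
  --frontend-ip-name sapapp-fe --backend-pool-name sapapp-be
# Health probe so the LB only sends traffic to healthy app servers.
az network lb probe create \
  --resource-group sap-rg --lb-name sapapp-ilb --name sapapp-probe \
  --protocol Tcp --port 3200
az network lb rule create \
  --resource-group sap-rg --lb-name sapapp-ilb --name sapapp-rule \
  --protocol Tcp --frontend-port 3200 --backend-port 3200 \
  --frontend-ip-name sapapp-fe --backend-pool-name sapapp-be \
  --probe-name sapapp-probe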
63
DRAG DROP -
You have an Azure subscription. You plan to deploy a SAP NetWeaver landscape that will use SQL Server on Azure virtual machines. The solution must meet the following requirements:
* The SAP application and database tiers must reside in the same Azure zone.
* The application tier in the Azure virtual machines must belong to the same Availability Set.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.
Actions
Create a host group
Create a proximity placement group
Create an Availability Set
Deploy the application tier in the Azure virtual machines
Deploy SQL Server on Azure virtual machines
Answer Area
Final Answer
1. Create a proximity placement group
2. Create an Availability Set
3. Deploy SQL Server on Azure virtual machines
4. Deploy the application tier in the Azure virtual machines

Create a proximity placement group
Why it's correct: A Proximity Placement Group (PPG) ensures that VMs are physically located close to each other within the same Azure datacenter (and thus the same zone), minimizing network latency. For SAP, low latency between the application tier and database tier is critical (e.g., <1 ms), making a PPG a recommended practice.
Why first: The PPG must be created before deploying VMs because it is assigned during VM creation to enforce co-location. This ensures both tiers stay in the same zone.
SAP on Azure: Microsoft's SAP documentation recommends PPGs for SAP NetWeaver with SQL Server to meet latency requirements.

Create an Availability Set
Why it's correct: An Availability Set is required to group the application tier VMs, ensuring they are distributed across fault and update domains within the same zone for HA. This meets the second requirement.
Why second: The Availability Set must exist before deploying the application tier VMs, as VMs are assigned to it during creation. It can be created after the PPG because it is a logical grouping within the location constrained by the PPG.
Zone Alignment: Associating the Availability Set with the PPG keeps it aligned with the same datacenter as the database tier, ensuring consistency.

Deploy SQL Server on Azure virtual machines
Why it's correct: Deploying the SQL Server VM(s) for the database tier is a necessary step. Associating them with the PPG ensures placement in the same zone as the application tier.
Why third: Deploying the database tier before the application tier is a practical sequence, as SAP deployments often prioritize database setup (e.g., for schema creation) before application server installation. The reverse order (application first) is also valid, which is why more than one order is accepted.

Deploy the application tier in the Azure virtual machines
Why it's correct: Deploying the application tier VMs (SAP NetWeaver) completes the landscape. These VMs are placed via the PPG (for zone co-location with SQL Server) and the Availability Set (for HA within the application tier), meeting both requirements.
Why fourth: This follows the creation of the PPG and Availability Set, ensuring proper placement and HA configuration during deployment. A CLI sketch of the whole sequence follows.
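The four actions expressed as Azure CLI steps, purely as a sketch: every resource name is a placeholder, the image URNs and sizes are illustrative, and credentials/networking details are omitted or stubbed:

# 1. Proximity placement group.
az ppg create --resource-group sap-rg --name sap-ppg --location eastus
# 2. Availability set for the application tier, associated with the PPG.
az vm availability-set create \
  --resource-group sap-rg --name sapapp-avset --ppg sap-ppg \
  --platform-fault-domain-count 2 --platform-update-domain-count 5
# 3. Database tier VM placed in the PPG.
az vm create --resource-group sap-rg --name sqldb-vm1 \
  --image MicrosoftSQLServer:sql2017-ws2016:enterprise:latest \
  --size Standard_E16s_v3 --ppg sap-ppg \
  --admin-username azureuser --admin-password '<password>'
# 4. Application tier VM; the availability set already pins it to the PPG.
az vm create --resource-group sap-rg --name sapapp-vm1 \
  --image MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest \
  --size Standard_D8s_v3 --availability-set sapapp-avset \
  --admin-username azureuser --admin-password '<password>'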
64
This question requires that you evaluate the underlined text to determine if it is correct.
You have an SAP environment on Azure that uses Microsoft SQL Server as the RDBMS. You plan to migrate to an SAP HANA database. To calculate the amount of memory and disk space required for the database, you can use _SAP_Quick_Sizer_.
Instructions: Review the underlined text. If it makes the statement correct, select "No change is needed". If the statement is incorrect, select the answer choice that makes the statement correct.
A
No change is needed
B
Azure Migrate
C
/SDF/HDB_SIZING
D
SQL Server Management Studio (SSMS)
Final Answer
A. No change is needed
Why "No change is needed" Is Correct:
Accuracy: SAP Quick Sizer is the right tool for estimating HANA memory and disk space, tailored to SAP migrations and Azure planning.
SAP Standard: Recognized by SAP and Microsoft as the primary sizing tool for HANA, directly addressing the statement's intent.
Migration Fit: Applicable for moving from SQL Server to HANA, providing actionable Azure resource estimates.
65
You are deploying an SAP production landscape to Azure. Your company's chief information security officer (CISO) requires that the SAP deployment complies with ISO 27001. You need to generate a compliance report for ISO 27001.
What should you use?
A
Azure Log Analytics
B
Azure Monitor
C
Azure Active Directory (Azure AD)
D
Azure Security Center
Correct Answer: D. Azure Security Center
Why This Answer Is Correct for AZ-120:
ISO 27001 Reporting: Azure Security Center (ASC), now part of Microsoft Defender for Cloud, provides a Regulatory Compliance dashboard that includes ISO 27001. It assesses Azure resources (e.g., VMs, storage, network) against the standard's controls and generates a downloadable compliance report, meeting the CISO's requirement.
SAP on Azure Context: For an SAP production landscape, ASC evaluates security configurations (e.g., encryption, NSGs, RBAC) across the deployment, ensuring ISO 27001 compliance (e.g., A.12 for operations security, A.18 for compliance).
Minimized Effort: ASC's built-in reporting reduces manual work compared to custom solutions in Log Analytics or Monitor, aligning with enterprise needs.
66
HOTSPOT -
You have an on-premises deployment of SAP Business Suite on HANA that includes a CPU-intensive application tier and a 20-TB database tier. You plan to migrate to SAP HANA on Azure. You need to recommend a compute option to host the application and database tiers. The solution must minimize cost.
What should you recommend for each tier? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Application:
Ev3-series of Azure virtual machines
HANA on Azure (Large Instances)
M-series of Azure virtual machines
Database:
Ev3-series of Azure virtual machines
HANA on Azure (Large Instances)
M-series of Azure virtual machines
Final Answer
Application: M-series of Azure virtual machines
Database: HANA on Azure (Large Instances)

Application: M-series of Azure virtual machines
Why it's correct:
CPU-Intensive Workload: The application tier (e.g., SAP NetWeaver) requires significant CPU resources for processing business logic, dialog tasks, and batch jobs. M-series VMs offer high vCPU counts (up to 128) and strong compute performance, making them suitable for CPU-intensive SAP application servers.
Cost Minimization: M-series VMs cost more per hour than Ev3-series, but they provide better performance per vCPU, reducing the number of VMs needed for the application tier. For example, fewer M-series VMs (e.g., M64s with 64 vCPUs) can handle the workload compared to more Ev3-series VMs (e.g., E32v3 with 32 vCPUs), lowering overall costs through efficiency. HANA Large Instances are overkill and far costlier for application workloads; they are designed for HANA databases, not app servers.

Database: HANA on Azure (Large Instances)
Why it's correct:
20-TB Database: SAP HANA requires in-memory computing, and a 20-TB database needs massive RAM (at least 20 TB for data, plus additional memory for indexes and working memory). HANA on Azure (Large Instances) offers bare-metal servers with 2 TB to 24 TB of RAM (e.g., the S960m with 20 TB of RAM), specifically certified by SAP for large HANA deployments.
Cost Minimization: For a 20-TB HANA database, HLI is the most cost-effective option among certified solutions. M-series VMs top out at about 3.8 TB of RAM (e.g., M128ms), requiring multiple VMs with scale-out clustering, which increases costs (more VMs, storage, and complexity) and isn't viable for 20 TB without significant overhead. HLI provides a single, optimized server at a fixed cost (e.g., ~$20-30/hour for an S960m), avoiding the scaling and licensing costs of multiple VMs.
67
HOTSPOT -
You are planning the deployment of a three-tier SAP landscape on Azure that will use SAP HANA. The solution must meet the following requirements:
* Network latency between SAP NetWeaver and HANA must be minimized.
* An SAP production landscape on Azure must be supported.
* Network performance must be validated regularly.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Deploy HANA and NetWeaver to:
An availability set
An availability zone
A proximity placement group
Networking configuration:
Enable Write Accelerator
Deploy ExpressRoute Direct
Enable Accelerated Networking
Validate network performance by using:
ABAPMeter
Apache JMeter
Network Performance Monitor
Final Answer
Deploy HANA and NetWeaver to: A proximity placement group
Networking configuration: Enable Accelerated Networking
Validate network performance by using: Network Performance Monitor

Deploy HANA and NetWeaver to:
Options:
An availability set: Groups VMs across fault domains in a single datacenter for HA (99.95% SLA), but doesn't guarantee minimal latency between VMs.
An availability zone: Distributes VMs across separate datacenters in a region (99.99% SLA), increasing latency (e.g., 1-2 ms) due to physical separation.
A proximity placement group (PPG): Co-locates VMs in the same datacenter to minimize latency, while still supporting HA configurations.
Why "A proximity placement group" is correct:
Latency Minimization: A PPG ensures the NetWeaver and HANA VMs are physically close (e.g., in the same Azure datacenter), reducing network latency to sub-millisecond levels, critical for SAP HANA's in-memory performance.
Production Support: Compatible with HA setups (e.g., HANA system replication, NetWeaver clustering) within a single zone or set, meeting production SLAs.
SAP Fit: Recommended by Microsoft for SAP HANA three-tier deployments to optimize app-DB communication.

Networking configuration:
Options:
Enable Write Accelerator: Enhances write performance for Premium SSDs on M-series VMs, but it's storage-related, not networking.
Deploy ExpressRoute Direct: Provides private connectivity from on-premises to Azure, irrelevant for intra-Azure networking.
Enable Accelerated Networking: Boosts VM network performance using SR-IOV, reducing latency and jitter.
Why "Enable Accelerated Networking" is correct (a minimal CLI sketch follows this card):
Latency Reduction: Improves network throughput and lowers latency between the NetWeaver and HANA VMs (e.g., via faster packet processing), complementing the PPG.
Production Readiness: Supported on HANA-certified VMs (e.g., M-series), ensuring robust network performance for production workloads.
SAP Fit: Standard for SAP deployments in Azure to optimize app-DB traffic.

Validate network performance by using:
Options:
ABAPMeter: An SAP tool for ABAP runtime performance (e.g., transaction times), not network-specific.
Apache JMeter: A load-testing tool for applications, not designed for Azure network monitoring.
Network Performance Monitor (NPM): Part of Azure Monitor (now integrated into Azure Network Watcher); measures network latency, packet loss, and performance between Azure resources.
Why "Network Performance Monitor" is correct:
Network Validation: NPM monitors latency and performance between VMs (e.g., NetWeaver to HANA), meeting the regular-validation requirement.
Azure Integration: A native Azure tool, ideal for ongoing network health checks in a production SAP landscape.
SAP Fit: Ensures network performance meets SAP HANA's low-latency needs post-deployment.
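A minimal sketch of enabling Accelerated Networking on an existing NIC with the Azure CLI; the resource group and NIC name are placeholders, the NIC must belong to a supported VM size, and the VM should be deallocated before the change:

az network nic update \
  --resource-group sap-rg --name sapapp-vm1-nic \
  --accelerated-networking true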
68
HOTSPOT -
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Answer Area
Statements (Yes / No)
You must split data files and database logs between different Azure virtual disks to increase the database read/write performance.
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers.
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency.
Correct Answers:
You must split data files and database logs between different Azure virtual disks to increase the database read/write performance: Yes
Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers: Yes
When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency: Yes

Statement 1: You must split data files and database logs between different Azure virtual disks to increase the database read/write performance.
Analysis:
Context: In SAP HANA (or other databases), data files (e.g., /hana/data) and logs (e.g., /hana/log) have different I/O patterns: data files benefit from high throughput, logs from low latency.
Azure Disks: Splitting data and logs across separate Azure disks (e.g., Premium SSD, Ultra Disk) allows independent I/O optimization (e.g., Write Accelerator for logs, high IOPS for data). This is a best practice to avoid contention and improve performance.
Mandatory?: While highly recommended for production SAP HANA to meet performance KPIs (SAP Note 2529073), it's not strictly required for all scenarios (e.g., small dev systems). The word "must" implies an absolute rule, but some flexibility exists. The exam context, however, often treats this as a standard requirement for SAP HANA on Azure VMs.
Answer: Yes
Why Correct: Splitting data and logs is a Microsoft- and SAP-recommended practice to enhance read/write performance on Azure, typically framed as a must for production SAP deployments.

Statement 2: Enabling Accelerated Networking on virtual NICs for all SAP servers will reduce network latency between the servers.
Analysis:
Accelerated Networking (AN): A feature on Azure VMs (e.g., M-series) that bypasses the host's virtual switch, using SR-IOV to reduce latency, jitter, and CPU overhead for network traffic.
SAP Servers: Includes application servers (e.g., NetWeaver) and database servers (e.g., HANA). AN improves network performance between VMs, especially for high-throughput, low-latency needs like SAP HANA replication or app-DB communication.
Impact: Reduces latency by offloading packet processing to the NIC, a measurable benefit in SAP landscapes.
Caveat: Supported only on specific VM sizes and OSes, but the statement assumes applicability to all SAP servers in the deployment.
Answer: Yes
Why Correct: Enabling AN on supported VMs reduces network latency between SAP servers, a best practice for performance optimization (Azure SAP guide).

Statement 3: When you use SAP HANA on Azure (Large Instances), you should set the MTU on the primary network interface to match the MTU on SAP application servers to reduce CPU utilization and network latency.
Analysis:
SAP HANA on Azure (Large Instances): Bare-metal servers with dedicated networking, connected to Azure VNets via ExpressRoute. The MTU (Maximum Transmission Unit) defines the largest packet size on a network interface.
MTU Matching: Mismatched MTUs between HANA Large Instances and the application-server VMs can cause fragmentation, increasing CPU usage and latency. SAP recommends a consistent MTU (e.g., 1500, or 9000 for jumbo frames) across the SAP landscape.
Large Instances Specifics: The default MTU is 1500, but Microsoft allows adjustment (e.g., to 9000) for performance. Matching the app servers' MTU optimizes traffic, especially for high-volume DB-app communication.
"Should" vs. "Must": Recommended, not mandatory, but it aligns with the performance goals.
Answer: Yes
Why Correct: Matching the MTU reduces fragmentation, lowering CPU utilization and latency, a best practice for SAP HANA on Large Instances (SAP Note 2529073, Azure docs).
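For illustration, checking and aligning the MTU on a SLES application server might look like this (eth0 is a placeholder interface; the runtime change below does not survive a reboot, so it would also need to be persisted in the distribution's network configuration, e.g., the MTU= setting in /etc/sysconfig/network/ifcfg-eth0 on SLES):

ip link show dev eth0                 # inspect the current MTU
sudo ip link set dev eth0 mtu 9000    # runtime change only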
69
HOTSPOT -
You have an SAP production landscape that uses SAP HANA databases. You configure a metric alert for the primary HANA server as shown in the following exhibit.
Configure signal logic
Percentage CPU (Platform): The percentage of allocated compute units that are currently in use by the Virtual Machine(s).
Chart period: Over the last 6 hours
Alert logic
Threshold: Static (selected)
Operator: Greater than
Aggregation type: Average
Threshold value: 80%
Condition preview: Whenever the percentage CPU is greater than 80%
Evaluated based on
Aggregation granularity (Period): 15 minutes
Frequency of evaluation: Every 5 Minutes

You have an action group shown in the following exhibit.
HANA Admins
Short name: HANA Admins
Action group name: HANA Admins
Resource group: default-activitylogalerts
Subscription: Corporate Subscription
Actions
hanaadmins_email (Email/SMS/Push/Voice, Subscribed)
amy_email (Email/SMS/Push/Voice, Subscribed)

The amy_email action is configured as shown in the following exhibit.
Email: enabled, amy@contoso.com
SMS: not enabled
Azure app Push Notifications: not enabled
Voice: not enabled
Enable the common alert schema: No

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Answer Area
Statements (Yes / No)
HANA Admins will be alerted by email if the server is at 85 percent for one minute, and then lowers to 40 percent
HANA Admins will be alerted if the server is at 95 percent for 15 minutes
Amy@contoso.com will be alerted by email if the server CPU cycles between 80 and 90 percent for 15 minutes
Final Answer
HANA Admins will be alerted by email if the server is at 85 percent for one minute, and then lowers to 40 percent: No
HANA Admins will be alerted if the server is at 95 percent for 15 minutes: Yes
Amy@contoso.com will be alerted by email if the server CPU cycles between 80 and 90 percent for 15 minutes: No

1. HANA Admins will be alerted by email if the server is at 85 percent for one minute, and then lowers to 40 percent
Answer: No
Why it's incorrect:
Alert Logic: The alert triggers when the average CPU percentage over a 15-minute period exceeds 80%. The evaluation happens every 5 minutes, sliding the 15-minute window forward.
Scenario Analysis: If the CPU is at 85% for just 1 minute and then drops to 40%, the 15-minute average includes 1 minute at 85% and 14 minutes at 40%: (85 × 1 + 40 × 14) ÷ 15 = 645 ÷ 15 = 43%. Since 43% is below the 80% threshold, no alert triggers.
Action Group: Even if the threshold were met, HANA Admins (including hanaadmins_email) would be notified, but the condition isn't satisfied here.
Conclusion: The short 1-minute spike doesn't sustain the 15-minute average above 80%, so no alert is sent.

2. HANA Admins will be alerted if the server is at 95 percent for 15 minutes
Answer: Yes
Why it's correct:
Alert Logic: The alert triggers if the average CPU percentage over 15 minutes exceeds 80%, evaluated every 5 minutes.
Scenario Analysis: If the CPU is at 95% for a full 15 minutes, the average over the 15-minute period is 95%, which is greater than 80%. At the next 5-minute evaluation, the condition is met and the alert fires.
Action Group: The HANA Admins group (including hanaadmins_email) is notified via email when the alert fires.
Conclusion: A sustained 95% CPU for 15 minutes exceeds the threshold, triggering an email alert to HANA Admins.

3. Amy@contoso.com will be alerted by email if the server CPU cycles between 80 and 90 percent for 15 minutes
Answer: No
Why it's incorrect:
Alert Logic: The alert triggers based on the average CPU percentage over 15 minutes being greater than 80%. "Cycles between 80 and 90 percent" implies fluctuating values (e.g., 80%, 90%, 85%).
Scenario Analysis: The average depends on the specific values. An even spread would average around 85% ((80 + 90) ÷ 2 = 85%), which would trigger the alert. However, "cycles between" does not guarantee a sustained average above 80%: if the readings hover at the 80% boundary, the average may not strictly exceed the threshold (the operator is "greater than", not "greater than or equal to"). For AZ-120, "cycles" often tests precise threshold understanding, and the conservative interpretation is that the average is not guaranteed to exceed 80%.
Action Group: If the alert did trigger, amy@contoso.com (part of HANA Admins) would receive an email, but the condition is not assured.
Conclusion: Without a guaranteed average above 80%, no alert is sent, making "No" the safer answer.
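A quick sanity check of the window arithmetic above, as a throwaway sketch:

# Spike scenario: 1 minute at 85%, 14 minutes at 40%.
awk 'BEGIN { printf "avg = %.1f%%\n", (85*1 + 40*14) / 15 }'   # prints avg = 43.0%
# Sustained scenario: 15 minutes at 95%.
awk 'BEGIN { printf "avg = %.1f%%\n", (95*15) / 15 }'          # prints avg = 95.0%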
70
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You deploy SAP HANA on Azure (Large Instances). You need to back up the SAP HANA database to Azure.
Solution: Back up directly to disk, copy the backups to an Azure virtual machine, and then copy the backup to an Azure Storage account.
Does this meet the goal?
A
Yes
B
No
B. No
71
You have an SAP landscape on Azure that contains the virtual machines shown in the following table. You need to ensure that the Application Server role is available if a single Azure datacenter fails.

Name     Role                Azure Availability Zone in East US
SAPAPP1  Application Server  Zone 1
SAPAPP2  Application Server  Zone 2

What should you include in the solution?
A
Azure Basic Load Balancer
B
Azure Load Balancer Standard
C
Azure Virtual WAN
D
Azure Application Gateway v1
Correct Answer: B. Azure Load Balancer Standard
Why This Answer Is Correct for AZ-120:
Zone-Redundant HA: The Standard Load Balancer supports Availability Zones, balancing traffic between SAPAPP1 (Zone 1) and SAPAPP2 (Zone 2). If Zone 1 fails, it redirects traffic to Zone 2, ensuring the Application Server role remains available. It is configured with a zone-redundant frontend IP, a backend pool containing both VMs, and health probes.
SAP Context: Microsoft recommends the Standard Load Balancer for SAP HA across zones (e.g., for NetWeaver or HANA clusters), complementing SAP Web Dispatcher for HTTP traffic (Azure SAP workload guide).
72
HOTSPOT -
You are implementing a highly available deployment of SAP HANA on Azure virtual machines. You need to ensure that the deployment meets the following requirements:
* Supports host auto-failover
* Minimizes cost
How should you configure the highly available components of the deployment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
HANA database and log volumes:
NFSv3 volumes
NFSv4 volumes
Premium SSD disks
I/O fencing:
NFSv3
NFSv4
An SBD device
Final Answer
HANA database and log volumes: Premium SSD disks
I/O fencing: An SBD device

1. HANA database and log volumes: Premium SSD disks
Why it's correct:
Requirement: The HANA database (/hana/data) and log (/hana/log) volumes need high-performance storage with low latency and high IOPS to meet SAP HANA's stringent requirements (e.g., 250 MB/s write throughput for logs).
Premium SSD Disks:
Performance: Premium SSDs provide high IOPS (up to 20,000) and low latency, certified by SAP for HANA data and log volumes on Azure VMs.
HA Support: In an HA setup with host auto-failover (e.g., using SUSE or Red Hat Pacemaker), each node typically has its own local Premium SSDs for data and logs, with HANA System Replication (HSR) handling data synchronization between the primary and standby nodes. HSR keeps the standby node's disks updated, enabling failover without shared storage.
Cost: Premium SSDs are cheaper than NFS-based solutions like Azure NetApp Files (ANF) for a two-node HA setup. For example, two P30 disks (1 TB, 5,000 IOPS each) cost roughly $150/month each, versus ANF's higher base cost plus throughput charges (~$0.10/GB/month plus capacity).
SAP on Azure: Microsoft recommends Premium SSDs with HSR for cost-effective HANA HA on VMs, especially for smaller deployments or when minimizing cost is key (e.g., vs. scale-out or ANF).
Why the other options are incorrect:
NFSv3 volumes: Implies a shared storage solution (e.g., Azure NetApp Files with NFSv3). While ANF supports HANA HA and failover (e.g., shared /hana/data and /hana/log), it's more expensive due to capacity and throughput pricing, conflicting with cost minimization.
NFSv4 volumes: Azure NetApp Files supports NFSv4.1, also viable for HA, but similarly costly and less common for VM-based HANA HA compared to HSR with local disks.

2. I/O fencing: An SBD device
Why it's correct:
Requirement: I/O fencing prevents split-brain scenarios in a cluster, ensuring only one node is active during failover. Host auto-failover relies on a cluster manager (e.g., Pacemaker) to detect failures and fence the failed node.
SBD (Storage-Based Death) Device:
Function: An SBD device is a shared storage target (e.g., a small iSCSI disk) used by Pacemaker to fence a node by writing a "poison pill" if it fails, ensuring the standby takes over cleanly.
Azure Implementation: On Azure, SBD is implemented using a small disk exposed via an iSCSI target (e.g., served by one or more small fencing VMs). It's a standard fencing mechanism for SUSE Linux Enterprise Server (SLES) HA clusters with HANA.
Cost: An SBD device is inexpensive (e.g., ~$1-2/month for a tiny disk), meeting the cost-minimization goal compared to more complex alternatives.
SAP on Azure: Microsoft's SAP HANA HA guide for Azure VMs recommends SBD with Pacemaker for fencing, as it's lightweight and effective for two-node clusters.
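For orientation only, a heavily simplified SLES-style SBD setup might look like the sketch below. The device path is a placeholder for the iSCSI LUN, the watchdog module is assumed to be loaded, and real clusters would follow the full SUSE/Microsoft HA guides:

# Initialize the SBD slot area on the shared LUN (run once).
sudo sbd -d /dev/disk/by-id/scsi-sbd-lun create
# Tell the sbd daemon which device to watch (on every node).
echo 'SBD_DEVICE="/dev/disk/by-id/scsi-sbd-lun"' | sudo tee -a /etc/sysconfig/sbd
# Register SBD as the STONITH (fencing) resource in Pacemaker.
sudo crm configure primitive stonith-sbd stonith:external/sbd
sudo crm configure property stonith-enabled=true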
73
You are designing an SAP on Azure production landscape. The landscape must ensure service availability in the event of an Azure datacenter failure.
What should you include in the design?
A
an availability zone
B
a fusion group
C
an availability set
D
a proximity placement group
Final Answer
A. An availability zone
Why "An availability zone" Is Correct:
Datacenter Failure Resilience: Zones distribute the SAP landscape across separate facilities in a region, ensuring service continuity if one datacenter fails (e.g., HANA replication to a secondary zone).
SAP Production Support: Meets high SLAs (99.99%) and aligns with SAP HA/DR designs (e.g., HANA system replication, NetWeaver clustering), critical for production.
Azure Best Practice: Microsoft recommends Availability Zones for SAP production landscapes to handle datacenter-level faults, unlike availability sets (intra-datacenter) or PPGs (latency-focused).
74
You plan to deploy an SAP production landscape on Azure. You need to minimize latency between SAP HANA database servers and SAP NetWeaver servers.
What should you implement?
A
Azure Private Link
B
a virtual machine scale set
C
a proximity placement group
D
an Availability Set
C. a proximity placement group
Why is this correct?
A proximity placement group (PPG) is an Azure feature that ensures virtual machines (VMs) are physically located close to each other within the same Azure datacenter. This reduces network latency between the VMs, which is critical for SAP workloads like SAP HANA and SAP NetWeaver, where low-latency communication between the database and application servers is essential for optimal performance.
75
DRAG DROP -
You have an Azure virtual machine named VM1 that runs SUSE Linux Enterprise Server (SLES) and hosts an SAP NetWeaver application server. You need to install the Azure VM extension for SAP solutions on VM1.
Which three actions should you perform in sequence? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order.
Actions
From Azure Cloud Shell, run az extension add.
From Azure Cloud Shell, run az vm aem set.
On VM1, restart the SAP Host Agent.
On VM1, run curl http://127.0.0.1:11812/azure4sap/metrics.
From Azure Cloud Shell, run az login.
Answer Area
Final Answer
Answer Area:
1. From Azure Cloud Shell, run az login.
2. From Azure Cloud Shell, run az extension add.
3. From Azure Cloud Shell, run az vm aem set.
Why Correct?
Logical Order: Authentication (az login), then tool preparation (az extension add), then extension deployment (az vm aem set).
Goal Achievement: Installs the Azure VM extension for SAP solutions on VM1, enabling SAP monitoring as required.
SLES Compatibility: Uses the Azure CLI from Cloud Shell, suitable for managing a SLES-based SAP VM.
Exam Fit: Matches Microsoft's documented process for SAP extension deployment in AZ-120 scenarios.
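The three steps as concrete commands; the resource group name is a placeholder, and "aem" is the name of the Azure CLI extension that provides the az vm aem command group:

az login
az extension add --name aem
az vm aem set --resource-group sap-rg --name VM1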
76
DRAG DROP -
You need to deploy an SAP production landscape on Azure. The solution must be supported by the SAP production landscape and must minimize costs.
Which Azure virtual machine series should you use for each SAP workload? To answer, drag the appropriate series to the correct workloads. Each series may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Azure virtual machine series
B-Series
D-Series
M-Series
N-Series
Answer Area
SAP Central Services (SCS): [Blank]
SAP HANA: [Blank]
Final Answer
SAP Central Services (SCS): D-Series
SAP HANA: M-Series

SAP Central Services (SCS): D-Series
Why it's correct:
Performance Needs: SCS (ASCS/ERS) handles message services and enqueue operations, requiring moderate CPU and memory resources but not the high memory or specialized hardware of HANA. D-series VMs provide a balanced compute profile suitable for this workload.
SAP Certification: D-series VMs (e.g., D2s_v3, D4s_v3) are certified by SAP for SAP application servers, including SCS, per SAP Note 1928533.
Cost Minimization: D-series is less expensive than M-series (e.g., a D4s_v3 at ~$0.20/hour vs. an M32ts at ~$1.50/hour in East US) and meets SCS needs without over-provisioning. B-series is cheaper still (~$0.05/hour for a B2s) but is designed for burstable, non-production workloads and isn't SAP-certified for production due to its inconsistent baseline performance.
Azure Context: Microsoft's SAP on Azure guides recommend D-series for SCS in production landscapes when cost is a factor, often paired with Availability Sets or Zones for HA.

SAP HANA: M-Series
Why it's correct:
Performance Needs: SAP HANA is an in-memory database requiring substantial RAM (e.g., 1-4 TB depending on database size) and strong CPU performance for production workloads.
SAP Certification: M-series VMs (e.g., M32ts, M64s) are certified by SAP for HANA on Azure VMs (SAP Note 1943937), supporting up to 3.8 TB of RAM per VM, suitable for production-scale HANA databases (smaller than HANA Large Instances' 6+ TB).
Cost Minimization: M-series is the most cost-effective VM series certified for HANA on Azure VMs. For example, an M32ts (~$1.50/hour) is cheaper than scaling multiple D-series VMs or using HANA Large Instances ($10-20/hour for 6 TB+). D-series lacks the memory capacity for production HANA (max 256 GB), and clustering it for larger databases is unsupported, increasing cost and complexity.
Azure Context: Microsoft recommends M-series for SAP HANA on VMs in production when cost is a concern and the database fits within VM limits (vs. HANA Large Instances for 6+ TB).
77
You have an SAP landscape on Azure that contains the virtual machines shown in the following table.

Name     Role                Azure Availability Zone in East US
SAPAPP1  Application Server  Zone 1
SAPAPP2  Application Server  Zone 2

You need to ensure that the Application Server role is available if a single Azure datacenter fails.
What should you include in the solution?
A
Azure Basic Load Balancer
B
Azure Load Balancer Standard
C
Azure Private Link
D
Azure AD Application proxy
Final Answer
B. Azure Load Balancer Standard
Why "Azure Load Balancer Standard" Is Correct:
Datacenter Failure Protection: A zone-redundant Standard Load Balancer ensures traffic shifts from a failed zone (e.g., Zone 1) to the surviving zone (e.g., Zone 2), keeping the Application Server role available.
SAP HA Design: Aligns with SAP NetWeaver HA patterns in Azure, where app servers are load-balanced across zones for production resilience (e.g., a 99.99% SLA).
Cross-Zone Support: Unlike Basic, Standard supports VMs in different Availability Zones, critical for this multi-zone setup.
AZ-120 Relevance: Reflects the exam's focus on the Standard Load Balancer for SAP HA across zones, distinguishing it from Basic (single-zone) and the unrelated options.
78
You have an Azure subscription that contains an SAP landscape. The landscape uses Azure AD user authentication. You need to configure single sign-on (SSO) authentication for SAP HANA and SAP Cloud Platform. The solution must support conditional access policies.
What should you configure?
A
Windows Authentication
B
Azure AD Identity Protection
C
LDAP
D
SAP Cloud Platform Identity Authentication
Final Answer
D. SAP Cloud Platform Identity Authentication
Why SAP Cloud Platform Identity Authentication Is Correct:
Overview: SAP Cloud Platform Identity Authentication (IAS) is SAP's cloud-based identity provider (IdP) service that supports SSO using SAML 2.0, OpenID Connect, and other protocols. It acts as a proxy or intermediary between Azure AD and SAP systems.
SSO for SAP HANA: SAP HANA supports SAML 2.0 for SSO (since SPS 10). IAS can integrate with Azure AD as the corporate IdP, authenticating users and issuing SAML assertions to SAP HANA (configured via HANA Cockpit or XS Admin).
SSO for SAP Cloud Platform: SAP Cloud Platform natively integrates with IAS for identity management. IAS enables SSO across SCP applications (e.g., Fiori, SAP BTP services) by trusting Azure AD as the upstream IdP.
Azure AD Integration: IAS connects to Azure AD via SAML or OpenID Connect, allowing Azure AD to authenticate users. You configure Azure AD as an enterprise application in IAS, enabling seamless SSO.
Conditional Access: Azure AD's conditional access policies (e.g., requiring MFA for external access) apply to IAS-integrated applications. When users authenticate via Azure AD, the policies are enforced before IAS issues tokens to SAP HANA and SCP, meeting the requirement.
Solution Fit: IAS bridges Azure AD with both SAP HANA and SAP Cloud Platform, providing a unified SSO experience while leveraging Azure AD's security features like conditional access.
SAP on Azure: Microsoft and SAP recommend IAS for hybrid and cloud SAP deployments with Azure AD, especially when SSO and advanced policies are needed.
79
HOTSPOT -
You have an on-premises deployment of SAP HANA that contains a production environment and a development environment. You plan to migrate both environments to Azure. You need to identify which Azure virtual machine series to use for each environment. The solution must meet the following requirements:
* Minimize costs.
* Be SAP HANA-certified.
What should you identify for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
SAP HANA Developer Edition:
D-series
M-series
NC-series
SAP S/4 HANA:
D-series
M-series
NC-series
Final Answer
SAP HANA Developer Edition: D-series (cost-effective and acceptable for non-production SAP HANA).
SAP S/4 HANA: M-series (SAP HANA-certified for production and optimized for performance).

SAP HANA Developer Edition: D-series
Requirement: Minimize costs
The development environment typically has lower performance and resource demands than production. It's used for testing, development, or non-critical workloads, so cost efficiency is a priority. D-series VMs are general-purpose VMs in Azure, offering a balance of compute, memory, and storage at a lower cost than specialized series like M-series or NC-series. They are suitable for workloads that don't require high memory or compute-intensive resources.
Requirement: SAP HANA-certified
While D-series VMs are not certified for SAP HANA production workloads, certain D-series sizes (e.g., Dsv3 or Dsv4) can be used for non-production SAP HANA environments, such as development or test systems, per Microsoft's documentation. SAP HANA certification for non-production environments is less stringent, and D-series VMs meet these requirements for smaller-scale deployments (e.g., smaller databases).

SAP S/4 HANA: M-series
Requirement: SAP HANA-certified
SAP S/4 HANA is a production environment that relies on SAP HANA as its in-memory database. Production workloads have strict performance, scalability, and reliability requirements. M-series VMs are memory-optimized and specifically certified by SAP and Microsoft for running SAP HANA in production. They provide high memory capacity (up to 12 TB in some cases) and are designed for large-scale, mission-critical SAP HANA databases, such as those used by SAP S/4 HANA. Microsoft's SAP on Azure certification lists M-series VMs (e.g., M-family, Mv2, or Msv2) as the primary choice for SAP HANA production deployments due to their ability to handle large in-memory databases and high transaction volumes.
Requirement: Minimize costs
While M-series VMs cost more than D-series, the cost requirement must be balanced against the need for SAP HANA certification and production-grade performance. M-series is the most cost-effective option among SAP HANA-certified VMs for production, avoiding over-provisioning with even more specialized (and costly) configurations. Within the M-series, specific VM sizes (e.g., M32ts) can further optimize costs based on the SAP S/4 HANA workload size, but the series itself is necessary for certification.
80
HOTSPOT -
You have an Azure AD tenant named contoso.com that syncs to an Active Directory domain hosted on an Azure virtual machine.
You plan to deploy an SAP NetWeaver landscape on Azure that will use SUSE Linux Enterprise Server (SLES).
You need to recommend an authentication solution for the following scenarios. The solution must support Azure Multi-Factor Authentication (MFA):
* Administrators sign in to SLES Azure virtual machines.
* A user signs in to an SAP NetWeaver application.
What should you recommend for each scenario? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Administrators sign in to SLES Azure virtual machines: [Dropdown]
Active Directory
Azure AD
Azure Active Directory Domain Services (Azure AD DS)
A user signs in to an SAP NetWeaver application: [Dropdown]
Active Directory
Azure AD
Azure Active Directory Domain Services (Azure AD DS)
Final Answer
Administrators sign in to SLES Azure virtual machines: Azure Active Directory Domain Services (Azure AD DS)
A user signs in to an SAP NetWeaver application: Azure AD

Administrators sign in to SLES Azure virtual machines: Azure AD DS
Why it’s correct:
Requirement: Admins need to log in to SLES VMs (e.g., via SSH), which requires a directory service that integrates with Linux and supports MFA.
Function: Azure AD DS provides a managed domain service that extends Azure AD with traditional AD features (LDAP, Kerberos, NTLM) without requiring you to manage domain controllers.
SLES Integration: SLES VMs can join an Azure AD DS domain, enabling admins to authenticate over SSH with AD credentials (synced from Azure AD) via Kerberos or LDAP.
MFA Support: Azure AD DS inherits MFA from Azure AD. When admins sign in (e.g., via SSH with a Kerberos-capable client), Azure AD enforces MFA if configured (e.g., via conditional access policies).
Azure Context: The AD domain on the Azure VM can sync to Azure AD, and Azure AD DS can then provide domain services to the SLES VMs, leveraging the hybrid setup.
SAP on Azure: Microsoft recommends Azure AD DS for Linux VM authentication in SAP landscapes when MFA and domain join are both needed.

A user signs in to an SAP NetWeaver application: Azure AD
Why it’s correct:
Requirement: Users need SSO to SAP NetWeaver (e.g., SAP GUI, Fiori) with MFA support.
Function: Azure AD is a cloud-based IdP that supports SAML 2.0 and Kerberos SSO, making it well suited to SAP NetWeaver applications.
SAP NetWeaver Integration: For ABAP-based NetWeaver, SAML SSO is configured with Azure AD as the IdP (via an enterprise application registration), allowing users to sign in with Azure AD credentials. Kerberos SSO is possible with additional components (e.g., Azure AD DS or AD FS), but SAML is the more common and direct route.
MFA Support: Azure AD natively supports MFA through conditional access policies (e.g., requiring MFA for SAP access), meeting the requirement seamlessly.
Hybrid Context: Users synced from the AD domain on the Azure VM into Azure AD can access NetWeaver via SSO with their existing credentials.
SAP on Azure: Microsoft’s SAP NetWeaver SSO guidance recommends Azure AD with SAML for application access, especially in hybrid setups.
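For the SLES scenario, the domain join itself is typically done with realmd/SSSD. The following is a rough sketch only: it assumes realmd, SSSD, and adcli are installed on the image, the domain name and admin account are placeholders, and Microsoft’s Azure AD DS documentation covers the distro-specific details.

```python
# Sketch: join a SLES VM to the Azure AD DS managed domain so admins can
# SSH with their synced AD credentials. Run with root privileges; the
# join command prompts interactively for the admin password.
import subprocess

DOMAIN = "AADDSCONTOSO.COM"          # placeholder managed-domain name
ADMIN = "vmadmin@AADDSCONTOSO.COM"   # placeholder AAD DC administrator

# Discover the managed domain, then join it via realmd/SSSD.
subprocess.run(["realm", "discover", DOMAIN], check=True)
subprocess.run(["realm", "join", f"--user={ADMIN}", DOMAIN], check=True)

# After a successful join, 'realm list' should show the domain as
# configured and SSSD handles Kerberos/LDAP authentication for SSH.
subprocess.run(["realm", "list"], check=True)
```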
81
HOTSPOT -
You have an on-premises SAP NetWeaver deployment that runs SUSE Linux Enterprise Server (SLES). The deployment contains 200 GB of files used by application servers stored in an NFS share.
You plan to migrate the on-premises deployment to Azure. You need to implement an NFS storage solution. The solution must meet the following requirements:
* Ensure that only the application servers can access the storage.
* Support NFS 4.1.
* Minimize costs.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Azure service: [Dropdown]
Azure Blob storage
Azure Files
Azure NetApp Files
Access control mechanism: [Dropdown]
Azure AD authentication
A private endpoint
A shared access signature (SAS) token
Final Answer
Azure service: Azure NetApp Files
Access control mechanism: A private endpoint

Why “Azure NetApp Files” is correct:
NFS 4.1: Native NFS 4.1 support meets the protocol requirement.
Cost: For a 200 GB share in an SAP context, NetApp Files’ capacity pooling and enterprise features keep it cost-competitive (versus Azure Files’ higher per-GB rate for small NFS shares).
SAP Production: Preferred for SAP on Azure due to its performance characteristics and SAP certification.

Why “A private endpoint” is correct:
Exclusive Access: Restricts NFS access to the application servers’ virtual network, meeting the security requirement.
NFS Compatibility: Works with Azure NetApp Files (and Azure Files NFS), supporting NFS 4.1 mounts.
Cost: A low-cost addition to the storage solution.
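Once the volume exists, mounting it from a SLES application server is a one-line command; the sketch below wraps it in Python for illustration, using the NFS 4.1 mount options Microsoft documents for Azure NetApp Files. The volume IP and export path are placeholders for the values shown in the volume’s mount instructions.

```python
# Sketch: mount an NFS 4.1 Azure NetApp Files volume on an app server.
# Requires root privileges; VOLUME and MOUNT_POINT are placeholders.
import subprocess

VOLUME = "10.0.4.4:/sap-trans"   # placeholder mount target + export path
MOUNT_POINT = "/usr/sap/trans"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(
    [
        "mount", "-t", "nfs",
        # NFS 4.1 mount options per Microsoft's ANF guidance
        "-o", "rw,hard,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp",
        VOLUME, MOUNT_POINT,
    ],
    check=True,
)
```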
81
You have an on-premises SAP NetWeaver landscape that contains an IBM DB2 database. You need to migrate the database to a Microsoft SQL Server instance on an Azure virtual machine.
Which tool should you use?
A
Data Migration Assistant
B
SQL Server Migration Assistant (SSMA)
C
Azure Migrate
D
Azure Database Migration Service
Final Answer
B. SQL Server Migration Assistant (SSMA)

Reasoning:
Specific Support for DB2: SSMA is designed for heterogeneous migrations and explicitly supports IBM DB2 as a source database, making it a precise fit for converting DB2 schema and data to SQL Server.
SAP Compatibility: In SAP migrations to SQL Server, SSMA is commonly used because it handles database objects and data structures relevant to SAP NetWeaver, ensuring compatibility with SAP’s SQL Server requirements (e.g., per SAP Note 1928533).
Azure VM Target: SSMA supports SQL Server on Azure VMs as a target, matching the requirement to migrate to a SQL Server instance on an Azure VM.
Ease of Use: SSMA automates schema conversion, data migration, and validation, reducing manual effort compared with DMS, which requires more setup for DB2-to-SQL Server in an SAP context.
AZ-120 Relevance: The exam tests knowledge of tools for SAP database migrations on Azure; SSMA is frequently highlighted in Microsoft’s SAP-on-Azure documentation for DB2 migrations because of its specialized capabilities.
82
HOTSPOT -
You plan to migrate an SAP database from Oracle to Microsoft SQL Server by using the SQL Server Migration Assistant (SSMA).
You are configuring a Proof of Concept (PoC) for the database migration. You plan to perform the migration multiple times as part of the PoC.
You need to ensure that you can perform the migrations as quickly as possible. The solution must ensure that all Oracle schemas are migrated.
Which migration method and migration mode should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Migration method: [Dropdown]
Script
Synchronization
Migration mode: [Dropdown]
Full
Default
Optimistic
Correct Answers:
Migration method: Synchronization
Migration mode: Default

Why They’re Correct:
Synchronization: Automates schema conversion and data migration in one step, minimizing manual effort and time across multiple PoC runs. The Script method requires generating and manually executing scripts, which slows repeated migrations, while Synchronization uses SSMA’s direct connectivity to the target. This meets the “as quickly as possible” goal by reducing steps.
Default: Balances speed and completeness, converting all Oracle schemas with moderate validation. Full ensures maximum accuracy but is slower, while Optimistic risks missing schema details, violating the “all schemas” requirement. Default is practical for a PoC, where speed matters and minor issues can be addressed later, so it aligns with both goals.
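For repeated PoC runs specifically, SSMA also ships a console application that replays a scripted migration end to end. A hedged sketch, assuming a default install path and placeholder script/variable/connection XML files that you author once per project:

```python
# Sketch: drive repeated PoC migrations through the SSMA for Oracle
# console instead of the GUI, so each pass is one scripted command.
# The install path and XML file paths are placeholders.
import subprocess

SSMA_CONSOLE = (
    r"C:\Program Files\Microsoft SQL Server Migration Assistant for Oracle"
    r"\bin\SSMAforOracleConsole.exe"
)

subprocess.run(
    [
        SSMA_CONSOLE,
        "-s", r"C:\poc\migration_script.xml",    # scripted convert+migrate steps
        "-v", r"C:\poc\variable_values.xml",     # schema names, target DB, etc.
        "-c", r"C:\poc\server_connections.xml",  # Oracle + SQL Server endpoints
    ],
    check=True,
)
```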
83
HOTSPOT -
You have an on-premises SAP ERP Central Component (SAP ECC) deployment on servers that run Windows Server 2016 and have Microsoft SQL Server 2016 installed.
You plan to migrate the deployment to Azure. You need to identify which migration method and migration option to use. The solution must minimize downtime of the SAP ECC deployment.
What should you identify? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer Area
Migration method: [Dropdown]
Classical migration
SAP Database Migration Option (DMO)
SAP Database Migration Option (DMO) with System Move
Migration option: [Dropdown]
Parallel
Parallel export/import
Sequential unload and load
Serial
Final Answer:
Migration method: SAP Database Migration Option (DMO) with System Move
Migration option: Parallel export/import

Why They’re Correct:
DMO with System Move is SUM’s one-step procedure for moving an SAP system to a new data center (here, Azure) while migrating the database, avoiding a separate export/import project. Its parallel export/import option transfers data to the target while the source export is still running, which shortens the overall cutover window and therefore minimizes downtime.
84
You have an SAP landscape that is hosted on VMware. You plan to migrate the SAP landscape to Azure by using Azure Migrate.
You need to configure firewall rules to allow access to the Azure Migrate appliance management app.
To which port should you provide access?
A
3900
B
44368
C
44400
D
50014
Final Answer
B. 44368

Why It’s Correct:
The Azure Migrate appliance hosts its configuration manager (the appliance management web app) locally and exposes it at https://<appliance name or IP>:44368. To administer the appliance remotely, firewall rules must allow inbound access to the appliance on TCP port 44368. Port 443 is used for the appliance’s outbound connections to the Azure Migrate service, not for reaching the management app, and the other listed ports do not apply.
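A quick way to verify the firewall change is a TCP probe of port 44368 from the machine that will browse to the appliance. A minimal sketch, with the appliance host name as a placeholder:

```python
# Sketch: TCP reachability check for the Azure Migrate appliance
# configuration manager. APPLIANCE is a placeholder host name.
import socket

APPLIANCE = "migrate-appliance.contoso.local"  # placeholder appliance host

try:
    with socket.create_connection((APPLIANCE, 44368), timeout=5):
        print(f"Port 44368 reachable; browse to https://{APPLIANCE}:44368")
except OSError as err:
    print(f"Port 44368 blocked or host unreachable: {err}")
```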
85
You have an on-premises SAP NetWeaver deployment that uses Windows Server 2016 and Microsoft SQL Server 2016. You need to migrate the deployment to an Azure virtual machine that runs Windows Server 2016 and has Microsoft SQL Server 2019 installed.
Which migration method should you use?
A
lift-and-shift
B
Azure Migrate
C
classical SAP Database Migration Option (DMO)
D
heterogeneous SAP classical migration
Correct Answer:
C. Classical SAP Database Migration Option (DMO)

Why It’s Correct:
Classical DMO is designed for SAP system migrations that include a database change, here a version upgrade within the same DB platform (SQL Server 2016 to 2019). It combines the move to Azure (a target VM running Windows Server 2016 and SQL Server 2019) and the database upgrade in one process, minimizing steps and downtime.
Steps:
Set up an Azure VM with Windows Server 2016 and SQL Server 2019.
Use DMO to export the SAP system from on-premises, upgrade the database to 2019, and import it into Azure.
This meets the goal of migrating the deployment to the specified target environment.
86
You have an existing SAP landscape on Azure. All SAP virtual machines are on the same virtual network. The SAP application servers, SAP management servers, and SAP database servers are each on their own subnet.
You need to ensure that only the application and management servers can access the subnet to which the database servers connect.
What should you configure?
A
Azure AD service principals
B
Azure Key Vault secrets
C
network security groups (NSGs)
D
Azure Application Gateway and firewall rules
Final Answer:
C. Network security groups (NSGs)

Why NSGs Are Correct:
Purpose: NSGs provide granular control over network traffic through rules based on source/destination IP addresses, ports, and protocols. In this scenario, you apply an NSG to the database servers’ subnet with rules such as:
Allow: inbound traffic from the application servers’ subnet (e.g., 10.0.1.0/24) to the database subnet (e.g., 10.0.3.0/24) on port 1433 (SQL Server).
Allow: inbound traffic from the management servers’ subnet (e.g., 10.0.2.0/24) to the database subnet on port 1433.
Deny: all other inbound traffic to the database subnet. This must be an explicit deny rule, because the default NSG rules would otherwise admit traffic from anywhere in the virtual network.
This ensures only the application and management servers can access the database subnet.
AZ-120 Alignment: The exam emphasizes securing SAP landscapes on Azure with native networking tools. NSGs are the standard Azure mechanism for subnet-level access control, especially in SAP architectures (e.g., hub-and-spoke or multi-subnet VNets).
Fit: NSGs directly address the requirement by isolating the database subnet while allowing access from specific subnets, ensuring security and compliance for SAP workloads.
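A sketch of the same rule set created programmatically, assuming the azure-identity and azure-mgmt-network packages; the subscription ID, resource group, region, and address prefixes are placeholders matching the example prefixes above.

```python
# Sketch: NSG for the database subnet that admits SQL traffic only from
# the application (10.0.1.0/24) and management (10.0.2.0/24) subnets.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(
    DefaultAzureCredential(), subscription_id="<subscription-id>"
)

nsg = {
    "location": "westeurope",
    "security_rules": [
        {
            "name": "allow-app-to-db", "priority": 100, "direction": "Inbound",
            "access": "Allow", "protocol": "Tcp",
            "source_address_prefix": "10.0.1.0/24", "source_port_range": "*",
            "destination_address_prefix": "10.0.3.0/24",
            "destination_port_range": "1433",
        },
        {
            "name": "allow-mgmt-to-db", "priority": 110, "direction": "Inbound",
            "access": "Allow", "protocol": "Tcp",
            "source_address_prefix": "10.0.2.0/24", "source_port_range": "*",
            "destination_address_prefix": "10.0.3.0/24",
            "destination_port_range": "1433",
        },
        {
            # Explicit catch-all deny below the allows; without it, the
            # default AllowVnetInBound rule admits intra-VNet traffic.
            "name": "deny-all-inbound", "priority": 4096, "direction": "Inbound",
            "access": "Deny", "protocol": "*",
            "source_address_prefix": "*", "source_port_range": "*",
            "destination_address_prefix": "*", "destination_port_range": "*",
        },
    ],
}

client.network_security_groups.begin_create_or_update(
    "rg-sap", "nsg-db-subnet", nsg
).result()
# Remaining step (not shown): associate nsg-db-subnet with the database
# subnet so the rules take effect.
```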