Practice Questions - Microsoft AZ-500 Flashcards

(495 cards)

1
Q

View Question

HOTSPOT -
What is the membership of Group1 and Group2? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
(Image shows a screenshot of a multiple choice question with two boxes: Box 1: Group1 (rule: user.displayName -contains "ON") and Box 2: Group2 (rule: user.displayName -match "*on") )

Scenario:
Contoso.com contains the users shown in the following table.
(Image shows a table of users with their display names: User1: Montreal, User2: MONTREAL, User3: London, User4: Ontario)
Contoso.com contains the security groups shown in the following table.
(Image shows a table of groups, Group1 and Group2 with their membership rules)


A


  • Group1: User1, User2, User3, and User4. The rule user.displayName -contains "ON" is a case-insensitive substring check, and all four display names (Montreal, MONTREAL, London, Ontario) contain the substring "on".
  • Group2: User3. The rule user.displayName -match "*on" is intended to match display names ending in "on". As written, however, "*on" is not a valid regular expression, because the * quantifier has no preceding token to repeat; a correct pattern would be ".*on". Interpreting the rule's intent, only "London" (User3) ends in "on".

Explanation of other options and the disagreement:

The discussion highlights a significant disagreement about the validity of the regular expression "*on" in Group2's membership rule. Several commenters correctly point out that * is a quantifier that repeats the preceding token zero or more times; with no preceding token, "*on" is malformed rather than a match for text ending in "on". A working expression would be ".*on", where . matches any character, so .* matches any prefix before "on". Because the question uses "*on", the answer for Group2 is based on the intended reading of that flawed expression. If the expression were corrected to ".*on", the result for Group2 would be the same.
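Both points in the discussion can be checked quickly in Python (the variable names are made up, and Python's re engine stands in for the Azure AD rule engine here, so this only illustrates the regex mechanics, not dynamic-group evaluation itself):

```python
import re

names = ["Montreal", "MONTREAL", "London", "Ontario"]

# -contains "ON" behaves like a case-insensitive substring test:
contains_on = [n for n in names if "on" in n.lower()]
# all four names qualify

# "*on" is not a valid regular expression: the * quantifier has nothing to repeat
invalid = False
try:
    re.compile("*on")
except re.error:
    invalid = True

# the corrected pattern ".*on", matched against the whole string,
# matches only names that end in "on"
ends_with_on = [n for n in names if re.fullmatch(".*on", n, re.IGNORECASE)]
# only "London" qualifies
```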

The discussion shows that the provided answer is based on interpreting the question as written, even if the regex is considered faulty. A more robust question would have used a correct regular expression.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

View Question
You need to ensure that the audit logs from the SQLdb1 Azure SQL database are stored in the WS12345678 Azure Log Analytics workspace. To complete this task, sign in to the Azure portal and modify the Azure resources.

A

To ensure audit logs from the SQLdb1 Azure SQL database are stored in the WS12345678 Azure Log Analytics workspace, follow these steps:

  1. In the Azure portal, navigate to the SQLdb1 database, either by searching for “SQL databases” or by browsing to it in the left navigation pane.
  2. In SQLdb1’s properties, locate the “Security” section and select “Auditing.”
  3. Enable auditing if it’s not already enabled. Select the “Log Analytics” checkbox and click “Configure.”
  4. Choose the WS12345678 Azure Log Analytics workspace from the provided list.
  5. Save the changes.

While some users suggest alternative methods (auditing at the server level or using Diagnostic Settings), the question explicitly focuses on configuring auditing at the database level for SQLdb1 to send logs to the specified Log Analytics workspace. Therefore, using the database-level auditing is the most direct and appropriate method to achieve the goal as stated in the question. The provided steps directly address database-level auditing. There is a consensus among users that the database-level approach is correct for this scenario.

3
Q

View Question You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You generate new SASs. Does this meet the goal?
A. Yes
B. No

A

B. No. Generating new SASs does not revoke access granted by existing SASs. The existing SAS URLs will continue to function, allowing unauthorized access. To revoke access, you need to delete the existing SASs or regenerate the storage account keys used to create them. This will invalidate all SASs created with the old keys.
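The last point — that regenerating the account keys invalidates every SAS signed with them — can be sketched with a toy HMAC check in Python (illustrative only; the real SAS string-to-sign format is far more elaborate, and the key values here are made up):

```python
import base64
import hashlib
import hmac


def sign(string_to_sign: str, key_b64: str) -> str:
    """SAS-style signature: HMAC-SHA256 of the string-to-sign with the account key."""
    key = base64.b64decode(key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()


# hypothetical account keys, base64-encoded like real storage account keys
old_key = base64.b64encode(b"old-account-key").decode()
new_key = base64.b64encode(b"new-account-key").decode()

# a SAS issued while old_key was current carries a signature made with old_key
sas_signature = sign("sr=b&sp=r&se=2025-01-01", old_key)

# the service validates a SAS by recomputing the signature with the *current* key,
# so the token keeps working until the key itself is rotated
valid_before_rotation = sign("sr=b&sp=r&se=2025-01-01", old_key) == sas_signature
valid_after_rotation = sign("sr=b&sp=r&se=2025-01-01", new_key) == sas_signature
```

Issuing additional SASs (the proposed "solution") never touches the key, which is why it leaves the compromised tokens valid.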

The overwhelming consensus in the discussion supports answer B. While the question presents a solution of generating new SASs, the discussion clearly points out that this alone is insufficient to revoke existing access.

4
Q

View Question

You have an Azure subscription that contains a resource group named RG1 and the network security groups (NSGs) shown in the following table.

You create and assign the Azure policy shown in the following exhibit.

What is the flow log status of NSG1 and NSG2 after the Azure policy is assigned?
A. Flow logs will be enabled for NSG1 only.
B. Flow logs will be enabled for NSG2 only.
C. Flow logs will be enabled for NSG1 and NSG2.
D. Flow logs will be disabled for NSG1 and NSG2.


A

D. Flow logs will be disabled for NSG1 and NSG2.

The Azure policy shown in the image has an effect of “Audit”. An audit effect only logs a warning if a resource is non-compliant; it does not automatically change the resource’s configuration. Since the policy is in audit mode and there’s no remediation task to enable flow logs, the flow log status for both NSG1 and NSG2 remains unchanged. The discussion confirms this understanding.
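As a rough illustration (the fields below are hypothetical placeholders, not the exhibit's actual definition), an audit-only policy rule has this shape — a then.effect of Audit only flags non-compliant NSGs, whereas an effect such as DeployIfNotExists combined with a remediation task would be needed to actually create the flow logs:

```json
{
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Network/networkSecurityGroups"
    },
    "then": {
      "effect": "Audit"
    }
  }
}
```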

WHY OTHER OPTIONS ARE INCORRECT:

  • A. Flow logs will be enabled for NSG1 only: Incorrect because the policy targets NSG2 and excludes NSG1. Even if the effect were “DeployIfNotExists”, NSG1 would remain unaffected.
  • B. Flow logs will be enabled for NSG2 only: Incorrect because the policy’s effect is “Audit,” meaning it only monitors compliance and doesn’t enforce changes.
  • C. Flow logs will be enabled for NSG1 and NSG2: Incorrect for the same reason as option B; the audit effect does not automatically enable flow logs.

NOTE: The consensus in the discussion points to answer D. There is no dissenting opinion presented.

5
Q

View Question Your on-premises network contains a Hyper-V virtual machine named VM1. You need to use Azure Arc to onboard VM1 to Microsoft Defender for Cloud. What should you install first?
A. the guest configuration agent
B. the Azure Monitor agent
C. the Log Analytics agent
D. the Azure Connected Machine agent


A

D. the Azure Connected Machine agent

Explanation: The Azure Connected Machine agent is the correct answer because it’s the prerequisite for onboarding an on-premises machine to Azure Arc. Azure Arc enables the management and governance of on-premises machines from Azure. Only after the Azure Connected Machine agent is installed can the machine connect to Azure Arc and subsequently be integrated with Microsoft Defender for Cloud. The other agents are not directly involved in the initial onboarding process to Azure Arc.

Why other options are incorrect:

  • A. the guest configuration agent: While used for managing configurations on VMs, it’s not the primary agent for connecting to Azure Arc.
  • B. the Azure Monitor agent: This agent is for collecting monitoring data, not for initially connecting the machine to Azure Arc.
  • C. the Log Analytics agent: While Log Analytics is used for data collection and is relevant to Defender for Cloud, the machine must first be connected via the Azure Connected Machine agent before Log Analytics data can be sent. One user suggested this as the answer, but the consensus from the discussion strongly favors D.

Note: There is a disagreement among users regarding option C, with one user suggesting that Log Analytics agent is the correct answer. However, the majority of the discussion and the suggested answer strongly point towards option D as the correct choice.

6
Q

View Question You have an Azure subscription that contains a Microsoft Defender External Attack Surface Management (Defender EASM) resource named EASM1. EASM1 has discovery enabled and contains several inventory assets. You need to identify which inventory assets are vulnerable to the most critical web app security risks. Which Defender EASM dashboard should you use?
A. Security Posture
B. OWASP Top 10
C. Attack Surface Summary
D. GDPR Compliance

A

The correct answer is B. OWASP Top 10. The OWASP Top 10 dashboard in Defender EASM specifically focuses on the most critical web application security risks as defined by the Open Web Application Security Project (OWASP). This aligns directly with the question’s requirement to identify assets vulnerable to these risks.

Why other options are incorrect:

  • A. Security Posture: This dashboard provides a general overview of the security posture of your assets, not specifically focusing on web application vulnerabilities.
  • C. Attack Surface Summary: This gives a broad summary of your attack surface, but doesn’t prioritize risks based on the OWASP Top 10.
  • D. GDPR Compliance: This dashboard is unrelated to web application security risks.

Note: The discussion shows a strong consensus that the answer is B, with multiple users selecting and explaining this choice.

7
Q

View Question You have an Azure subscription that uses Microsoft Defender for Cloud. You need to use Defender for Cloud to review regulatory compliance with the Azure CIS 1.4.0 standard. The solution must minimize administrative effort. What should you do first?
A. Assign an Azure policy.
B. Disable one of the Out of the box standards.
C. Manually add the Azure CIS 1.4.0 standard.
D. Add a custom initiative.

A

C. Manually add the Azure CIS 1.4.0 standard.

The Azure CIS 1.4.0 standard is not included by default in Microsoft Defender for Cloud’s regulatory compliance offerings. Before you can assign a policy or take any other action related to this standard, you must first add it. The other options are incorrect because they presume the standard is already present and configured. Assigning a policy (A) or disabling an existing standard (B) are actions that come after adding the desired standard. Creating a custom initiative (D) is unnecessary given that the Azure CIS 1.4.0 standard is readily available.

The discussion shows some disagreement on the exact terminology (“security policy” vs. “Azure policy”) but the core consensus points to the need to add the standard before any assignment or configuration.

8
Q

View Question You have an Azure subscription that contains an Azure key vault named Vault1 and a virtual machine named VM1. VM1 is connected to a virtual network named VNet1. You need to allow access to Vault1 only from VM1. What should you do in the Networking settings of Vault1?
A. From the Firewalls and virtual networks tab, add the IP address of VM1.
B. From the Private endpoint connections tab, create a private endpoint for VM1.
C. From the Firewalls and virtual networks tab, add VNet1.
D. From the Firewalls and virtual networks tab, set Allow trusted Microsoft services to bypass this firewall to Yes for Vault1.

A

The correct answer is A. From the Firewalls and virtual networks tab, add the IP address of VM1.

This is the most precise and secure way to limit access to Vault1 only from VM1. By adding the specific IP address of VM1 to the firewall rules, only that machine will be able to connect.

Option B is incorrect because while private endpoints offer enhanced security, the question specifically asks what to do within Vault1’s networking settings. Creating a private endpoint involves configuring the virtual network and not directly within the Key Vault’s settings. Furthermore, the discussion highlights that there isn’t a VM option directly available during private endpoint creation.

Option C is incorrect because adding VNet1 would grant access to all resources within that virtual network, not just VM1, violating the requirement of only allowing VM1 access.

Option D is incorrect because enabling “Allow trusted Microsoft services to bypass this firewall” is a broad permission that would not restrict access to only VM1 and potentially exposes the Key Vault unnecessarily.

There is some disagreement in the discussion regarding the best approach. Some users suggest using a Virtual Network (option C) if the VM isn’t publicly accessible, but the consensus and the suggested answer favor option A for its specificity and security.

9
Q

View Question You have an Azure subscription. You create a new virtual network named VNet1. You plan to deploy an Azure web app named App1 that will use VNet1 and will be reachable by using private IP addresses. The solution must support inbound and outbound network traffic. What should you do?
A. Create an Azure App Service Hybrid Connection.
B. Create an Azure application gateway.
C. Create an App Service Environment.
D. Configure regional virtual network integration.


A

C. Create an App Service Environment.

An App Service Environment (ASE) provides a fully isolated and scalable environment within a customer’s virtual network. This allows web apps deployed within the ASE to use private IP addresses and have both inbound and outbound network traffic supported. This directly addresses the requirement of using VNet1 and being reachable via private IP addresses while maintaining network connectivity.

Why other options are incorrect:

  • A. Create an Azure App Service Hybrid Connection: Hybrid connections are used to connect to on-premises resources, not to integrate with a VNet for private IP address access.
  • B. Create an Azure application gateway: Application gateways manage traffic to multiple web apps, but they don’t inherently provide the private IP address access within a VNet required by the question.
  • D. Configure regional virtual network integration: Regional VNet integration only handles outbound traffic from the app into VNet1; it does not make App1 reachable inbound on a private IP address, so it cannot satisfy both traffic directions on its own.

Note: The discussion shows some disagreement among users regarding the best answer, with some initially suggesting option D. However, the consensus and provided explanations ultimately favor option C as the most appropriate solution for the described scenario.

10
Q

View Question You have an Azure subscription that contains a user named User1. You need to ensure that User1 can perform the following tasks: • Create groups. • Create access reviews for role-assignable groups. • Assign Azure AD roles to groups. The solution must use the principle of least privilege. Which role should you assign to User1?
A. Groups administrator
B. Authentication administrator
C. Identity Governance Administrator
D. Privileged role administrator

A

The correct answer is D. Privileged role administrator.

The Privileged Role Administrator role allows the user to perform all three tasks: creating groups, creating access reviews for role-assignable groups, and assigning Azure AD roles to groups. This aligns with the principle of least privilege as it grants only the necessary permissions.

Other options are incorrect because:

  • A. Groups administrator: This role primarily manages groups but does not inherently grant the ability to create access reviews or assign Azure AD roles.
  • B. Authentication administrator: This role focuses on managing authentication-related settings and does not provide the necessary permissions for group management or access reviews.
  • C. Identity Governance Administrator: While this role allows management of access reviews, it might not include the permission to create groups or assign all Azure AD roles. The discussion shows conflicting views on this option’s capability to handle all required tasks.

Note: There is some disagreement in the discussion regarding the capabilities of the Identity Governance Administrator role. While some users believe it can handle all the tasks, others argue it lacks some necessary permissions. The consensus leans towards Privileged Role Administrator as the most reliable and comprehensive option for fulfilling all the requirements while adhering to the principle of least privilege.

11
Q

View Question You have an Azure subscription that contains a storage account named storage1 and a virtual machine named VM1. VM1 is connected to a virtual network named VNet1 that contains one subnet and uses Azure DNS. You need to ensure that VM1 connects to storage1 by using a private IP address. The solution must minimize administrative effort. What should you do?
A. For storage1, disable public network access.
B. On VNet1, create a new subnet.
C. For storage1, create a new private endpoint.
D. Create an Azure Private DNS zone.


A

C. For storage1, create a new private endpoint.

A private endpoint gives storage1 a private IP address inside VNet1, so VM1 can reach it without traversing the public internet. This minimizes administrative effort compared with the other options.

Why other options are incorrect:

  • A. For storage1, disable public network access. While disabling public access is a good security practice, it doesn’t guarantee that VM1 will use a private IP address to connect to storage1. VM1 might still use a public IP if other configurations are not properly set.
  • B. On VNet1, create a new subnet. Creating a new subnet is not directly related to enabling private connectivity to storage1. It might be necessary in other network design scenarios, but it’s not the most direct solution in this case.
  • D. Create an Azure Private DNS zone. While a private DNS zone is often used in conjunction with a private endpoint to resolve the private IP address of storage1, it’s not the primary solution. The private endpoint itself is the crucial component that provides the private connection. The discussion even highlights that a private endpoint needs private DNS integration for it to work properly.

Note: The discussion mentions that a private endpoint requires private DNS integration to function correctly. This is an important consideration, although the question itself doesn’t explicitly state that requirement. The optimal answer, therefore, is creating the private endpoint, and then configuring private DNS accordingly.

12
Q

View Question You have an Azure subscription that contains a web app named App1. App1 provides users with product images and videos. Users access App1 by using a URL of HTTPS://app1.contoso.com. You deploy two server pools named Pool1 and Pool2. Pool1 hosts product images. Pool2 hosts product videos. You need to optimize the performance of App1. The solution must meet the following requirements: • Minimize the performance impact of TLS connections on Pool1 and Pool2. • Route user requests to the server pools based on the requested URL path. What should you include in the solution?
A. Azure Bastion
B. Azure Front Door
C. Azure Traffic Manager
D. Azure Application Gateway

A

The correct answer is D. Azure Application Gateway.

Azure Application Gateway offers URL-based routing, allowing you to direct requests to different backend pools (Pool1 for images, Pool2 for videos) based on the URL path. Furthermore, Application Gateway handles TLS termination at the gateway level, minimizing the performance impact of TLS handshakes on the backend servers. This offloads the TLS processing from the backend servers, improving performance.

Why other options are incorrect:

  • A. Azure Bastion: Azure Bastion provides secure access to virtual machines, but it’s not relevant to optimizing web application performance or routing requests based on URL paths.
  • B. Azure Front Door: Azure Front Door supports TLS termination and path-based routing, but it is a global, edge-based service intended for multi-region distribution; for routing to backend pools within a single region, Application Gateway is the better fit for this scenario.
  • C. Azure Traffic Manager: Azure Traffic Manager primarily handles traffic distribution based on health probes and geographic location. It doesn’t offer the fine-grained URL-path-based routing needed to separate image and video requests.

Note: The discussion highlights a preference for Azure Application Gateway over Azure Front Door for this specific scenario due to its superior TLS offloading capabilities and URL-based routing features. However, both services can handle TLS termination; the discussion emphasizes the better suitability of Application Gateway for this use case.

13
Q

View Question You have an Azure subscription named Sub1. In Microsoft Defender for Cloud, you have a workflow automation named WF1. WF1 is configured to send an email message to a user named User1. You need to modify WF1 to send email messages to a distribution group named Alerts. What should you use to modify WF1?
A. Azure Logic Apps Designer
B. Azure Application Insights
C. Azure DevOps
D. Azure Monitor


A

A. Azure Logic Apps Designer

Explanation: The question describes a workflow automation (WF1) within Microsoft Defender for Cloud that needs modification. Workflow automations, by their nature, are best managed and modified through a workflow designer. Azure Logic Apps Designer is a visual tool specifically designed for creating and managing logic apps, which are essentially automated workflows. Therefore, it’s the appropriate tool to modify WF1 to send emails to a different recipient (the distribution group “Alerts”).

Why other options are incorrect:

  • B. Azure Application Insights: This service is for monitoring and analyzing application performance, not for managing workflows.
  • C. Azure DevOps: This is a platform for collaborating on software development projects, not directly related to modifying security-related workflows in Defender for Cloud.
  • D. Azure Monitor: This provides monitoring and logging capabilities across Azure resources, but it does not offer a workflow designer to modify automations.

The discussion shows unanimous agreement on the correct answer.

14
Q

View Question
SIMULATION
-
The developers at your company plan to create a web app named App28681041 and to publish the app to https://www.contoso.com.
You need to perform the following tasks:
• Ensure that App28681041 is registered to Azure AD.
• Generate a password for App28681041.
To complete this task, sign in to the Azure portal.

A

To complete the tasks, you must register the web application (App28681041) in Azure Active Directory (Azure AD) and then generate a client secret (password) for it. This involves using the Azure portal to register the application, providing necessary details like the application name and redirect URI (https://www.contoso.com in this case), and then creating a new client secret within the application’s settings. The image in the original post shows the Azure portal interface where these steps are performed.

The provided links in the discussion (https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app#register-an-application and https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app#add-a-client-secret) corroborate the correct procedure. While another link (https://learn.microsoft.com/en-us/azure/healthcare-apis/register-application) is mentioned, it’s not directly relevant to the core task of registering a generic web application in Azure AD and generating a client secret; it focuses on a more specific Healthcare APIs scenario. Multiple users in the discussion confirm the answer’s correctness.

15
Q

View Question You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive it to a storage account. What should you use to retrieve the diagnostics logs?
A. Azure Cosmos DB explorer
B. Azure Monitor
C. AzCopy
D. Microsoft Defender for Cloud

A

C. AzCopy

AzCopy is a command-line utility provided by Azure Storage that allows you to copy blobs (including log files) to and from a storage account. Since the Azure Storage Analytics logs are archived to a storage account, AzCopy is the appropriate tool to retrieve them.

Why other options are incorrect:

  • A. Azure Cosmos DB explorer: This tool is used to manage Azure Cosmos DB databases, not Azure Storage.
  • B. Azure Monitor: Azure Monitor is a monitoring service that collects and analyzes telemetry data from various Azure resources. While it can integrate with Azure Storage, it doesn’t directly retrieve the log files themselves.
  • D. Microsoft Defender for Cloud: This is a security service providing threat protection and detection, not directly involved in retrieving storage logs.

Note: The discussion indicates that Azure Storage Explorer could also be used to retrieve the logs, although AzCopy is explicitly mentioned as a method in Microsoft documentation. There appears to be some disagreement on the best tool, but AzCopy is presented as a valid and widely accepted solution.

16
Q

View Question You have an Azure subscription that contains a web app named App1. Users must be able to select between a Google identity or a Microsoft identity when authenticating to App1. You need to add Google as an identity provider in Azure AD. Which two pieces of information should you configure? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. a client ID
B. a tenant name
C. the endpoint URL of an application
D. a tenant ID
E. a client secret

A

The correct answer is A and E: a client ID and a client secret.

To add Google as an identity provider in Azure AD, you need to provide Azure AD with credentials that Google provides when you register your application with Google’s identity provider. These credentials allow Azure AD to verify the authenticity of requests coming from your application. The client ID uniquely identifies your application, while the client secret is a security key that must be kept confidential.

Options B, C, and D are incorrect. A tenant name (B) and a tenant ID (D) identify an Azure AD tenant and are not values you supply when adding Google as an identity provider, and the endpoint URL (C) belongs to the application itself, not to the identity provider integration.

Note: The discussion shows general agreement on the correct answer (A and E), although some users provide slightly different explanations of how to locate and use these credentials within the Azure portal.

17
Q

View Question Your company has an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. The company develops an application named App1. App1 is registered in Azure AD. You need to ensure that App1 can access secrets in Azure Key Vault on behalf of the application users. What should you configure?
A. an application permission without admin consent
B. a delegated permission without admin consent
C. a delegated permission that requires admin consent
D. an application permission that requires admin consent

A

The correct answer is C. a delegated permission that requires admin consent.

To allow App1 to access Azure Key Vault secrets on behalf of users, a delegated permission is required. This means the application will act on behalf of a user who has already authenticated. The “on behalf of” phrasing in the question implies that the application cannot grant this access itself; an administrator needs to explicitly grant consent. Therefore, the permission requires admin consent.

Why other options are incorrect:

  • A. an application permission without admin consent: Application permissions grant the application access directly, without the need for a user to be signed in. This doesn’t fit the scenario where the application needs to access secrets on behalf of users.
  • B. a delegated permission without admin consent: A delegated permission is needed, but without admin consent, the application won’t be authorized to access secrets on behalf of users. Individual users would have to consent, which isn’t what the question asks for.
  • D. an application permission that requires admin consent: While admin consent is required, an application permission is not suitable here. The application needs to act on behalf of a specific user.

Note: The discussion shows a unanimous agreement on answer C.

18
Q

View Question

You have an Azure AD tenant that contains the users shown in the following table.

You enable passwordless authentication for the tenant. Which authentication method can each user use for passwordless authentication? To answer, drag the appropriate authentication methods to the correct users. Each authentication method may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


A

The provided images show a drag-and-drop question. The correct answer, according to the “Suggested Answer” image (https://img.examtopics.com/az-500/image624.png), would map the users to the authentication methods as follows:

  • User A (Assigned Windows 10 device): Windows Hello for Business and/or FIDO2 security key. This is consistent with Microsoft documentation that states users with assigned Windows 10 devices can utilize these passwordless methods.
  • User B (No assigned Windows 10 device, registered mobile authenticator): Microsoft Authenticator app. This is because the mobile authenticator app is specifically designed for passwordless authentication on devices without Windows Hello for Business capabilities.
  • User C (No assigned Windows 10 device, no registered mobile authenticator): None. This user lacks the prerequisites for any of the listed passwordless options.

Why other options are incorrect: The question is a drag-and-drop, and the provided suggested answer represents the only correct mapping of users to available passwordless authentication methods based on the given information. Any other mapping would be incorrect based on the conditions for each authentication method.

Note: There is no explicit disagreement or conflicting opinions visible within the provided discussion. The discussion primarily points to relevant Microsoft documentation supporting the suggested answer.

19
Q

View Question

You have an Azure AD tenant and an application named App1. You need to ensure that App1 can use Microsoft Entra Verified ID to verify credentials. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Image shows a drag-and-drop interface with options including: Create an Azure Key Vault instance; Configure the Verified ID service using the manual setup; Register an application in Microsoft Entra ID; Other options are not fully visible in the provided image)

**

A

** The correct sequence of actions is:

  1. Register an application in Microsoft Entra ID: This is the foundational step. Before you can use Microsoft Entra Verified ID, the application needs to be registered within Azure AD.
  2. Create an Azure Key Vault instance: A Key Vault is needed to securely store the cryptographic keys used by Verified ID.
  3. Configure the Verified ID service using the manual setup: This final step activates and configures the Verified ID service within your tenant, linking it to the registered application and Key Vault.

The order is crucial. You cannot configure the Verified ID service without a registered application and a Key Vault to store the keys.
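The first two steps can also be performed from the command line; below is a minimal sketch using the Azure CLI (the names `VerifiedID-App`, `kv-verifiedid`, and `rg-verifiedid` are illustrative placeholders, and the final step is completed in the Microsoft Entra admin center rather than the CLI):

```
# Step 1 (assumed names): register an application in Microsoft Entra ID.
az ad app create --display-name "VerifiedID-App"

# Step 2: create the Key Vault that will hold the Verified ID signing keys.
az keyvault create --name kv-verifiedid --resource-group rg-verifiedid --location eastus

# Step 3: complete the Verified ID manual setup in the Entra admin center,
# pointing the service at the registered application and the Key Vault above.
```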

Why other options are incorrect (if applicable): The provided discussion doesn’t list other specific incorrect options, but any sequence differing from the above would be incorrect because it would violate the dependency order required for successful Microsoft Entra Verified ID configuration. For instance, attempting to configure the Verified ID service before registering the application or creating a Key Vault would fail.

Note: The discussion indicates that a similar question appeared on an exam with different answer choices. While the provided answer is considered correct based on the given information and user consensus, there might be slight variations depending on the specific context of the question’s options.

20
Q

** View Question

You have an Azure subscription that contains an Azure web app named App1. You plan to configure a Conditional Access policy for App1. The solution must meet the following requirements:

• Only allow access to App1 from Windows devices.
• Only allow devices that are marked as compliant to access App1.

Which Conditional Access policy settings should you configure? To answer, drag the appropriate settings to the correct requirements. Each setting may be used once, more than once, or not at all. (Images depicting drag-and-drop options are omitted as they are not directly included in the provided text, but were present in the original post).

**

A

** To meet the requirements, you should configure the following Conditional Access policy settings:

  • Only allow access to App1 from Windows devices: Under Conditions, select Device platforms, and then choose Windows. This ensures that only access requests originating from Windows devices are permitted.
  • Only allow devices that are marked as compliant to access App1: Under Access controls, select Grant, and then select Require device to be marked as compliant. Compliance is enforced as a grant control rather than a condition: only devices that your device management system (Intune, for example) has assessed as compliant are granted access to App1.

The overwhelming consensus in the discussion supports this solution. Multiple users reported success with this approach on their exams.

Why other options are incorrect: The provided discussion doesn’t offer alternative options, but implicitly, any policy that omits either the Windows device-platform condition or the compliant-device grant control would fail one of the two requirements. Without “Windows” selected under device platforms, non-Windows devices could access App1; without the compliant-device grant control, non-compliant devices could.
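For reference, these settings map onto the Microsoft Graph representation of a Conditional Access policy. A hedged sketch of the relevant fragment follows (the display name and surrounding structure are illustrative; this is not a complete policy object):

```json
{
  "displayName": "Require compliant Windows devices for App1",
  "conditions": {
    "platforms": {
      "includePlatforms": [ "windows" ]
    }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": [ "compliantDevice" ]
  }
}
```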

21
Q

View Question

You have an Azure subscription that contains a resource group named RG1 and an Azure policy named Policy1. You need to assign Policy1 to RG1. How should you complete the script? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

(Image shows a drag-and-drop question with blanks for: 1. Get-AzPolicyDefinition 2. New-AzPolicyAssignment and placeholders for “Policy definition” and “Scope” )

A

The correct solution uses Get-AzPolicyDefinition to retrieve the policy definition and New-AzPolicyAssignment to assign it to the resource group. Get-AzPolicyDefinition returns the policy definition object, which is passed to the -PolicyDefinition parameter of New-AzPolicyAssignment, and the -Scope parameter is set to the resource group’s resource ID so that the assignment targets RG1. The suggested-answer image shows this completed script.

The discussion shows general agreement on the correct answer using these two commands. Several users confirm that the suggested answer is correct and cite Microsoft documentation supporting this approach. There is no significant disagreement within the discussion.
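Assembled from the two cmdlets discussed above, the completed script would look roughly like this (the display-name filter and the assignment name are illustrative assumptions, not part of the original exhibit):

```
# Retrieve the policy definition for Policy1 (display-name filter is an assumed approach).
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Policy1' }

# Resolve RG1's resource ID to use as the assignment scope.
$rg = Get-AzResourceGroup -Name 'RG1'

# Assign the policy definition at the resource-group scope.
New-AzPolicyAssignment -Name 'Policy1-assignment' -PolicyDefinition $definition -Scope $rg.ResourceId
```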

22
Q

View Question

You have an Azure subscription that contains the virtual machines shown in the following table.
Which computers will support file integrity monitoring?
A. Computer2 only
B. Computer1 and Computer2 only
C. Computer2 and Computer3 only
D. Computer1, Computer2, and Computer3

A

D. Computer1, Computer2, and Computer3

The image referenced in the original question (missing from this text-based response, but present at the original URL) shows a table of virtual machines and their operating systems. File integrity monitoring can generally be implemented on a range of operating systems, including Windows Server and common Linux distributions (such as those likely represented in the missing image). Therefore, all three computers (Computer1, Computer2, and Computer3) would support file integrity monitoring, assuming the appropriate agents or software are installed. The question provides no details that rule out support for any specific machine.

Why other options are incorrect: Options A, B, and C incorrectly limit the number of computers capable of supporting file integrity monitoring. The question implies that all machines would be capable, subject to proper configuration.

Note: The provided discussion only shows the suggested answer and a user selecting answer D. No conflicting opinions are presented within this limited discussion. A full analysis would require access to the image to verify OS types and thus definitively confirm answer D.

23
Q

** View Question

You have an Azure subscription that contains the virtual machines shown in the following table.

Subnet1 and Subnet2 have a network security group (NSG). The NSG has an outbound rule that has the following configurations:
• Port: Any
• Source: Any
• Priority: 100
• Action: Deny
• Protocol: Any
• Destination: Storage

The subscription contains a storage account named storage1.
You create a private endpoint named Private1 that has the following settings:
• Resource type: Microsoft.Storage/storageAccounts
• Resource: storage1
• Target sub-resource: blob
• Virtual network: VNet1
• Subnet: Subnet1

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

**

A

** The correct answer is No, Yes, No.

  • Statement 1 (From VM2 you can create a container in storage1?): No. VM2 is in Subnet2, which is affected by the NSG’s outbound rule denying all traffic to storage accounts. The private endpoint only applies to Subnet1, so VM2 cannot access storage1.
  • Statement 2 (From VM1 you can upload data to the blob storage of storage1?): Yes. VM1 is in Subnet1, where the private endpoint Private1 is configured. The private endpoint creates a private connection to storage1, bypassing the NSG’s blocking rule. Therefore, VM1 can access and upload data to storage1.
  • Statement 3 (From VM2, you can upload data to the blob storage of storage1?): No. As explained above, the NSG blocks VM2’s access to storage1, and the private endpoint is not accessible from Subnet2.

Why other options are incorrect: The discussion shows disagreement on the correct answer. Some users incorrectly believe the private endpoint provides VNet-wide access, but its usable scope here is limited to the subnet in which it is deployed (Subnet1). Answers suggesting “Yes, Yes, Yes” are therefore incorrect because they fail to account for the NSG’s restrictive outbound rule as it applies to VM2, while any answer marking statement 2 as No fails to account for the private endpoint granting VM1 access.

24
Q

** View Question

On Monday, you configure an email notification in Microsoft Defender for Cloud to notify [emailprotected] about alerts that have a severity level of Low, Medium, or High. On Tuesday, Microsoft Defender for Cloud generates the security alerts shown in the following table.

(Image shows a table of alerts with timestamps, severity (High, Medium, Low), and descriptions. The exact content isn’t provided but is crucial to answering the question.)

How many email notifications will [emailprotected] receive on Tuesday? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

(Image shows a multiple choice answer area. The exact options aren’t provided but are implied by the suggested answer.)

(Image shows the suggested answer: 3 and 7. It indicates 3 emails for medium-severity and 7 for high-severity.)

**

A

** The suggested answer, 3 and 7, is likely correct based on the provided discussion, but the exact count depends on the alert table in the missing image. The discussion notes that Defender for Cloud throttles notifications to approximately 4 high-severity, 2 medium-severity, and 1 low-severity email per day. If the table contained enough alerts of each severity to reach those limits, the delivered counts would be 4 high and 2 medium, even though the suggested-answer image shows 3 and 7. The disagreement in the discussion reflects this uncertainty, compounded by the missing alert data.

Why other options are incorrect (speculative): Without the table data from the missing image, it is impossible to say definitively why other options would be incorrect. However, any answer exceeding the daily limits (4 high, 2 medium, 1 low) for a given severity would be wrong because of the throttling mechanism described in the discussion, while answers well below the limits would suggest a miscount of the alerts in each severity level.

Note: The provided text lacks the crucial alert table, so this answer relies on the discussion and the suggested-answer image, acknowledging the uncertainty that the missing data introduces.
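The throttling behavior described in the discussion can be modeled in a few lines of Python (the per-severity daily caps are the values reported in the thread, not verified documentation figures):

```python
# Assumed per-day email caps from the discussion: 4 high, 2 medium, 1 low.
CAPS = {"High": 4, "Medium": 2, "Low": 1}

def emails_delivered(alerts):
    """Return how many notification emails are actually sent per severity,
    after applying the daily cap to the number of alerts generated."""
    return {sev: min(count, CAPS.get(sev, 0)) for sev, count in alerts.items()}

# Example: 7 high, 3 medium, and 2 low alerts generated on one day.
print(emails_delivered({"High": 7, "Medium": 3, "Low": 2}))
# → {'High': 4, 'Medium': 2, 'Low': 1}
```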

25
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109831-exam-az-500-topic-6-question-3-discussion/)

You have an Azure subscription that contains the resources shown in the following table.
![Image](https://img.examtopics.com/az-500/image654.png)
VNet1 contains the subnets shown in the following table.
![Image](https://img.examtopics.com/az-500/image655.png)
You plan to use the Azure portal to deploy an Azure firewall named AzFW1 to VNet1. Which resource group and subnet can you use to deploy AzFW1? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
![Image](https://img.examtopics.com/az-500/image656.png)

**

A

** Deploy AzFW1 to resource group RG2 and the subnet AzureFirewallSubnet.

Explanation: The discussion contains some conflicting information. Some users initially suggest that the subnet name is not critical and that any empty subnet will work, but later comments and references to Microsoft documentation confirm that the subnet must be named "AzureFirewallSubnet" for the deployment to succeed. The firewall and the subnet must also reside in the same resource group, which, based on the provided images, is RG2.

Why other options are incorrect: Selecting a different resource group or a subnet with a different name results in deployment failure, according to user experience and Microsoft documentation. The initial suggestions that the subnet name is flexible were corrected in subsequent posts.

Note: There is disagreement in the discussion regarding the subnet name; however, the consensus, supported by Microsoft documentation, is that a subnet named "AzureFirewallSubnet" is required, in the same resource group as the firewall.
26
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109832-exam-az-500-topic-6-question-6-discussion/)

You have an Azure subscription that is linked to an Azure AD tenant and contains the virtual machines shown in the following table.
![Image](https://img.examtopics.com/az-500/image658.png)
(VM table: VM name | VNet | Subnet | Private IP)
The subnets of the virtual networks have the service endpoints shown in the following table.
![Image](https://img.examtopics.com/az-500/image659.png)
(Subnet table: VNet | Subnet | Service endpoints)
You create the resources shown in the following table.
![Image](https://img.examtopics.com/az-500/image660.png)
(Resource table: Resource name | Resource type | Location)
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
![Image](https://img.examtopics.com/az-500/image661.png)
(Statements: 1. Connections from VM1 to storage1 always use IP address 10.1.1.5. 2. Connections from VM2 to Vault1 always use IP address 20.224.219.230. 3. Authentication from VM3 to the tenant uses either IP address 10.11.1.5 or 40.122.155.212.)

**

A

** Yes, No, No

  • Statement 1 (Yes): VM1 (in VNET1/Subnet1) connects to storage1. With a Microsoft.Storage service endpoint on Subnet1, the traffic stays on the Azure backbone and storage1 always sees VM1’s private IP address, 10.1.1.5.
  • Statement 2 (No): VM2 (in VNET1/Subnet2) connects to Vault1. Subnet2 does have a service endpoint for Microsoft.KeyVault, so the connection uses the Azure backbone network rather than the public IP address 20.224.219.230.
  • Statement 3 (No): VM3 authenticates to the Azure AD tenant. Although VM3 has two IP addresses (private and public), authenticating to Azure AD over a private IP address is generally not possible because the traffic must reach Azure AD over the public internet; MFA likewise usually relies on factors outside the private network. The statement is poorly worded, but its intent is to test whether only the public IP address would be used for authentication.

Why other options are incorrect: The discussion shows some disagreement, primarily over statement 3. Some users point out that MFA cannot use a private IP. While this is generally true for standard MFA implementations, a very specific configuration (not detailed in the provided information) might allow it, so "No" reflects the most likely scenario. The other options raised in the discussion are inconsistent and provide no concrete reasoning.
27
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/109833-exam-az-500-topic-6-question-7-discussion/)

You have an Azure subscription that contains an instance of Azure Firewall Standard named AzFW1. You need to identify whether you can use the following features with AzFW1:
• TLS inspection
• Threat intelligence
• The network intrusion detection and prevention systems (IDPS)

What can you use?
A. TLS inspection only
B. threat intelligence only
C. TLS inspection and the IDPS only
D. threat intelligence and the IDPS only
E. TLS inspection, threat intelligence, and the IDPS

A

B. threat intelligence only

The correct answer is B because, according to the discussion and the linked Microsoft documentation, Azure Firewall Standard supports only threat intelligence; TLS inspection and the IDPS are features of the Azure Firewall Premium SKU. The overwhelming consensus in the discussion points to B, with multiple users reporting it as correct on actual exams.

Why other options are incorrect:
  • A. TLS inspection only: Incorrect. TLS inspection is a Premium SKU feature, not a Standard SKU feature.
  • C. TLS inspection and the IDPS only: Incorrect. Both TLS inspection and the IDPS are Premium SKU features.
  • D. threat intelligence and the IDPS only: Incorrect. The IDPS is a Premium SKU feature.
  • E. TLS inspection, threat intelligence, and the IDPS: Incorrect. This includes features exclusive to the Premium SKU.

Note: The discussion shows a strong consensus on B; however, this is based on user experience and interpretations of Microsoft documentation, not official exam content.
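As a quick self-check, the SKU feature split described in this answer can be captured as a small lookup table, sketched here in Python (the feature names mirror the answer text and are not an Azure API):

```python
# Feature availability by Azure Firewall SKU, per the answer above.
FIREWALL_FEATURES = {
    "Standard": {"threat intelligence"},
    "Premium": {"threat intelligence", "TLS inspection", "IDPS"},
}

def supports(sku: str, feature: str) -> bool:
    """Return True if the given firewall SKU includes the given feature."""
    return feature in FIREWALL_FEATURES.get(sku, set())

print(supports("Standard", "threat intelligence"))  # → True
print(supports("Standard", "TLS inspection"))       # → False
print(supports("Premium", "IDPS"))                  # → True
```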
28
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109835-exam-az-500-topic-6-question-10-discussion/)

You have an Azure subscription that is connected to an on-premises datacenter and contains the resources shown in the following table.
![Image](https://img.examtopics.com/az-500/image665.png)
(Image content not provided, but assumed to show details of VNet1 and VNet2, storage accounts, and a Key Vault.)
You need to configure virtual network service endpoints for VNet1 and VNet2. The solution must meet the following requirements:
• The virtual machines that connect to the subnet of VNet1 must access storage1, storage2, and Azure AD by using the Microsoft backbone network.
• The virtual machines that connect to the subnet of VNet2 must access storage1 and KeyVault1 by using the Microsoft backbone network.
• The virtual machines must use the Microsoft backbone network to communicate between VNet1 and VNet2.

How many service endpoints should you configure for each virtual network?

**

A

** VNet1: one service endpoint, for Microsoft.Storage. VNet2: two service endpoints, one for Microsoft.Storage and one for Microsoft.KeyVault.

Explanation: The question requires service endpoints so that traffic remains on the Microsoft backbone network for security and performance. Traffic from Azure virtual machines to Azure AD already stays on the Microsoft backbone, so no separate endpoint is needed for Azure AD. A single Microsoft.Storage endpoint on VNet1 therefore covers storage1 and storage2. VNet2 needs separate endpoints for Microsoft.Storage (for storage1) and Microsoft.KeyVault (for KeyVault1). Traffic between the two virtual networks also remains on the Microsoft backbone, so the VNet-to-VNet requirement needs no additional endpoints.

Why other options are incorrect: The discussion suggests a possible alternative (VNet1: one endpoint for Microsoft.Storage only; VNet2: two endpoints, for Microsoft.Storage and Microsoft.KeyVault), which matches the answer above and reflects the minimal configuration. Adding extra service endpoints would not be incorrect, but it is not the most efficient approach.

Note: The exact details of the image are missing. The answer assumes the image shows the information needed to support the given requirements; the lack of image content is a potential source of error.
29
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/109837-exam-az-500-topic-7-question-1-discussion/)

You have an Azure subscription that contains the resources shown in the following table. (Image depicts a table with storage account name sa1, container name container1, and file share name share1.) You create a shared access token as shown in the following exhibit. (Image depicts a shared access token created by using Key1.) Which resources can you access by using the shared access token and Key1? To answer, select the appropriate options in the answer area.

A

Only container1 is accessible by using the shared access token.

The shared access signature (SAS) token, as explained in the discussion, is signed with the storage account’s key (Key1). Although both container1 and share1 reside within the storage account (sa1), a SAS token created specifically for a container grants access only to that container. The file share is a separate resource and is not covered by the permissions of the container-scoped SAS.

Why other options are incorrect: Key1 itself has access to both container1 and share1 within the storage account, but the SAS token was generated specifically for container1, limiting access to that container only. Selecting share1 as accessible with this SAS token would therefore be incorrect. The discussion participants agree on this point.
30
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109839-exam-az-500-topic-7-question-3-discussion/)

You have an Azure subscription that contains the resources shown in the following table.
![Image](https://img.examtopics.com/az-500/image672.png)

SQL1 has the following configurations:
• Auditing: Enabled
• Audit log destination: storage1, Workspace1

DB1 has the following configurations:
• Auditing: Enabled
• Audit log destination: storage2

DB2 has auditing disabled.

Where are the audit logs for DB1 and DB2 stored? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
![Image](https://img.examtopics.com/az-500/image673.png)

**

A

**
  • DB1: storage1, storage2, and Workspace1
  • DB2: storage1 and Workspace1

Explanation: Audit logs are written to the locations specified by both the server-level and the database-level auditing configurations. Because auditing is enabled at both the server (SQL1) and database (DB1) levels, the logs for DB1 go to all specified locations: storage1 and Workspace1 (from SQL1’s server-level configuration) plus storage2 (from DB1’s database-level configuration). For DB2, auditing is disabled at the database level, but the server-level settings still apply, so DB2’s logs are written to storage1 and Workspace1. The discussion shows some disagreement about how server- and database-level auditing interact, but the most upvoted interpretation is that the two configurations are additive, not overriding.

Why other options are incorrect: Any answer that omits one of the three locations for DB1 (storage1, storage2, Workspace1) or one of the two for DB2 (storage1, Workspace1) contradicts the accepted additive interpretation. The suggested-answer image confirms this.
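The additive interpretation above can be expressed as a tiny model: a database’s effective audit-log destinations are the union of the server-level destinations (when server auditing is enabled) and its own database-level destinations (when database auditing is enabled). A minimal Python sketch:

```python
def effective_destinations(server_enabled, server_dests, db_enabled, db_dests):
    """Combine server- and database-level audit destinations additively."""
    dests = set()
    if server_enabled:
        dests |= set(server_dests)
    if db_enabled:
        dests |= set(db_dests)
    return dests

# DB1: server auditing (storage1, Workspace1) plus database auditing (storage2).
print(sorted(effective_destinations(True, ["storage1", "Workspace1"], True, ["storage2"])))
# → ['Workspace1', 'storage1', 'storage2']

# DB2: database auditing disabled, so only the server-level destinations apply.
print(sorted(effective_destinations(True, ["storage1", "Workspace1"], False, [])))
# → ['Workspace1', 'storage1']
```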
31
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109875-exam-az-500-topic-7-question-2-discussion/)

You have an Azure subscription that contains an Azure SQL server named SQL1. SQL1 contains an Azure SQL database named DB1. You need to use Microsoft Defender for Cloud to complete a vulnerability assessment for DB1. What should you do first?
A. From Advanced Threat Protection types, select SQL injection vulnerability.
B. Configure the Send scan report to setting.
C. Set Periodic recurring scans to ON.
D. Enable the Microsoft Defender for SQL plan.

**

A

** D. Enable the Microsoft Defender for SQL plan.

Before you can perform a vulnerability assessment of an Azure SQL database (DB1) with Microsoft Defender for Cloud, you must first enable the Microsoft Defender for SQL plan in the subscription; the vulnerability assessment features are inactive until then. Options A, B, and C all depend on the plan being enabled and are subsequent steps in the process.

Why other options are incorrect:
  • A. From Advanced Threat Protection types, select SQL injection vulnerability: This is a step after enabling the plan; the threat protection types are only available once the plan is active.
  • B. Configure the Send scan report to setting: A configuration option available only after the plan is enabled and the assessment is running.
  • C. Set Periodic recurring scans to ON: A scheduling option that likewise requires the plan to be enabled and the vulnerability assessment configured.

Note: The discussion shows strong consensus on option D, with one user adding that option A would be the next step after enabling the plan; either way, enabling the plan is the required first step.
32
Q

** [View Question](https://www.examtopics.com/discussions/databricks/view/109883-exam-az-500-topic-4-question-102-discussion/)

You have an Azure subscription and the computers shown in the following table.

| Computer Name | Type                      | OS             |
|---------------|---------------------------|----------------|
| VM1           | Virtual Machine           | Windows Server |
| VM2           | Virtual Machine           | Windows Server |
| Server1       | Virtual Machine           | Windows Server |
| VMSS1_0       | Virtual Machine Scale Set | Windows Server |

You need to perform a vulnerability scan of the computers by using Microsoft Defender for Cloud. Which computers can you scan?
A. VM1 only
B. VM1 and VM2 only
C. Server1 and VMSS1_0 only
D. VM1, VM2, and Server1 only
E. VM1, VM2, Server1, and VMSS1_0

**

A

** E. VM1, VM2, Server1, and VMSS1_0

Microsoft Defender for Cloud can scan all the listed machines, including instances in a Virtual Machine Scale Set (VMSS). The discussion confirms that VMSS instances are covered by Defender for Cloud’s vulnerability management capabilities, using either agent-based or agentless scanning depending on the license.

Why other options are incorrect:
  • A, B, C, and D: Each excludes at least one scannable machine. The discussion and the suggested answer state explicitly that all four computers, including the VMSS instance, can be scanned by Microsoft Defender for Cloud.

Note: The discussion initially shows some disagreement, with some users believing only a subset of the machines could be scanned, but the final consensus, supported by documentation links, is that all four are scannable.
33
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/109885-exam-az-500-topic-4-question-97-discussion/)

You have an Azure subscription named Sub1 that uses Microsoft Defender for Cloud. You have the management group hierarchy shown in the following exhibit.
![Image](https://img.examtopics.com/az-500/image640.png)
You create the definitions shown in the following table.
![Image](https://img.examtopics.com/az-500/image641.png)
You need to use Defender for Cloud to add a security policy. Which definitions can you use as a security policy?
A. Policy1 only
B. Policy1 and Initiative1 only
C. Initiative1 and Initiative2 only
D. Initiative1, Initiative2, and Initiative3 only
E. Policy1, Initiative1, Initiative2, and Initiative3

A

D. Initiative1, Initiative2, and Initiative3 only

Explanation: Microsoft Defender for Cloud applies security initiatives to subscriptions, not individual policies. The images show that Initiative1 and Initiative2 are assigned to management groups above Sub1 and are therefore inherited by Sub1, while Initiative3 is assigned directly to MG2, a parent of Sub1, and is inherited as well. Policy1 is not a valid option because Defender for Cloud works with initiatives. MG1 is not relevant because it contains no subscriptions.

Why other options are incorrect:
  • A. Policy1 only: Defender for Cloud uses security initiatives, not individual policies, to apply security settings at scale.
  • B. Policy1 and Initiative1 only: Incorrect for the same reason as A, and because Initiative2 and Initiative3 are also available to Sub1.
  • C. Initiative1 and Initiative2 only: Initiative3 also applies to Sub1 through inheritance.
  • E. Policy1, Initiative1, Initiative2, and Initiative3: Policy1 cannot be used as a security policy in Defender for Cloud.

Note: The discussion shows some disagreement on the correct answer. While the suggested answer is D, some users initially chose C, overlooking the inheritance of Initiative3. The explanation above clarifies the inheritance factor that makes D the most accurate choice.
34
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/109886-exam-az-500-topic-4-question-103-discussion/)

You have an Azure subscription that uses Microsoft Defender for Cloud. The subscription contains the Azure Policy definitions shown in the following table.
![Image](https://img.examtopics.com/az-500/image643.png)
Which definitions can be assigned as a security policy in Defender for Cloud?
A. Policy1 and Policy2 only
B. Initiative1 and Initiative2 only
C. Policy1 and Initiative1 only
D. Policy2 and Initiative2 only
E. Policy1, Policy2, Initiative1, and Initiative2

A

B. Initiative1 and Initiative2 only

Explanation: Based on the provided text and the most upvoted responses, only security initiatives can be assigned as security policies in Microsoft Defender for Cloud; individual policies are components within initiatives. Although some users suggest that individual policies can be assigned directly, the consensus supports assigning only initiatives in this context.

Why other options are incorrect:
  • A. Policy1 and Policy2 only: Individual policies cannot be assigned directly as security policies in Defender for Cloud; they are grouped within initiatives.
  • C. Policy1 and Initiative1 only: Policy1 is not directly assignable, for the same reason as A.
  • D. Policy2 and Initiative2 only: Policy2 is not directly assignable, for the same reason as A.
  • E. Policy1, Policy2, Initiative1, and Initiative2: The individual policies (Policy1 and Policy2) cannot be assigned directly.

Note: Users disagree on the correct answer. Some believe both policies and initiatives can be assigned, but the most upvoted answer, supported by user billo79152718’s exam experience, indicates that only initiatives can be assigned as security policies in Defender for Cloud.
35
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/109896-exam-az-500-topic-2-question-102-discussion/)

HOTSPOT. You have an Azure subscription that contains a user named Admin1 and an Azure key vault named Vault1. You plan to implement Microsoft Entra Verified ID. You need to create an access policy to ensure that Admin1 has permissions to Vault1 that support the implementation of the Verified ID service. The solution must use the principle of least privilege. Which three key permissions should you select? To answer, select the appropriate permissions in the answer area.
NOTE: Each correct selection is worth one point.
![Image](https://img.examtopics.com/az-500/image631.png) (Image not provided; its content cannot be included.)
![Image](https://img.examtopics.com/az-500/image632.png) (Image not provided; its content cannot be included.)

A

The question asks for three key permissions (from a list shown in the missing images) that grant Admin1 the Vault1 access needed to implement Microsoft Entra Verified ID while following the principle of least privilege. The suggested answer, also contained in the missing images, identifies the three correct permissions; without the images, the specific permissions cannot be listed here. A complete explanation would justify why those three permissions are necessary for Verified ID and why any additional permissions would be excessive, violating least privilege.

Note: This answer is incomplete because the images showing the permission options and the suggested answer are missing. The discussion only confirms that the suggested answer is correct without naming the specific permissions.
36
[View Question](https://www.examtopics.com/discussions/databricks/view/109897-exam-az-500-topic-4-question-95-discussion/) You have an Azure subscription named Sub1 that contains the resource groups shown in the following table. ![Image](https://img.examtopics.com/az-500/image635.png) *(Image contains a table showing Resource Group names and locations)* You create the Azure Policy definition shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image636.png) *(Image shows a policy definition with a condition: "not contains('Microsoft.Storage/storageAccounts', 'location', 'WestUS')")* You assign the policy to Sub1. You plan to create the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image637.png) *(Image shows a table of planned resources: Resource Group, Location, Resource Type)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image638.png) *(Image shows a table with statements: "Deployment of storage account in RG1 will be allowed", "Deployment of storage account in RG2 will be allowed")*
The correct answers are:

* **Deployment of storage account in RG1 will be allowed: Yes**
* **Deployment of storage account in RG2 will be allowed: No**

**Explanation:** The Azure Policy's condition `not contains('Microsoft.Storage/storageAccounts', 'location', 'WestUS')` checks whether the location of a storage account does *not* contain "WestUS". The comparison is case-insensitive, as noted in the provided documentation link.

* **RG1 (EastUS):** A storage account deployed in RG1 (EastUS) will pass the policy because its location ("EastUS") does not contain "WestUS".
* **RG2 (WestUS):** A storage account deployed in RG2 (WestUS) will fail the policy because its location ("WestUS") contains "WestUS". The policy will deny the deployment.

**Why other options are incorrect:** The question is a true/false assessment of the policy's effect on the described deployments. Incorrect answers would stem from misinterpreting the case-insensitivity of the `contains` function within the Azure Policy condition or misunderstanding the policy's logic.

**Note:** The provided discussion highlights the case-insensitivity of the `contains` function in Azure Policy, which is crucial for accurately evaluating the policy's effect.
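To make the evaluation concrete, the policy's case-insensitive `contains` check can be sketched in a few lines of Python. This is a stand-in that mimics Azure Policy's string semantics, not the real policy engine:

```python
def contains(value: str, substring: str) -> bool:
    # Azure Policy string comparisons such as contains() are
    # case-insensitive; lowercasing both sides mimics that behavior.
    return substring.lower() in value.lower()

def deployment_allowed(location: str) -> bool:
    # The policy denies when its condition is violated, so a storage
    # account deploys only while "not contains(location, 'WestUS')" holds.
    return not contains(location, "WestUS")

print(deployment_allowed("EastUS"))  # True: RG1 deployment is allowed
print(deployment_allowed("WestUS"))  # False: RG2 deployment is denied
```

Note that even a mixed-case value such as `"westus2"` would be denied, which is exactly the case-insensitivity point raised in the discussion.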
37
[View Question](https://www.examtopics.com/discussions/databricks/view/109923-exam-az-500-topic-7-question-6-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image677.png) You have an Azure Cosmos DB account named cosmos1 configured as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image678.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image679.png)
N, Y, Y

* **Statement 1 (VM1): No.** VM1 is in VNet1/Subnet1, which has a service endpoint enabled for Azure Cosmos DB. This means VM1 connects to Cosmos DB via the Microsoft backbone network, not the public internet. Therefore, the statement that it uses its public IP address is false.
* **Statement 2 (VM2): Yes.** VM2 is in VNet1/Subnet2, which does *not* have a service endpoint enabled for Azure Cosmos DB. Consequently, VM2 uses its public IP address to connect to Cosmos DB.
* **Statement 3 (VM3): Yes.** VM3 is in VNet2, which also does *not* have a service endpoint enabled for Azure Cosmos DB. Therefore, VM3 also uses its public IP address to connect to Cosmos DB.

The provided images show the configurations of the VMs and the Cosmos DB account, confirming the analysis above. The discussion supports this answer, with Alexbz's highly-voted response providing the same solution.
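The per-VM reasoning reduces to a small lookup. The subnet-to-endpoint mapping below is an assumption reconstructed from the answer text (the exhibit images are not available), and VM3's subnet name is invented for illustration:

```python
# Assumed from the answer: only VNet1/Subnet1 has an Azure Cosmos DB
# service endpoint enabled.
service_endpoint_enabled = {
    ("VNet1", "Subnet1"): True,   # VM1
    ("VNet1", "Subnet2"): False,  # VM2
    ("VNet2", "Subnet1"): False,  # VM3 (subnet name assumed)
}

def source_address(vnet: str, subnet: str) -> str:
    # With a service endpoint, traffic to Cosmos DB stays on the
    # Microsoft backbone and presents a private source address;
    # without one, the VM's public IP address is used.
    if service_endpoint_enabled.get((vnet, subnet), False):
        return "private (Microsoft backbone)"
    return "public IP"

for vm, (vnet, subnet) in {"VM1": ("VNet1", "Subnet1"),
                           "VM2": ("VNet1", "Subnet2"),
                           "VM3": ("VNet2", "Subnet1")}.items():
    print(vm, source_address(vnet, subnet))
```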
38
[View Question](https://www.examtopics.com/discussions/databricks/view/109983-exam-az-500-topic-6-question-9-discussion/) You need to ensure that only devices connected to a 131.107.0.0/16 subnet can access data in the rg1lod28681041 Azure Storage account. To complete this task, sign in to the Azure portal.
To restrict access to the rg1lod28681041 Azure Storage account to only devices on the 131.107.0.0/16 subnet, follow these steps:

1. **Navigate to the Storage Account:** In the Azure portal, locate and select the rg1lod28681041 storage account.
2. **Access Firewalls and Virtual Networks:** In the left-hand navigation menu, under "Security + networking," click on "Firewalls and virtual networks."
3. **Configure Network Restrictions:** Under "Allow access from," select the "Selected networks" option.
4. **Add Virtual Network:** Click "Add existing virtual network" and select the virtual network containing the 131.107.0.0/16 subnet.
5. **Select Subnet:** In the "Subnet" section, choose the 131.107.0.0/16 subnet and click "Add."
6. **Save Changes:** Click "Save" to apply the network restrictions.

This approach uses Azure Storage's built-in network security features to limit access based on virtual network subnets. Only clients within the specified subnet will be able to access the storage account.

Note: There is a discussion point questioning the realistic use of a public IP range (131.107.0.0/16) for a virtual network. While the provided solution technically works, the practicality of the scenario is debated.
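To see what the /16 restriction covers in practice, Python's standard `ipaddress` module can test membership in the range. The client addresses below are arbitrary examples, not values from the question:

```python
import ipaddress

# The range the storage account restriction is scoped to.
ALLOWED = ipaddress.ip_network("131.107.0.0/16")

def client_in_allowed_range(ip: str) -> bool:
    """Return True when the client address falls inside 131.107.0.0/16."""
    return ipaddress.ip_address(ip) in ALLOWED

print(client_in_allowed_range("131.107.42.1"))  # True: inside the /16
print(client_in_allowed_range("131.108.0.1"))   # False: outside the /16
```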
39
[View Question](https://www.examtopics.com/discussions/databricks/view/110007-exam-az-500-topic-2-question-94-discussion/) You have an Azure subscription that is linked to an Azure AD tenant and contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image621.png)

Which resources can be assigned the Contributor role for VM1?

A. Managed1 and App1 only
B. Group1 and Managed1 only
C. Group1, Managed1, and VM2 only
D. Group1, Managed1, VM1, and App1 only
A. Managed1 and App1 only

The provided image shows that VM1 is a virtual machine. Azure role-based access control (RBAC) allows roles to be assigned to users, groups, service principals, and managed identities. Per the suggested answer, only the managed identity (Managed1) and the service principal (App1) can be assigned the Contributor role for VM1 in this scenario. Group1, being a group, is not directly assigned the role on VM1 here; it could instead be assigned a role at the scope of a resource group that encompasses VM1. VM2 is a separate VM and cannot grant the Contributor role to VM1.

Other options:

* **B. Group1 and Managed1 only:** Incorrect because Group1, being a group, is not directly assigned the Contributor role to a specific VM in this scenario.
* **C. Group1, Managed1, and VM2 only:** Incorrect for the same reason as B, and because VM2 is a different VM and cannot directly control VM1's access.
* **D. Group1, Managed1, VM1, and App1 only:** Incorrect because VM1 cannot be assigned the Contributor role to itself.

Note: The discussion shows disagreement on the correct answer. The provided answer is based on the generally accepted understanding of Azure RBAC and the capabilities of different Azure AD principals. The discussion highlights a need for a clearer understanding of context in such questions.
40
[View Question](https://www.examtopics.com/discussions/databricks/view/110053-exam-az-500-topic-7-question-4-discussion/) You need to prevent HTTP connections to the rg1lod28681041n1 Azure Storage account. To complete this task, sign in to the Azure portal. What setting should be configured?
The setting that needs to be configured is "Secure transfer required" and it should be enabled. This setting forces all communication with the Azure Storage account to use HTTPS, effectively preventing HTTP connections. The image in the original post shows a screen within the Azure portal's storage account settings, indicating the configuration of this specific option. Both jorgesoma and Wassanto in the discussion confirmed this as the correct solution. There are no other options provided in the original content to evaluate as incorrect. The question focuses on a single solution to a specific problem.
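A toy model of the setting's effect can make the behavior explicit. This is purely illustrative; the real enforcement happens server-side in Azure Storage, and the URLs below are example endpoints:

```python
from urllib.parse import urlparse

def request_permitted(url: str, secure_transfer_required: bool = True) -> bool:
    # With "Secure transfer required" enabled, only HTTPS requests to the
    # storage endpoint succeed; plain HTTP connections are rejected.
    return urlparse(url).scheme == "https" or not secure_transfer_required

print(request_permitted("http://rg1lod28681041n1.blob.core.windows.net/c/b"))   # False
print(request_permitted("https://rg1lod28681041n1.blob.core.windows.net/c/b"))  # True
```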
41
[View Question](https://www.examtopics.com/discussions/databricks/view/11020-exam-az-500-topic-2-question-8-discussion/) DRAG DROP - You need to configure an access review. The review will be assigned to a new collection of reviews and reviewed by resource owners. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006400001.jpg) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006500001.jpg) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006600001.jpg)
The correct sequence is:

1. **Create an access review program:** This creates a container to hold your access reviews. The question specifies a "new collection of reviews," indicating the need for a new program.
2. **Create an access review control:** This defines the scope, settings, and specifics of the review itself (e.g., users, resources, review period).
3. **Set Reviewers to Group owners:** This fulfills the requirement that the review be conducted by resource owners. If the resources are owned by groups, setting the reviewers to group owners is appropriate.

Other options (implicitly): The order of these three steps is crucial. You cannot create a control without a program to house it, and you cannot assign reviewers without a control to assign them to. Any other sequence would be incorrect because of the dependencies between these actions.

Note: The discussion shows unanimous agreement on this answer and its reasoning.
42
[View Question](https://www.examtopics.com/discussions/databricks/view/110419-exam-az-500-topic-2-question-100-discussion/) You need to ensure that a user named user2-28681041 can manage the properties of the virtual machines in the RG1lod28681041 resource group. The solution must use the principle of least privilege. To complete this task, sign in to the Azure portal.
The correct answer is **Virtual Machine Contributor**. This role provides the necessary permissions to manage virtual machine properties without granting excessive privileges. The principle of least privilege dictates assigning only the necessary permissions, and "Virtual Machine Contributor" fulfills this requirement.

**Why other options are incorrect:** The provided discussion does not offer alternative options; however, roles with broader permissions (such as Owner) would violate the principle of least privilege, while more narrowly defined roles might not grant sufficient access to manage all virtual machine properties.

**Note:** The discussion shows a consensus among users that "Virtual Machine Contributor" is the correct answer.
43
[View Question](https://www.examtopics.com/discussions/databricks/view/110421-exam-az-500-topic-2-question-101-discussion/) You need to create a new Azure AD directory named 28681041.onmicrosoft.com. The new directory must contain a new user named [email protected] To complete this task, sign in to the Azure portal.
The image linked in the discussion shows the steps to create a new Azure AD directory and a new user within that directory. The steps involve navigating to Azure Active Directory, selecting "New" and then "Directory," and entering the required details for 28681041.onmicrosoft.com (including potentially accepting the terms of service). After creating the directory, you would navigate to Users and create a new user, filling in all the necessary details so that the user has the email address [email protected]. The process involves several steps in the Azure portal, as depicted in the image, and is not a single action. There is no disagreement in the provided discussion, only confirmation that the suggested answer (implied by the image) is correct.
44
[View Question](https://www.examtopics.com/discussions/databricks/view/111076-exam-az-500-topic-6-question-8-discussion/) You need to configure Azure to allow RDP connections from the Internet to a virtual machine named VM1. The solution must minimize the attack surface of VM1. To complete this task, sign in to the Azure portal. (Use the following login credentials as needed: Azure Username: [email protected] Azure Password: Gp0Ae4@!Dg) If the Azure portal does not load successfully in the browser, press CTRL-K to reload the portal in a new browser tab. The following information is for technical support purposes only: Lab Instance: 28681041
The best solution to allow RDP connections from the internet to VM1 while minimizing its attack surface is to use Azure Bastion. Azure Bastion provides secure RDP access through the Azure internal network without directly exposing VM1 to the public internet. This inherently reduces the attack surface compared to opening RDP port 3389 directly on the VM's network security group (NSG). While Just-in-Time (JIT) access can further restrict access, Bastion offers a more integrated and streamlined secure access method. Simply allowing inbound RDP traffic via an NSG rule, as suggested by some, directly exposes the VM to potential attacks, negating the requirement to minimize the attack surface.

**Why other options are incorrect:**

* **Opening RDP port 3389 directly via NSG:** This directly exposes VM1 to the internet, increasing the attack surface. This is the least secure option.
* **Just-in-Time (JIT) access alone:** While JIT reduces the window of vulnerability by only allowing RDP connections during specific times, it still requires opening port 3389, leaving the VM vulnerable during those periods. While a valid security enhancement, it doesn't minimize the attack surface as effectively as Azure Bastion.

**Note:** There is disagreement among users regarding the optimal solution. Some suggest using NSGs, while others favor JIT or Azure Bastion. The provided answer reflects the most secure approach based on minimizing the attack surface.
45
[View Question](https://www.examtopics.com/discussions/databricks/view/112323-exam-az-500-topic-4-question-105-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You have accounts for the following cloud services:

• Alibaba Cloud
• Amazon Web Services (AWS)
• Google Cloud Platform (GCP)

What can you add to Defender for Cloud?

A. AWS only
B. Alibaba Cloud and AWS only
C. Alibaba Cloud and GCP only
D. AWS and GCP only
E. Alibaba Cloud, AWS, and GCP
The correct answer is **D. AWS and GCP only**. Microsoft Defender for Cloud currently integrates with AWS and GCP for multi-cloud security monitoring. Alibaba Cloud is not currently supported for this integration, as indicated by the overwhelming consensus in the discussion and supported by the provided Microsoft Learn link.

Why other options are incorrect:

* **A. AWS only:** Incorrect because Defender for Cloud also supports integration with GCP.
* **B. Alibaba Cloud and AWS only:** Incorrect because Alibaba Cloud is not currently supported by Microsoft Defender for Cloud.
* **C. Alibaba Cloud and GCP only:** Incorrect because Alibaba Cloud is not currently supported by Microsoft Defender for Cloud.
* **E. Alibaba Cloud, AWS, and GCP:** Incorrect because Alibaba Cloud is not currently supported by Microsoft Defender for Cloud.

Note: The discussion overwhelmingly supports answer D. While there is no explicit mention of *why* Alibaba Cloud isn't included, the consensus and provided link clearly indicate this to be the case at the time the question was asked. Future updates to Microsoft Defender for Cloud *could* change this.
46
[View Question](https://www.examtopics.com/discussions/databricks/view/112325-exam-az-500-topic-4-question-106-discussion/) You have an Azure subscription. You plan to map an online infrastructure and perform vulnerability scanning for the following:

• ASNs
• Hostnames
• IP addresses
• SSL certificates

What should you use?

A. Microsoft Defender for Cloud
B. Microsoft Defender External Attack Surface Management (Defender EASM)
C. Microsoft Defender for Identity
D. Microsoft Defender for Endpoint
The correct answer is B. Microsoft Defender External Attack Surface Management (Defender EASM). Defender EASM is specifically designed to discover and inventory assets like ASNs, hostnames, IP addresses, and SSL certificates, enabling vulnerability scanning of your external attack surface. The discussion overwhelmingly supports this answer, with numerous users reporting it as correct and citing Microsoft documentation.

Why other options are incorrect:

* **A. Microsoft Defender for Cloud:** While Defender for Cloud offers security features, it's not primarily focused on mapping and scanning *external* assets like ASNs and SSL certificates. It's more focused on the security posture of resources *within* your Azure environment.
* **C. Microsoft Defender for Identity:** This service focuses on the security of your on-premises and cloud-based identities, not on external infrastructure mapping and vulnerability scanning.
* **D. Microsoft Defender for Endpoint:** This is designed to protect endpoints (computers, servers) within your organization's network, not the broader external attack surface.

Note: The discussion shows unanimous agreement on the correct answer.
47
[View Question](https://www.examtopics.com/discussions/databricks/view/112328-exam-az-500-topic-4-question-107-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You plan to use the Secure Score Over Time workbook. You need to configure the Continuous export settings for the Defender for Cloud data. Which two settings should you configure? NOTE: Each correct selection is worth one point. (Image shows a screenshot with options for "Export target" and "Export frequency".)
To use the Secure Score Over Time workbook, you must configure Continuous Export in Microsoft Defender for Cloud to export data to a Log Analytics workspace. The two settings to configure are:

1. **Export target:** Log Analytics workspace. This is where the Secure Score data will be stored and made available for the workbook.
2. **Export frequency:** Streaming and Snapshots. This ensures that you receive both real-time updates ("Streaming") and periodic snapshots of the data ("Snapshots"), providing a comprehensive view of your Secure Score over time.

**Why other options are incorrect:** The question specifically asks for the *two* settings required to configure continuous export for the Secure Score Over Time workbook. While other settings might exist within the Continuous Export configuration, only these two are explicitly necessary for the workbook's functionality based on the provided information. The provided discussion and suggested answers strongly imply that these are the only two critical selections.

**Note:** The discussion shows some minor disagreement in the details of the configuration steps. However, all contributors agree on the necessity of exporting to a Log Analytics workspace and using both streaming and snapshot frequencies.
48
[View Question](https://www.examtopics.com/discussions/databricks/view/112329-exam-az-500-topic-4-question-108-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive them to a storage account. What should you use to retrieve the diagnostics logs?

A. Azure Cosmos DB explorer
B. SQL query editor in Azure
C. AzCopy
D. File Explorer in Windows
C. AzCopy

AzCopy is a command-line utility designed to copy data to and from Azure Blob Storage, which is where Azure Storage Analytics logs are typically stored. Therefore, it's the appropriate tool to retrieve these logs.

**Why other options are incorrect:**

* **A. Azure Cosmos DB explorer:** Azure Cosmos DB is a NoSQL database service; it's not related to accessing storage analytics logs.
* **B. SQL query editor in Azure:** While Azure offers SQL-based services, storage analytics logs aren't structured as SQL databases and therefore can't be queried using a SQL editor.
* **D. File Explorer in Windows:** File Explorer provides access to local files. Azure Storage is a cloud-based service, and File Explorer cannot directly access it.

**Note:** The discussion indicates some disagreement on the answer, with one user suggesting Azure Storage Explorer as an alternative. While Azure Storage Explorer is a valid tool for accessing Azure storage data, AzCopy is generally considered the more appropriate command-line tool for programmatic retrieval and potentially larger datasets, as highlighted by the repeated appearance of this question with the answer consistently being AzCopy.
49
[View Question](https://www.examtopics.com/discussions/databricks/view/112331-exam-az-500-topic-4-question-110-discussion/) You have an Azure subscription that contains a Microsoft Defender External Attack Surface Management (Defender EASM) resource named EASM1. EASM1 contains the inventory assets shown in the following table.

| Asset Name | Asset Type | Asset State |
|---|---|---|
| VM1 | Virtual Machine | Approved Inventory |
| VM2 | Virtual Machine | Candidate |

Which assets are scanned daily, and which assets will display in the default dashboard charts? To answer, select the appropriate options in the answer area.
Only VM1 is scanned daily and displays in the default dashboard charts.

**Explanation:** Based on the provided text, "Approved Inventory" assets are scanned daily and are always represented in the default dashboard charts. VM1 is the only asset with the "Approved Inventory" status. VM2, having a "Candidate" status, is only scanned during the discovery process and does not appear in the default dashboard charts.

**Why other options are incorrect:** There is no information provided to suggest any asset other than those with "Approved Inventory" status are scanned daily or displayed in default dashboard charts. The discussion highlights some ambiguity around whether "Candidate" assets are scanned daily (beyond initial discovery); however, the consensus leans towards only "Approved Inventory" assets fitting both criteria. Therefore, any option including VM2 as daily scanned or displayed on the default dashboard is incorrect based on the provided information.

**Note:** There is some disagreement in the discussion regarding whether non-"Approved Inventory" assets are scanned daily. The answer provided here reflects the most likely interpretation based on the provided text and the suggested answer.
50
[View Question](https://www.examtopics.com/discussions/databricks/view/112332-exam-az-500-topic-4-question-111-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You have an Amazon Web Services (AWS) account named AWS1 that is connected to Defender for Cloud. You need to ensure that AWS1 uses AWS Foundational Security Best Practices. The solution must minimize administrative effort. What should you do in Defender for Cloud?

A. Assign a built-in compliance standard.
B. Create a new custom standard.
C. Assign a built-in assessment.
D. Create a new custom assessment.
The correct answer is **A. Assign a built-in compliance standard.** Microsoft Defender for Cloud offers built-in compliance standards, including one for AWS Foundational Security Best Practices. Assigning this pre-built standard directly applies the necessary checks and minimizes the administrative overhead of creating a custom solution. This aligns with the requirement to minimize administrative effort.

Why other options are incorrect:

* **B. Create a new custom standard:** This would require significant manual effort to define all the necessary checks for AWS Foundational Security Best Practices, contradicting the requirement to minimize administrative effort.
* **C. Assign a built-in assessment:** Assessments focus on specific aspects of security, not a holistic compliance standard like AWS Foundational Security Best Practices. While assessments are useful, they don't directly address the requirement to ensure the AWS account uses the entire standard.
* **D. Create a new custom assessment:** Similar to option B, creating a custom assessment requires significant manual work to define checks, directly conflicting with the requirement for minimal administrative effort.

Note: The discussion shows unanimous agreement on answer A.
51
[View Question](https://www.examtopics.com/discussions/databricks/view/112334-exam-az-500-topic-4-question-112-discussion/) You plan to deploy a custom policy initiative for Microsoft Defender for Cloud. You need to identify all the resource groups that have a Delete lock. How should you complete the policy definition? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image691.png) (Image shows a policy definition with blanks to fill in for "field" and "equals" within an "existenceCondition".)
The correct answer, as indicated by the suggested answer and multiple users in the discussion, requires selecting `"field": "Microsoft.Authorization/locks/level"` and `"equals": "[parameters('expressRouteLockLevel')]"` within the `existenceCondition`.

**Explanation:** This policy definition uses an `existenceCondition` to check for the presence of resources of type `Microsoft.Authorization/locks`. The `field` specifies the attribute to check within those resources (the lock level), and `equals` compares that attribute to a parameter value (`expressRouteLockLevel`), presumably defining the specific lock level to search for (e.g., "CanNotDelete" for a Delete lock). The policy will then audit if a lock of the specified level doesn't exist, which effectively identifies resource groups with (or without) the specified delete lock. The parameter `expressRouteLockLevel` would need to be appropriately defined elsewhere in the policy definition to specify the Delete lock level.

The discussion shows a consensus that the suggested answer is correct. There are no conflicting opinions presented. The provided links and comments support the correctness of the answer.

**Why other options are incorrect:** The question doesn't provide alternative options to evaluate. The provided information only focuses on the correct selections within the `existenceCondition`. Any other options would likely fail to correctly target and identify resource groups with delete locks.
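A hedged sketch of how the completed rule might fit together, written as a Python dict so each piece is visible. The surrounding `if`/`then` scaffolding and the parameter's eventual value (e.g. `CanNotDelete` for a Delete lock) are assumptions; only the two `existenceCondition` entries come from the suggested answer:

```python
import json

# Assumed scaffolding for an auditIfNotExists rule targeting resource groups.
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Resources/subscriptions/resourceGroups",
    },
    "then": {
        "effect": "auditIfNotExists",
        "details": {
            "type": "Microsoft.Authorization/locks",
            "existenceCondition": {
                # The two selections from the answer area:
                "field": "Microsoft.Authorization/locks/level",
                "equals": "[parameters('expressRouteLockLevel')]",
            },
        },
    },
}

print(json.dumps(policy_rule, indent=2))
```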
52
[View Question](https://www.examtopics.com/discussions/databricks/view/112335-exam-az-500-topic-4-question-114-discussion/) You have an Azure subscription that contains a Microsoft Defender External Attack Surface Management (Defender EASM) resource named EASM1. You review the Attack Surface Summary dashboard. You need to identify the following insights:

• Deprecated technologies that are no longer supported
• Infrastructure that will soon expire

Which section of the dashboard should you review?

A. Securing the Cloud
B. Sensitive Services
C. Attack Surface Priorities
D. Attack surface composition
C. Attack Surface Priorities

Explanation: The discussion overwhelmingly supports option C as the correct answer. The "Attack Surface Priorities" section of the Defender EASM dashboard provides insights into deprecated technologies and infrastructure nearing expiration, aligning with the question's requirements. Multiple users cite the official Microsoft documentation (https://learn.microsoft.com/en-us/azure/external-attack-surface-management/understanding-dashboards) to support this answer.

Why other options are incorrect:

* **A. Securing the Cloud:** This section likely covers broader security aspects, not specifically focusing on deprecated technologies or expiring infrastructure.
* **B. Sensitive Services:** This section focuses on the security of sensitive data and services, not directly related to the question's needs.
* **D. Attack surface composition:** While related to the overall attack surface, this section likely provides a more general overview than the specific insights required by the question.

Note: There is no significant disagreement among the users in the discussion regarding the correct answer. The consensus strongly points to option C.
53
[View Question](https://www.examtopics.com/discussions/databricks/view/112357-exam-az-500-topic-6-question-13-discussion/) You have an Azure subscription that contains the subnets shown in the following table. ![Image](https://img.examtopics.com/az-500/image696.png) The subscription contains an Azure web app named WebApp1 that has the following configurations:

• Region: West US
• Virtual network: VNet1
• VNet integration: Enabled
• Outbound subnet: Subnet11
• Windows plan (West US): ASP1

You plan to deploy an Azure web app named WebApp2 that will have the following settings:

• Region: West US
• VNet integration: Enabled
• Windows plan (West US): ASP1

To which subnets can you integrate WebApp2?

A. Subnet11 only
B. Subnet12 only
C. Subnet11 or Subnet12 only
D. Subnet12 or Subnet21 only
E. Subnet11, Subnet12, or Subnet21
D. Subnet12 or Subnet21 only

WebApp2 can only be integrated with Subnet12 or Subnet21. This is because VNet integration in Azure App Service requires that the subnet is empty and not already delegated. Since WebApp1 is already using Subnet11, that subnet is no longer available for integration with WebApp2. Subnet12 and Subnet21 are in different virtual networks (VNet1 and VNet2, respectively), and both remain valid options. The user comments confirm this through practical testing.

**Why other options are incorrect:**

* **A. Subnet11 only:** Incorrect because Subnet11 is already in use by WebApp1; VNet integration prevents using a subnet that has already been delegated.
* **B. Subnet12 only:** Incorrect because Subnet21 is also a valid option, as long as it's empty and not delegated.
* **C. Subnet11 or Subnet12 only:** Incorrect because Subnet11 is unavailable due to prior use by WebApp1.
* **E. Subnet11, Subnet12, or Subnet21:** Incorrect because Subnet11 is unavailable for the reasons stated above.

**Note:** There is some disagreement in the discussion comments regarding the specific reason why Subnet21 is an option. While the suggested answer and many comments support it, one comment disagrees. However, the prevailing evidence and practical testing support the selection of option D.
54
[View Question](https://www.examtopics.com/discussions/databricks/view/112358-exam-az-500-topic-6-question-14-discussion/) You have an Azure subscription. You need to deploy an Azure virtual WAN to meet the following requirements:

• Create three secured virtual hubs located in the East US, West US, and North Europe Azure regions.
• Ensure that security rules sync between the regions.

What should you use?

A. Azure Virtual Network Manager
B. Azure Front Door
C. Azure Network Function Manager
D. Azure Firewall Manager
** D. Azure Firewall Manager **Explanation:** The question requires creating secured virtual hubs across multiple regions with synchronized security rules. Azure Firewall Manager is designed for this purpose. It allows you to centrally manage and apply security policies to multiple Azure firewalls located in different regions, ensuring consistent security across your virtual WAN. The discussion highlights this functionality and links to relevant Microsoft documentation supporting this answer. **Why other options are incorrect:** * **A. Azure Virtual Network Manager:** While Azure Virtual Network Manager helps manage virtual networks, it's not primarily focused on security policy synchronization across multiple regions. The discussion explicitly states that it is not designed for security. * **B. Azure Front Door:** Azure Front Door is a content delivery network (CDN) service. It's not relevant to managing security policies for virtual hubs. * **C. Azure Network Function Manager:** Azure Network Function Manager is for managing network functions, not directly for securing virtual hubs and synchronizing security policies across regions. **Note:** The discussion shows some initial disagreement on the correct answer, with some users initially suggesting Azure Virtual Network Manager. However, the majority consensus and the provided supporting documentation from Microsoft point towards Azure Firewall Manager as the correct solution.
55
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112359-exam-az-500-topic-6-question-12-discussion/) You have an on-premises datacenter. You have an Azure subscription that contains a virtual machine named VM1. VM1 is connected to a virtual network named VNet1. VNet1 is connected to the on-premises datacenter by using a Site-to-Site (S2S) VPN. You plan to create an Azure storage account named storage1 and deploy an Azure web app named App1. You need to ensure that network communication to each resource meets the following requirements: • Connections to App1 must be allowed only from corporate network NAT addresses. • Connections from VNet1 to storage1 must use the Microsoft backbone network. • The solution must minimize costs. What should you configure for each resource? To answer, drag the appropriate components to the correct resources. Each component may be used once, more than once, or not at all. (Image depicting drag and drop question with options: Service Endpoint, Private Endpoint, Private Link, Access Restriction Rules) **
** * **App1 (Azure Web App): Access Restriction Rules** This allows you to specify allowed IP address ranges, restricting access to only the corporate network's NAT addresses. * **Storage1 (Azure Storage Account): Service Endpoint** This enables private communication between VNet1 and storage1 using the Azure backbone network, fulfilling the requirement for minimizing costs and utilizing the Microsoft backbone. **Why other options are incorrect:** * **Private Endpoint/Private Link:** While they provide private connectivity, they are more expensive than Service Endpoints and Access Restriction Rules, violating the cost minimization requirement. The discussion explicitly points out that Private Endpoint and Private Link are more costly. **Note:** The discussion shows a consensus on the correct answer, with multiple users suggesting Service Endpoint for Storage1 and Access Restriction Rules for App1 to minimize costs.
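As a rough sketch of how the two selected components might be configured with the Azure CLI (the resource group `rg1`, subnet name, and IP range are hypothetical):

```shell
# Allow only the corporate NAT range (example range) to reach App1.
az webapp config access-restriction add \
  --resource-group rg1 --name App1 \
  --rule-name corpNat --action Allow \
  --ip-address 203.0.113.0/24 --priority 100

# Enable a Microsoft.Storage service endpoint on the VNet1 subnet.
az network vnet subnet update \
  --resource-group rg1 --vnet-name VNet1 --name default \
  --service-endpoints Microsoft.Storage

# Restrict storage1 to traffic arriving from that subnet.
az storage account network-rule add \
  --resource-group rg1 --account-name storage1 \
  --vnet-name VNet1 --subnet default
```

With the service endpoint in place, traffic from VNet1 to storage1 stays on the Microsoft backbone at no extra charge, which is what makes this cheaper than a private endpoint.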
56
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112360-exam-az-500-topic-6-question-18-discussion/) You have an Azure subscription. You plan to implement Azure DDoS Protection. The solution must meet the following requirements: • Provide access to DDoS rapid response support during active attacks. • Protect Basic SKU public IP addresses. You need to recommend which type of DDoS Protection to use for each requirement. What should you recommend? To answer, drag the appropriate DDoS Protection types to the correct requirements. Each DDoS Protection type may be used once, more than once, or not at all. **
** The correct answer is to use **DDoS Network Protection** for both requirements. * **Requirement 1: Provide access to DDoS rapid response support during active attacks.** The Microsoft feature comparison lists DDoS Rapid Response (DRR) engagement during an active attack as a capability of DDoS Network Protection, not of DDoS IP Protection. The discussion contains some disagreement on this point, but the consensus, consistent with the documentation, favors DDoS Network Protection. * **Requirement 2: Protect Basic SKU public IP addresses.** DDoS Network Protection is also the correct choice here: the feature comparison lists Basic SKU public IP protection under Network Protection only, while DDoS IP Protection supports Standard SKU public IPs. This point is consistent across the discussion. **Why other options are incorrect (or debated):** DDoS IP Protection is a per-IP offering that includes neither DDoS Rapid Response support nor Basic SKU public IP coverage, so it satisfies neither requirement. Despite some dissenting comments, both the documentation and the majority of the discussion support DDoS Network Protection for both scenarios.
57
[View Question](https://www.examtopics.com/discussions/databricks/view/112366-exam-az-500-topic-7-question-8-discussion/) You have an Azure subscription that contains an Azure Blob storage account named blob1. You need to configure attribute-based access control (ABAC) for blob1. Which attributes can you use in access conditions? A. blob index tags only B. blob index tags and container names only C. file extensions and container names only D. blob index tags, file extensions, and container names
The correct answer is **B. blob index tags and container names only**. Explanation: Based on the provided text and links to Microsoft documentation, Azure Blob Storage's ABAC allows using blob index tags and container names as attributes in access conditions. While other attributes like file extensions, account name, blob path, etc., might exist within the Azure Blob Storage environment, they are not explicitly supported as configurable attributes *within* ABAC access conditions according to the discussion. Why other options are incorrect: * **A. blob index tags only:** This is incorrect because container names are also supported attributes for ABAC in Azure Blob Storage. * **C. file extensions and container names only:** This is incorrect because while container names are supported, file extensions are not explicitly listed as usable attributes within ABAC conditions in the given context. * **D. blob index tags, file extensions, and container names:** This is incorrect because file extensions are not supported as attributes within ABAC conditions for Azure Blob Storage, according to the provided information. Note: The discussion shows a strong consensus on option B being the correct answer, referencing Microsoft documentation.
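For illustration, an ABAC role-assignment condition that combines both supported attribute types might look like the following (the container name, tag key, and tag value are hypothetical; the syntax follows Microsoft's documented condition format):

```
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'container1'
  AND
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
 )
)
```

A condition like this restricts blob reads to a specific container and blob index tag value, which is exactly the pair of attributes option B names.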
58
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112368-exam-az-500-topic-7-question-9-discussion/) You have an Azure subscription that contains a storage account and an Azure web app named App1. App1 connects to an Azure Cosmos DB database named Cosmos1 that uses a private endpoint named Endpoint1. Endpoint1 has the default settings. You need to validate the name resolution to Cosmos1. Which DNS zone should you use? A. endpoint1.privatelink.documents.azure.com B. endpoint1.privatelink.blob.core.windows.net C. endpoint1.privatelink.azurewebsites.net D. endpoint1.privatelink.database.azure.com **
** A. `endpoint1.privatelink.documents.azure.com` The correct DNS zone to use for validating name resolution to Cosmos1 is `endpoint1.privatelink.documents.azure.com`. This is because Azure Cosmos DB uses the `privatelink.documents.azure.com` zone by default for private endpoints. The `endpoint1` prefix is added because it's the name given to the specific private endpoint. **Why other options are incorrect:** * **B. `endpoint1.privatelink.blob.core.windows.net`:** This DNS zone is associated with Azure Blob Storage, not Azure Cosmos DB. * **C. `endpoint1.privatelink.azurewebsites.net`:** This DNS zone is associated with Azure Web Apps, not Azure Cosmos DB. * **D. `endpoint1.privatelink.database.azure.com`:** While this might seem plausible, it's not the default or standard zone used by Azure Cosmos DB private endpoints. Azure Cosmos DB uses a specific zone (`documents.azure.com`) to handle its private endpoint DNS resolution. **Note:** The discussion shows overwhelming agreement on the correct answer.
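The mapping between service type and default private DNS zone can be sketched as a small lookup (the helper and its key names are illustrative; the zone names are the ones appearing in the answer options):

```python
# Default private DNS zones used by private endpoints for a few common
# Azure services (subset; zone names as documented by Microsoft).
PRIVATE_DNS_ZONES = {
    "cosmosdb_sql": "privatelink.documents.azure.com",
    "blob_storage": "privatelink.blob.core.windows.net",
    "web_app": "privatelink.azurewebsites.net",
}

def private_endpoint_fqdn(endpoint_name: str, service: str) -> str:
    """Build the zone-qualified name for a private endpoint record."""
    return f"{endpoint_name}.{PRIVATE_DNS_ZONES[service]}"

print(private_endpoint_fqdn("endpoint1", "cosmosdb_sql"))
# endpoint1.privatelink.documents.azure.com
```

The key point is that each service family has its own `privatelink.*` zone, which is why the blob and web app zones in options B and C cannot resolve a Cosmos DB endpoint.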
59
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112413-exam-az-500-topic-2-question-103-discussion/) You have an Azure AD tenant that contains three users named User1, User2, and User3. You configure Azure AD Password Protection as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image681.png) The users perform the following tasks: • User1 attempts to reset her password to C0nt0s0. • User2 attempts to reset her password to F@brikamHQ. • User3 attempts to reset her password to Pr0duct123. Which password reset attempts fail? A. User1 only B. User2 only C. User3 only D. User1 and User 3 only E. User1, User2, and User3 **
** E. User1, User2, and User3 All three password reset attempts will fail because they violate the configured Azure AD Password Protection rules. The exhibit shows that "contoso" and "fabrikam" are in the custom banned password list, and banned-password evaluation normalizes common character substitutions (for example, 0 → o and @ → a) before matching, so "C0nt0s0" and "F@brikamHQ" are both rejected. The evaluation of User3's password ("Pr0duct123") is less clear; a likely explanation is that normalization turns it into "product123", matching another entry on the banned list. The suggested answer and lab verification from Alexbz support the conclusion that all three attempts fail. The discussion highlights some ambiguity around the specifics of Microsoft's built-in banned word list, but the overall consensus is that all three passwords are rejected. **Why other options are incorrect:** Options A, B, C, and D are incorrect because they identify only a subset of the failed attempts. The consensus in the discussion, supported by lab testing, indicates that all three passwords are rejected due to violations of the configured password protection rules.
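A minimal sketch of the normalization step, assuming the documented substitution behavior (function names are illustrative, and the real service also applies fuzzy matching and point scoring, which this omits):

```python
# Simplified model of Azure AD Password Protection banned-password
# normalization: lowercase the password, then apply common character
# substitutions before comparing against the banned list.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "$": "s", "@": "a"})

def normalize(password: str) -> str:
    return password.lower().translate(SUBSTITUTIONS)

def contains_banned(password: str, banned: set) -> bool:
    norm = normalize(password)
    return any(word in norm for word in banned)

banned = {"contoso", "fabrikam"}  # custom list from the exhibit
print(contains_banned("C0nt0s0", banned))     # True
print(contains_banned("F@brikamHQ", banned))  # True
```

This shows why leetspeak variants such as "C0nt0s0" do not evade the banned list: matching happens on the normalized form, not the literal input.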
60
[View Question](https://www.examtopics.com/discussions/databricks/view/112414-exam-az-500-topic-2-question-104-discussion/) You have an Azure subscription that uses Azure AD Privileged Identity Management (PIM). A user named User1 is eligible for the Billing administrator role. You need to ensure that the role can only be used for a maximum of two hours. What should you do? A. Create a new access review. B. Edit the role assignment settings. C. Update the end date of the user assignment. D. Edit the role activation settings.
D. Edit the role activation settings. To limit the usage of a privileged role in Azure AD PIM to a maximum of two hours, you need to adjust the role activation settings. These settings control the duration for which a role can be activated. Options A, B, and C are not directly related to controlling the *duration* of a single activation. Access reviews (A) are for periodic checks of role assignments, editing role assignment settings (B) primarily involves who gets the role, and updating the end date (C) sets a final expiration point, not a time limit on a single activation. Note: There is some disagreement in the discussion regarding whether "Edit the role assignment settings" (option B) might indirectly achieve the same result, as the activation settings might be found within the assignment settings. However, based on the provided text, the most direct and correct answer is D, as it explicitly points to the functionality responsible for controlling the activation duration.
61
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112519-exam-az-500-topic-2-question-107-discussion/) You have an Azure subscription linked to an Azure AD tenant named contoso.com. Contoso.com contains a user named User1 and an Azure web app named App1. You plan to enable User1 to perform the following tasks: • Configure contoso.com to use Microsoft Entra Verified ID. • Register App1 in contoso.com. You need to identify which roles to assign to User1. The solution must use the principle of least privilege. Which two roles should you identify? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Authentication Policy Administrator B. Authentication Administrator C. Cloud App Security Administrator D. Application Administrator E. User Administrator **
** A and D: Authentication Policy Administrator and Application Administrator. To configure contoso.com to use Microsoft Entra Verified ID, User1 requires either Global Administrator or **Authentication Policy Administrator** permissions. To register App1 in contoso.com, User1 needs **Application Administrator** permissions. These two roles provide the least privilege necessary to accomplish both tasks. **Why other options are incorrect:** * **B. Authentication Administrator:** While related to authentication, this role likely offers broader permissions than necessary. The Authentication Policy Administrator role is more specific and aligned with the principle of least privilege. * **C. Cloud App Security Administrator:** This role manages security policies for cloud applications, which isn't directly related to configuring Microsoft Entra Verified ID or registering an app. * **E. User Administrator:** This role manages user accounts, but not the configuration of Microsoft Entra Verified ID or app registration. **Note:** There is a disagreement among users regarding the correct answer. One user suggests "Cloud App Administrator" and "Application Administrator", while another correctly identifies "Authentication Policy Administrator" and "Application Administrator" based on Microsoft documentation. The provided answer aligns with the Microsoft documentation cited in the discussion and adheres to the principle of least privilege.
62
[View Question](https://www.examtopics.com/discussions/databricks/view/112520-exam-az-500-topic-2-question-108-discussion/) You have an Azure AD tenant. You plan to implement an authentication solution to meet the following requirements: • Require number matching. • Display the geographical location when signing in. Which authentication method should you include in the solution? A. Microsoft Authenticator B. FIDO2 security key C. SMS D. Temporary Access Pass
A. Microsoft Authenticator Microsoft Authenticator is the only option that fulfills both requirements. It supports number matching (the user types into the app the number displayed on the sign-in screen), and it can show the geographical location of the sign-in attempt based on the IP address. Why other options are incorrect: * **B. FIDO2 security key:** FIDO2 keys rely on cryptographic authentication and do not provide number matching or a geographical location display. * **C. SMS:** SMS-based authentication does not offer number matching, and the originating phone number provides at best a coarse location, not a location display at sign-in. * **D. Temporary Access Pass:** A Temporary Access Pass is a time-limited passcode used for onboarding or account recovery; it provides neither number matching nor geographical location information. Note: The provided discussion shows unanimous agreement on the correct answer.
63
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112535-exam-az-500-topic-6-question-11-discussion/) You have an Azure subscription that contains the resources shown in the following table. | Resource Name | Resource Type | |---|---| | LB1 | Load Balancer | | SQL1 | SQL Database | | VMSS1 | Virtual Machine Scale Set | | VM1 | Virtual Machine | You plan to deploy an Azure Private Link service named APL1. Which resource should you reference during the creation of APL1? A. LB1 B. SQL1 C. VMSS1 D. VM1 **
** A. LB1 The correct answer is A, LB1 (Load Balancer). Azure Private Link services are often deployed behind a load balancer to provide high availability and scalability. The Private Link service uses the load balancer's IP address to expose its functionality privately within a virtual network. The other options are incorrect because they don't provide the necessary network infrastructure for a Private Link service to be accessible privately. **Why other options are incorrect:** * **B. SQL1 (SQL Database):** While a SQL Database can be made accessible via Private Link, the question asks which resource to *reference* during *creation*. The Private Link service itself is a separate resource; the SQL Database would be a *target* of the Private Link, not the resource referenced during its creation. * **C. VMSS1 (Virtual Machine Scale Set):** Similar to SQL1, a VMSS can be accessed via Private Link, but it's not the resource used during Private Link service creation. * **D. VM1 (Virtual Machine):** A single VM is not suitable for hosting a Private Link service, especially when considering high availability and scalability requirements. **Note:** The discussion shows a consensus that the answer is A, LB1, with multiple exam takers confirming this choice.
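As a sketch, creating the Private Link service references the frontend IP configuration of a Standard internal load balancer, which is the resource the question asks about (all names below are hypothetical):

```shell
# APL1 is fronted by LB1; the service is exposed through the
# load balancer's frontend IP configuration.
az network private-link-service create \
  --resource-group rg1 --name APL1 \
  --vnet-name VNet1 --subnet plsSubnet \
  --lb-name LB1 \
  --lb-frontend-ip-configs frontendConfig1 \
  --location eastus
```

The `--lb-frontend-ip-configs` reference is why the load balancer, not the backend VM, VMSS, or database, is the resource supplied at creation time.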
64
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112601-exam-az-500-topic-6-question-16-discussion/) You have an Azure subscription that contains an Azure web app named App1 and a virtual machine named VM1. VM1 runs Microsoft SQL Server and is connected to a virtual network named VNet1. App1, VM1, and VNet1 are in the US Central Azure region. You need to ensure that App1 can connect to VM1. The solution must minimize costs. What should you include in the solution? A. regional virtual network integration B. gateway-required virtual network integration C. Azure Front Door D. Azure Application Gateway integration E. NAT gateway integration **
** A. regional virtual network integration **Explanation:** The question asks for the most cost-effective solution to allow App1 (an Azure Web App) to connect to VM1 (a virtual machine running SQL Server within VNet1). Regional virtual network integration allows the web app to directly access resources within the same virtual network, minimizing costs by avoiding the need for more expensive solutions like gateways or Application Gateways. The discussion confirms this as the correct and most cost-effective approach, citing the removal of the need for an App Service Environment (ASE). **Why other options are incorrect:** * **B. gateway-required virtual network integration:** This would involve more complex networking configurations and increased costs compared to regional integration. * **C. Azure Front Door:** This is a content delivery network (CDN) and is not relevant to connecting a web app to a virtual machine within the same region. * **D. Azure Application Gateway integration:** This is a load balancer and is not needed for simple point-to-point connectivity within the same VNet. It would add unnecessary complexity and cost. * **E. NAT gateway integration:** A NAT gateway is typically used for outbound internet connectivity from a virtual network, not for internal communication within a VNet. **Note:** While the discussion overwhelmingly supports option A, OrangeSG notes that virtual network integration only facilitates outbound calls *from* the app *into* the VNet. However, the overall context of the question and the highly voted responses imply that in this scenario, outbound access from the web app to the SQL Server VM within the same VNet is sufficient to meet the requirement. The suggested answer and several comments support this interpretation.
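For reference, enabling regional VNet integration is a single operation against the app, with no gateway or extra networking resources to pay for (resource names are hypothetical):

```shell
# Integrate App1 with a subnet in VNet1; outbound calls from App1
# can then reach VM1's private IP inside the VNet.
az webapp vnet-integration add \
  --resource-group rg1 --name App1 \
  --vnet VNet1 --subnet integrationSubnet
```

Because the app and VNet are in the same region, no VPN gateway is required, which is what keeps this option the cheapest.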
65
**** [View Question](https://www.examtopics.com/discussions/databricks/view/112602-exam-az-500-topic-6-question-17-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. | Virtual Network Name | Region | Address Space | |---|---|---| | VNet1 | East US | 10.0.0.0/16 | | VNet2 | West US | 10.1.0.0/16 | The subscription contains the subnets shown in the following table. | Subnet Name | VNet Name | Address Space | |---|---|---| | Subnet11 | VNet1 | 10.0.1.0/24 | | Subnet12 | VNet1 | 10.0.2.0/24 | | Subnet21 | VNet2 | 10.1.1.0/24 | | Subnet22 | VNet2 | 10.1.2.0/24 | You plan to create an Azure web app named WebApp2 that will have the following configurations: • Region: East US • VNet integration: Enabled • Scale out: Autoscale to up to 10 instances For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. | Statement | Yes/No | |---|---| | You can integrate WebApp2 with Subnet11. | | | You can integrate WebApp2 with Subnet12. | | | You can integrate WebApp2 with Subnet21. | | **
** N, Y, N 1. **Subnet11:** No. According to the discussion, Subnet11 is already in use (it contains a VM), and VNet integration requires an unused subnet. Alexbz correctly points out the Microsoft documentation's requirement for an unused subnet that is an IPv4 /28 block or larger. 2. **Subnet12:** Yes. Subnet12 is in the same region (East US) as WebApp2 and appears to be unused, fulfilling the requirements for VNet integration; its /24 address block is comfortably larger than the /28 minimum. 3. **Subnet21:** No. Subnet21 is in a different region (West US) than WebApp2. Although the discussion mentions the possibility of using different regions, the best practice and likely exam expectation is to use the same region for optimal performance and reduced latency. **Explanation of disagreement:** There is a minor point of disagreement regarding region selection. While Alexbz and the suggested answer focus on the same-region requirement, Hot_156 mentions Copilot suggesting that different regions are possible, though not recommended. The provided answer prioritizes the best practice (same region) and aligns with the likely exam intent.
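The subnet-size requirement can be checked with the standard library, assuming Azure's documented rule that five addresses are reserved in every subnet and that regional VNet integration needs at least a /28 (a /26 is recommended when scaling out):

```python
import ipaddress

def usable_addresses(cidr: str) -> int:
    # Azure reserves 5 addresses in every subnet (network, broadcast,
    # and 3 for Azure services).
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_addresses("10.0.2.0/28"))  # 11  -> minimum supported size
print(usable_addresses("10.0.2.0/26"))  # 59  -> recommended for autoscale
print(usable_addresses("10.0.2.0/24"))  # 251 -> Subnet12, comfortably large
```

With autoscale up to 10 instances, Subnet12's /24 leaves ample headroom, since each scaled-out instance consumes an address from the integration subnet.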
66
[View Question](https://www.examtopics.com/discussions/databricks/view/112691-exam-az-500-topic-2-question-106-discussion/) You have an Azure subscription that contains a user named User1 and a storage account that hosts a blob container named blob1. You need to grant User1 access to blob1. The solution must ensure that the access expires after six days. What should you use? A. a shared access signature (SAS) B. role-based access control (RBAC) C. a shared access policy D. a managed identity
A. a shared access signature (SAS) A Shared Access Signature (SAS) is the correct answer because it allows granting temporary access to Azure Storage resources, including blob containers, for a specified time. You can configure the SAS to expire after six days, fulfilling the requirement. SAS tokens provide fine-grained control over access permissions (read, write, delete) and are ideal for scenarios requiring short-lived, time-limited access. Why other options are incorrect: * **B. role-based access control (RBAC):** RBAC grants permissions based on roles, not time-limited access. While you could assign a role with appropriate permissions, it wouldn't inherently expire after six days. * **C. a shared access policy:** Shared access policies are used to define permissions and are often used *with* SAS tokens. They don't directly provide the time-limited access feature. * **D. a managed identity:** Managed identities are used for services to access other Azure resources. This is not relevant to granting a specific user (User1) temporary access to a blob container. Note: The discussion shows unanimous agreement on the correct answer.
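The six-day limit lives in the SAS token itself: a SAS URL carries an `se` (signed expiry) field in ISO-8601 UTC, and the service rejects the token after that instant. A minimal sketch of building those fields with the standard library (the dictionary is illustrative; a real SAS also includes a signature computed with the account key):

```python
from datetime import datetime, timedelta, timezone

start = datetime.now(timezone.utc)
expiry = start + timedelta(days=6)  # access expires after six days

sas_fields = {
    "sp": "r",                                    # signed permissions (read-only)
    "st": start.strftime("%Y-%m-%dT%H:%M:%SZ"),   # signed start
    "se": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),  # signed expiry
}
print(sas_fields["se"])
```

RBAC has no equivalent built-in expiry on the token itself, which is the core reason a SAS fits this requirement.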
67
**** [View Question](https://www.examtopics.com/discussions/databricks/view/11303-exam-az-500-topic-2-question-12-discussion/) Your company plans to create separate subscriptions for each department. Each subscription will be associated to the same Azure Active Directory (Azure AD) tenant. You need to configure each subscription to have the same role assignments. What should you use? A. Azure Security Center B. Azure Policy C. Azure AD Privileged Identity Management (PIM) D. Azure Blueprints **
** D. Azure Blueprints Azure Blueprints are the correct answer because they allow you to define and deploy a consistent set of resources across multiple subscriptions. This includes role assignments, which is the specific requirement in the question. A blueprint can be designed to define the desired state of resources, including role assignments, and then deployed to different subscriptions. This ensures consistency in configurations and access controls across those subscriptions, addressing the requirement of having the same role assignments in each department's subscription. **Why other options are incorrect:** * **A. Azure Security Center:** Focuses on security monitoring and threat protection, not on deploying consistent resource configurations or role assignments across subscriptions. * **B. Azure Policy:** Can manage resource compliance and configurations, but it's not primarily designed for deploying pre-defined sets of resources like blueprints do. While policies can *influence* role assignments, they don't directly deploy them consistently like blueprints. * **C. Azure AD Privileged Identity Management (PIM):** Manages privileged access within Azure AD, it doesn't directly manage resource deployments or role assignments in Azure subscriptions. **Note:** The discussion highlights a conflict with a similar question (Question 13) with a potentially different answer. This suggests some ambiguity or potential error in the original question source. The overwhelming consensus in the provided discussion supports Azure Blueprints as the correct answer for this specific question, however.
68
**** [View Question](https://www.examtopics.com/discussions/databricks/view/113083-exam-az-500-topic-4-question-113-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive it to a storage account. What should you use to retrieve the diagnostics logs? A. the Microsoft 365 Defender portal B. SQL query editor in Azure C. Azure Monitor D. Azure Storage Explorer **
** D. Azure Storage Explorer **Explanation:** Azure Storage Explorer is the tool specifically designed to interact with and manage Azure Storage accounts, including accessing logs stored within those accounts. Since the Storage Analytics logs are archived to a storage account, Azure Storage Explorer provides the direct method for retrieving them. **Why other options are incorrect:** * **A. the Microsoft 365 Defender portal:** This portal focuses on security threats within Microsoft 365 services, not directly on Azure Storage diagnostics. * **B. SQL query editor in Azure:** While Azure uses SQL databases, Storage Analytics logs are not directly stored and queried as SQL databases. A different method of access is required. * **C. Azure Monitor:** Azure Monitor can collect and analyze logs from various Azure services. However, the question specifically states the logs are *archived to a storage account*. While Azure Monitor *could* be configured to ingest these logs, it's not the most direct or efficient method for retrieval when the logs are already directly in a storage account accessible through Storage Explorer. The discussion shows disagreement on this point, with one user claiming Azure Monitor is the correct answer based on Microsoft Copilot's response. **Note:** There is disagreement in the provided discussion regarding the correct answer. While Azure Storage Explorer is the most direct method for accessing logs already in a storage account, some argue that Azure Monitor's ability to collect and analyze logs makes it a suitable (though less direct) option.
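Classic Storage Analytics logs are written to a hidden container named `$logs` in the same account, which is exactly what Storage Explorer browses; the same blobs can also be listed from the CLI (the account name is hypothetical):

```shell
# The $logs container does not appear in a normal container listing,
# but its blobs can be enumerated by name.
az storage blob list \
  --account-name storage1 \
  --container-name '$logs' \
  --output table
```

Either way, retrieval is a direct read of blobs in the storage account, with no log-ingestion pipeline involved.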
69
**** [View Question](https://www.examtopics.com/discussions/databricks/view/113127-exam-az-500-topic-2-question-105-discussion/) You have an Azure subscription that contains a user named User1 and a storage account named storage1. The storage1 account contains the resources shown in the following table. **(Image 1: Table showing container names: blobs, tables, files)** User1 is assigned the following roles for storage1: • Storage Blob Data Reader • Storage Table Data Contributor • Storage File Data SMB Share Reader In storage1, you create a shared access signature (SAS) named SAS1 as shown in the following exhibit. **(Image 2: Image showing SAS details. Text within the image is not fully legible but indicates permissions related to file deletion.)** For each of the following statements, select Yes if the statement is true. Otherwise, select No. **(Image 3: Three statements. The exact wording of the statements is not provided in the context, but based on the discussion, they are related to User1's ability to perform actions (delete files, delete tables, access tables) using the provided SAS and/or RBAC roles.)** Show Suggested Answer Hide Answer Suggested Answer: **(Image 4: Suggested answer shows Y, N, N )** **
** The correct answer is Y, N, N. * **Statement 1 (Y):** User1 can delete files using SAS1. The discussion indicates that SAS1 grants file delete permissions; a SAS authorizes the operations listed in its permissions independently of the holder's RBAC roles. * **Statement 2 (N):** User1 cannot delete tables using SAS1, because the SAS grants only file-related permissions and no table access. * **Statement 3 (N):** User1 cannot access tables using only the SAS, for the same reason: SAS1 carries no table permissions. While User1 has table access via the Storage Table Data Contributor RBAC role, the statement specifically asks about access *using the SAS*. **Why other options are incorrect:** Any answer deviating from Y, N, N would be incorrect based on the provided information and the consensus in the discussion. The discussion shows some initial disagreement about how SAS and RBAC permissions interact but ultimately converges on this solution. Note that the exact wording of the statements is not available, making a more precise explanation of other potential incorrect options impossible.
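The SAS-versus-RBAC distinction comes down to the `sp` (signed permissions) field of the SAS query string, which lists the granted operations as single letters (r = read, w = write, d = delete, ...). A small sketch of checking it with the standard library (the helper and sample token are illustrative):

```python
from urllib.parse import parse_qs

def sas_allows(sas_query: str, permission: str) -> bool:
    """Return True if the SAS query string grants the given permission letter."""
    fields = parse_qs(sas_query)
    return permission in fields.get("sp", [""])[0]

sample = "sp=rd&se=2024-01-01T00:00:00Z&sig=abc"
print(sas_allows(sample, "d"))  # True  -> delete permitted by the SAS
print(sas_allows(sample, "w"))  # False -> write not granted
```

Whoever presents the token gets exactly the operations in `sp` on the signed resource, which is why SAS1's file-delete grant works for User1 even though no RBAC role of hers allows it.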
70
[View Question](https://www.examtopics.com/discussions/databricks/view/113204-exam-az-500-topic-4-question-109-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You have an Amazon Web Services (AWS) account. You need to ensure that when you deploy a new AWS Elastic Compute Cloud (EC2) instance, the Microsoft Defender for Servers agent installs automatically. What should you configure first? A. the classic cloud connector B. the Azure Monitor agent C. the Log Analytics agent D. the native cloud connector
D. the native cloud connector Explanation: To automatically install the Microsoft Defender for Servers agent on new AWS EC2 instances, you need to first configure the native cloud connector with Microsoft Defender for Cloud. The native connector is the recommended and newer method for integrating AWS with Defender for Cloud, offering improved onboarding, multi-account support, better performance, and more comprehensive security coverage compared to the classic connector. The classic connector is being phased out. The Azure Monitor agent and Log Analytics agent are not directly involved in deploying the Defender for Servers agent to AWS EC2 instances. Why other options are incorrect: * **A. the classic cloud connector:** This is an older, less efficient method and is being phased out in favor of the native connector. * **B. the Azure Monitor agent:** The Azure Monitor agent is for collecting monitoring data, not directly for deploying security agents to AWS. * **C. the Log Analytics agent:** Similar to Azure Monitor agent, this is for data collection and not directly related to the deployment of the security agent. Note: The discussion shows unanimous agreement on the correct answer.
71
**** [View Question](https://www.examtopics.com/discussions/databricks/view/113317-exam-az-500-topic-6-question-15-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image697.png) VNet1 connects to a remote site by using a Site-to-Site (S2S) VPN that uses forced tunneling. VNet1 contains the subnets shown in the following table. ![Image](https://img.examtopics.com/az-500/image698.png) The SQL subnet contains SQL1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image699.png) **
**

1. **Yes:** To restrict inbound traffic to SQL1, you must modify an access rule in NSG1. The Network Security Group (NSG) associated with the SQL subnet controls inbound and outbound traffic for the virtual machines within that subnet. Therefore, to restrict traffic to SQL1, you need to configure the appropriate rules within NSG1.
2. **Yes:** To enable VM1 to access storage1 over the Microsoft backbone network, you must enable a service endpoint on the Default subnet. A service endpoint provides private connectivity to Azure services without traversing the public internet. Enabling a service endpoint for Microsoft.Storage on the Default subnet keeps VM1's traffic to storage1 within the Azure backbone network, improving security and performance. The S2S VPN is irrelevant to this internal Azure communication.
3. **No:** You cannot deploy an App Service Environment (ASE) to the Default subnet. An ASE requires a dedicated subnet to isolate it from other resources; sharing a subnet with other resources can lead to conflicts and affect the performance and security of the ASE.

**Explanation of why other options are incorrect:** The provided discussion and suggested answers support the answers above. The disagreement in the discussion is minimal and concerns only minor details of the explanation, not the core answers themselves.
72
**** [View Question](https://www.examtopics.com/discussions/databricks/view/12262-exam-az-500-topic-3-question-37-discussion/) HOTSPOT - You have two Azure virtual machines in the East US 2 region as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0027600001.png) You deploy and configure an Azure Key vault. You need to ensure that you can enable Azure Disk Encryption on VM1 and VM2. What should you modify on each virtual machine? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0027700001.jpg) **
** To enable Azure Disk Encryption on VM1 and VM2, the following modifications are needed:

* **VM1:** Upgrade the tier from Basic to Standard. Azure Disk Encryption is not available on Basic-tier A-series VMs.
* **VM2:** Change the VM type to a supported size. While the provided text initially suggested that only Generation 1 VMs were supported, later comments and links confirm that Generation 2 VMs *are* supported. The L4s VM size used in VM2 is now confirmed to be supported.

**Why other options are incorrect:** The discussion reflects some initial confusion about supported VM generations and sizes. Some answers initially excluded Generation 2 VMs, but later comments corrected this and confirmed their compatibility. The operating system version of VM2 (Ubuntu 16.04) is supported, so that is not a factor requiring modification. The key is that the limitations are based on the VM tier (Basic vs. Standard) and the VM size's compatibility with Azure Disk Encryption; in this case, A-series Basic is not supported, and the initial concerns about Generation 2 were later shown to be incorrect.

**Note:** The discussion reveals some disagreement and evolving information regarding supported VM types and generations for Azure Disk Encryption. The final answer reflects the consensus reached in later posts and updated information.
73
[View Question](https://www.examtopics.com/discussions/databricks/view/12590-exam-az-500-topic-3-question-40-discussion/) You have an Azure subscription. The subscription contains 50 virtual machines that run Windows Server 2012 R2 or Windows Server 2016. You need to deploy Microsoft Antimalware to the virtual machines. Solution: You add an extension to each virtual machine. Does this meet the goal? A. Yes B. No
A. Yes

Adding an extension to each virtual machine does meet the goal of deploying Microsoft Antimalware. However, the discussion highlights that this is not the *most efficient* or *recommended* approach for managing 50 VMs; better approaches include using Azure Policy to automatically deploy the extension to new VMs and/or using ARM templates for deployment. The question only asks whether the proposed solution *meets the goal*, which it does.

Why other options are incorrect:

B. No - This is incorrect because adding the extension to each VM *does* deploy the antimalware software, fulfilling the stated requirement. The discussion points out better methods, but that doesn't negate the fact that the solution in the question works.

Note: There is disagreement in the discussion regarding the *best* solution. While adding an extension to each VM works, the discussion suggests Azure Policy or ARM templates as more scalable and efficient alternatives for managing a larger number of virtual machines.
74
[View Question](https://www.examtopics.com/discussions/databricks/view/130344-exam-az-500-topic-7-question-5-discussion/) You need to ensure that the rg1lod28681041n1 Azure Storage account is encrypted by using a key stored in the KeyVault28681041 Azure key vault. To complete this task, sign in to the Azure portal. ![Image](https://img.examtopics.com/az-500/image676.png)
The provided image shows the steps to encrypt an Azure Storage account using a key from a key vault: navigate to the storage account in the Azure portal, select "Security + networking," then "Encryption," and configure encryption with a customer-managed key from the specified key vault. However, user feedback indicates this process is outdated. Crucially, the user performing these steps needs the Key Vault Crypto Officer role to create the key (if not already granted). Additionally, a system-assigned identity of the storage account can be used, not just a user-assigned identity, and the correct permission to grant the storage identity is "Key Vault Crypto Service Encryption User" for wrap/unwrap operations.

Why other options are incorrect: No other options are provided in the original question. The answer above clarifies the shortcomings and outdated aspects of the solution displayed in the image, and the discussion highlights the necessary clarifications and updates to the original procedure.
75
[View Question](https://www.examtopics.com/discussions/databricks/view/130854-exam-az-500-topic-4-question-115-discussion/) You have an Azure subscription. You plan to deploy Microsoft Defender External Attack Surface Management (Defender EASM) to identify and monitor externally facing assets. You create a new Defender EASM instance named EASM1. What should you do next? A. Create a custom attack surface. B. Add a Log Analytics workspace. C. Add a discovery group. D. Import seeds from an organization.
The correct answer is **D. Import seeds from an organization.**

Defender EASM uses "seeds" – known assets – as starting points to recursively discover other externally facing assets. Importing seeds is the crucial first step in building a comprehensive attack surface map. While discovery groups and custom attack surfaces are relevant, they follow the initial seed import. Adding a Log Analytics workspace is necessary for data storage and analysis but is not the immediate next step after creating the EASM instance.

Why other options are incorrect:

* **A. Create a custom attack surface:** Custom attack surfaces are created *after* you've discovered assets through seed import. They allow you to focus on specific parts of the attack surface.
* **B. Add a Log Analytics workspace:** While essential for data analysis, this is not the first step. You need data (from seeds) before you can send it to a workspace.
* **C. Add a discovery group:** Discovery groups help organize the discovery process, but you need initial seeds *before* grouping them.

Note: The discussion shows some disagreement on the precise order of operations. Some users suggest creating a discovery group before importing seeds. However, the generally accepted best practice and the most logical starting point for a comprehensive scan is to import seeds first to begin the discovery process.
76
[View Question](https://www.examtopics.com/discussions/databricks/view/130857-exam-az-500-topic-4-question-116-discussion/) You have an Azure subscription that contains an Azure Key Vault Standard key vault named Vault1. Vault1 hosts a 2048-bit RSA key named key1. You need to ensure that key1 is rotated every 90 days. What should you do first? A. Create a key rotation policy. B. Modify the Access policies settings of Vault1. C. Upgrade Vault1 to Key Vault Premium. D. Recreate key1 as an EC key.
The correct answer is **A. Create a key rotation policy.**

While the discussion shows some disagreement, the most widely supported and technically accurate answer is to create a key rotation policy; this is the first step in automating key rotation. Note, however, that the feasibility of this option depends on the key vault tier. Some users pointed out that automatic key rotation policies are not available for Standard key vaults, suggesting that option C (upgrading to Premium) might be necessary for automatic rotation. The answer is therefore contingent on the assumption that automatic key rotation is supported for the existing vault tier.

Why other options are incorrect:

* **B. Modify the Access policies settings of Vault1:** Access policies control who can access the key, not the rotation schedule.
* **C. Upgrade Vault1 to Key Vault Premium:** This might be necessary if automatic key rotation isn't supported in the Standard tier, but it is not the *first* step. Creating a rotation policy comes first, even if it necessitates a subsequent upgrade to a vault tier that supports the policy.
* **D. Recreate key1 as an EC key:** This doesn't address the automation of key rotation; it simply changes the key type.

Note: The discussion reveals some disagreement about whether a key rotation policy can be applied to a Standard key vault. The correct answer hinges on the vault tier's capabilities: if the Standard tier does not support automated rotation policies, option C becomes a necessary prerequisite.
77
**** [View Question](https://www.examtopics.com/discussions/databricks/view/130859-exam-az-500-topic-4-question-118-discussion/) You have two Azure subscriptions named Sub1 and Sub2. Sub1 contains a resource group named RG1 and an Azure policy named Policy1. You need to remediate the non-compliant resources in Sub1 based on Policy1. How should you complete the PowerShell script? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. The image shows a drag-and-drop question with blanks to fill in a PowerShell script involving `Start-AzPolicyRemediation`. The blanks are for: * `-PolicyAssignmentId` * `-Scope` * `-RemediationName` **
** The correct answer, as validated by multiple users in the discussion, involves using the `Start-AzPolicyRemediation` cmdlet with the following parameters:

* `-PolicyAssignmentId`: The ID of the Policy1 assignment. This parameter specifies the policy assignment whose non-compliant resources need remediation.
* `-Scope`: `/subscriptions/<subscription-id>/resourceGroups/RG1`. This parameter defines the scope of the remediation; in this case, it's limited to the resource group RG1 within subscription Sub1. You would replace `<subscription-id>` with the actual subscription ID.
* `-RemediationName`: Any unique name for this remediation. This parameter assigns a name to the remediation process for tracking and management.

The exact values for `-PolicyAssignmentId` and the subscription ID are not provided in the question, but the correct approach requires populating these parameters based on the environment.

**Why other options are incorrect:** The question is a drag-and-drop, and no other options are explicitly provided in the given text. The correctness of the answer hinges on using the correct parameters with the `Start-AzPolicyRemediation` cmdlet; using incorrect parameters or omitting them will cause the script to fail to remediate the resources.

**Note:** The discussion shows unanimous agreement among users that the suggested answer (using `Start-AzPolicyRemediation` with the described parameters) is correct. The linked Microsoft documentation further confirms this approach.
78
**** [View Question](https://www.examtopics.com/discussions/databricks/view/130860-exam-az-500-topic-6-question-19-discussion/) You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains a single subnet. The subscription contains a virtual machine named VM1 that is connected to VNet1. You plan to deploy an Azure SQL managed instance named SQL1. You need to ensure that VM1 can access SQL1. Which three components should you create? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. a subnet B. a network security perimeter C. a virtual network gateway D. a network security group (NSG) E. a route table **
** A, D, and E (a subnet, a network security group (NSG), and a route table).

**Explanation:** To enable VM1 to access SQL1, you need these three components:

* **A. a subnet:** SQL Managed Instance requires its own dedicated subnet within the same virtual network as VM1. This subnet must be delegated to "Microsoft.Sql/managedInstances". This isolates the SQL instance and improves security.
* **D. a network security group (NSG):** An NSG controls network traffic to and from the SQL Managed Instance subnet. You'll need to configure rules within the NSG to allow inbound traffic from VM1 on the necessary ports (typically port 1433 for SQL Server). This ensures only authorized traffic can reach SQL1.
* **E. a route table:** A route table is needed to direct network traffic between VM1 and the subnet hosting SQL1. Without proper routing, VM1 won't be able to reach SQL1.

**Why other options are incorrect:**

* **B. a network security perimeter:** While related to security, a network security perimeter is a broader concept and not a specific Azure component required for this scenario. An NSG fulfills the necessary security function.
* **C. a virtual network gateway:** A virtual network gateway is used for connecting to other networks (e.g., on-premises or other Azure VNets). It is not needed for communication within the same VNet.

The discussion shows unanimous agreement on the correct answer (A, D, E).
79
**** [View Question](https://www.examtopics.com/discussions/databricks/view/130892-exam-az-500-topic-5-question-72-discussion/) You have an Azure subscription named Sub1 that contains the storage accounts shown in the following table.

| Name | Location | Encryption |
|----------|----------|-----------------------|
| storage1 | WestUS | None |
| storage2 | EastUS | None |
| storage3 | WestUS | Customer managed keys |

The storage3 storage account is encrypted by using customer-managed keys. You need to enable Microsoft Defender for Storage to meet the following requirements:

• The storage1 and storage2 accounts must be included in the Defender for Storage protections.
• The storage3 account must be excluded from the Defender for Storage protections.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area. (The list of actions image is not provided here, but the suggested answer is provided below as an image) **
** The correct sequence of actions is: first enable Microsoft Defender for Storage at the subscription level, then tag storage1 and storage2 to include them in the protection, and finally exclude storage3 from the protection. The exact actions would be presented in a drag-and-drop interface in the original exam question; the provided image of the suggested answer shows the order. The order matters because the service must be enabled at the subscription level before inclusion or exclusion policies can be applied to individual storage accounts within that subscription. Tagging allows for selective inclusion, and explicit exclusion prevents the storage account from being protected.

**Why other options are incorrect:** There is no other suggested answer. The discussion shows agreement with the suggested answer in the image (Image717.png). While several users provided links to Microsoft documentation supporting the solution, none suggested alternative approaches or contradictory steps.
80
[View Question](https://www.examtopics.com/discussions/databricks/view/130927-exam-az-500-topic-2-question-109-discussion/) Your network contains an on-premises Active Directory Domain Services (AD DS) domain that syncs with an Azure AD tenant. You plan to implement single sign-on (SSO) for Azure AD resources. You need to configure an Intranet Zone setting for all users by using a Group Policy Object (GPO). Which setting should you configure? A. Logon options B. Allow updates to status bar via script C. Allow active scripting D. Access data sources across domains
The correct answer is **B. Allow updates to status bar via script**.

The discussion links to Microsoft documentation indicating that configuring the "Allow updates to status bar via script" setting within the Intranet Zone of a Group Policy Object is required for seamless single sign-on (SSO) with Azure AD Connect. This setting allows the necessary scripts and processes to update the status bar and complete the SSO login experience.

Why other options are incorrect:

* **A. Logon options:** While logon options are relevant to authentication, they don't address the specific requirement of configuring the Intranet Zone for SSO functionality.
* **C. Allow active scripting:** Potentially relevant to some aspects of SSO, but not the specific Intranet Zone setting identified in the supporting documentation as crucial for enabling SSO.
* **D. Access data sources across domains:** This setting pertains to cross-domain data access permissions and is unrelated to the Intranet Zone configuration for SSO.

Note: The discussion strongly supports option B as the correct answer, referencing Microsoft documentation. No disagreement is expressed in the provided context.
81
[View Question](https://www.examtopics.com/discussions/databricks/view/130928-exam-az-500-topic-2-question-110-discussion/) You have an Azure AD tenant that contains the groups shown in the following table. (See image707.png) You assign licenses to the groups as shown in the following table. (See image708.png) On May 1, you delete Group1, Group2, and Group3. For each of the following statements, select Yes if the statement is true. Otherwise, select No. (See image709.png)
The provided images (image707.png, image708.png, and image709.png) are crucial to answering this question and are not included here. Based on the discussion, the answer involves determining the restorability of each group (Group1, Group2, and Group3) from its type (security group vs. O365 group) and the time elapsed since deletion. The suggested answer (image710.png) contains the correct Yes/No responses for each statement in image709.png. Group1 (a security group) cannot be restored. Group2 (an O365 group) *can* be restored, provided the soft-delete period hasn't expired. Group3's restorability depends on whether the 30-day soft-delete period has passed. The final answer depends on the specific statements in image709.png. The discussion indicates general agreement on the answer but lacks specifics, so a precise answer requires the visual information from the missing images.
82
[View Question](https://www.examtopics.com/discussions/databricks/view/130929-exam-az-500-topic-2-question-111-discussion/) You have an Azure AD tenant. You need to ensure that users cannot create passwords containing a variation of the word "contoso". What should you configure? A. Microsoft Entra Verified ID B. Microsoft Entra Identity Governance C. Azure AD Privileged Identity Management (PIM) D. Azure AD Password Protection E. Azure AD Identity Protection
The correct answer is **D. Azure AD Password Protection**.

Azure AD Password Protection allows administrators to define custom banned passwords, including variations of specific words like "contoso." This prevents users from creating passwords that are easily guessable or based on organization-specific information. The feature detects and blocks known weak passwords and their variants, and can be configured to block additional weak terms specific to your organization.

Why other options are incorrect:

* **A. Microsoft Entra Verified ID:** This service focuses on verifying user identities and issuing verifiable credentials, not on password policy enforcement.
* **B. Microsoft Entra Identity Governance:** This service handles access governance and lifecycle management, not password creation restrictions.
* **C. Azure AD Privileged Identity Management (PIM):** This service manages privileged accounts and access, not general password policies.
* **E. Azure AD Identity Protection:** This service detects and remediates identity-related threats, but doesn't directly control password creation rules.

Note: The discussion shows unanimous agreement that Azure AD Password Protection is the correct answer.
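To illustrate what blocking "variations" of a banned word means, a banned-password check conceptually normalizes a candidate password (lowercasing, undoing common character substitutions) before searching for banned terms. The sketch below is an illustration of that idea with an assumed substitution table; it is not Microsoft's actual matching algorithm.

```python
# Sketch of banned-password variant detection, loosely modeled on the
# normalization idea behind Azure AD Password Protection.
# The substitution table is an assumption for illustration only.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "l", "$": "s", "@": "a", "3": "e"})

def normalize(password: str) -> str:
    """Lowercase and undo common leetspeak-style substitutions."""
    return password.lower().translate(SUBSTITUTIONS)

def is_banned(password: str, banned_terms: list[str]) -> bool:
    """True if any banned term appears in the normalized password."""
    normalized = normalize(password)
    return any(term in normalized for term in banned_terms)

print(is_banned("C0nt0$o2024!", ["contoso"]))  # True: normalizes to "contoso2024!"
print(is_banned("Xk9#mQ2vLp", ["contoso"]))    # False: no banned term present
```

This is why a policy banning "contoso" also rejects passwords such as "C0nt0$o2024!" that a plain substring check would miss.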
83
[View Question](https://www.examtopics.com/discussions/databricks/view/130938-exam-az-500-topic-6-question-22-discussion/) You have an Azure subscription. You create an Azure Firewall policy that has the rules shown in the following table. In which order should the rules be processed? To answer, move all rules from the list of rules to the answer area and arrange them in the correct order. (Image shows a table with rules numbered 1-5 and their type: DNAT, Network, Application, Network, Application. Priorities are also listed: 100, 200, 150, 250, 120. )
The correct order is Rule 2, Rule 3, Rule 4, Rule 5, Rule 1. Azure Firewall processes rule types in the following order of precedence: DNAT (Destination NAT), then Network, then Application rules. Within each rule type, rules are processed by priority, with lower priority numbers processed first. Therefore:

1. **Rule 2 (DNAT, Priority 100):** Processed first because DNAT rules have the highest precedence.
2. **Rule 3 (Network, Priority 150):** Network rules come next; Rule 3 has the lower priority number (150) of the two Network rules, so it is processed before Rule 4.
3. **Rule 4 (Network, Priority 250):** Processed after Rule 3 because its priority number (250) is higher.
4. **Rule 5 (Application, Priority 120):** Application rules are processed last; Rule 5 has the lower priority number (120) of the two Application rules.
5. **Rule 1 (Application, Priority 200):** Processed last of all because its priority number (200) is higher than Rule 5's.

Several users in the discussion agree on this ordering logic. There is slight disagreement about the exact ordering of the two Network rules, but applying the lower-number-first rule consistently yields the order above.
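The precedence rules can be applied mechanically: rank rules by type (DNAT before Network before Application), then sort ascending by priority number within each type. A small Python sketch with the rule data from the question's table:

```python
# Azure Firewall processes DNAT rules first, then Network, then
# Application; within each type, a lower priority number runs first.
TYPE_ORDER = {"DNAT": 0, "Network": 1, "Application": 2}

# (name, rule type, priority) taken from the question's table.
rules = [
    ("Rule1", "Application", 200),
    ("Rule2", "DNAT", 100),
    ("Rule3", "Network", 150),
    ("Rule4", "Network", 250),
    ("Rule5", "Application", 120),
]

# Sort by (type precedence, priority number ascending).
processing_order = sorted(rules, key=lambda r: (TYPE_ORDER[r[1]], r[2]))
print([name for name, _, _ in processing_order])
# ['Rule2', 'Rule3', 'Rule4', 'Rule5', 'Rule1']
```

The two-level sort key mirrors how the firewall evaluates rule collections: type precedence dominates, and priority only breaks ties within a type.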
84
**** [View Question](https://www.examtopics.com/discussions/databricks/view/131107-exam-az-500-topic-6-question-20-discussion/) You are implementing an Azure Application Gateway web application firewall (WAF) named WAF1. You have the following Bicep code snippet.

```
resource waf 'Microsoft.Network/applicationGateways/webApplicationFirewallPolicies@2023-08-01' = {
  name: 'waf1'
  location: resourceGroup().location
  properties: {
    policySettings: {
      mode: 'Detection'
      maxRequestHeaderSize: 16384
    }
  }
}
```

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

| Statement | Yes/No |
|--------------------------------------------------------------------------------|--------|
| If a request is blocked by a custom rule, WAF1 takes no action. | |
| The maximum file upload size for WAF1 is 2 GB. | |
| If WAF1 is in detection mode, requests that trigger OWASP rule 3.2 are blocked | |

**
** Y, N, N

* **Statement 1: Yes.** If the WAF policy is in Detection mode, as shown in the Bicep code (`mode: 'Detection'`), any rule trigger (including custom rules) is logged but does not result in the request being blocked. This is consistent with the provided Microsoft documentation.
* **Statement 2: No.** While some documentation mentions a 2 GB limit, other sources indicate that the limit depends on the Application Gateway SKU version (V1 = 2 GB, V2 = 4 GB). The code snippet doesn't specify the SKU, making a definitive 2 GB statement incorrect.
* **Statement 3: No.** In Detection mode, the WAF logs events but doesn't block requests. Even if OWASP rule set 3.2 is triggered, the request won't be blocked; it will only be logged.

**Why other options are incorrect:** The discussion highlights disagreement regarding the file upload limit. The answer reflects this uncertainty by acknowledging that the 2 GB limit is not universally confirmed and depends on the SKU. The consensus is that in Detection mode, no blocking action takes place regardless of the triggered rule.
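The Detection-versus-Prevention behavior that drives statements 1 and 3 can be summarized as a small decision function. This is an illustrative sketch of the documented mode semantics (the function name and return labels are invented for the example), not the Azure WAF engine:

```python
# Sketch of WAF mode semantics: in Detection mode a matched rule is
# logged but the request is never blocked; in Prevention mode a
# matched rule blocks the request. Hypothetical helper for illustration.
def evaluate(mode: str, rule_matched: bool) -> str:
    if not rule_matched:
        return "allow"
    return "log-only" if mode == "Detection" else "block"

print(evaluate("Detection", rule_matched=True))   # log-only: custom or OWASP rules never block
print(evaluate("Prevention", rule_matched=True))  # block
print(evaluate("Detection", rule_matched=False))  # allow
```

Because the Bicep snippet sets `mode: 'Detection'`, every matched rule falls into the log-only branch, which is why neither a custom rule nor OWASP rule set 3.2 results in a blocked request.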
85
**** [View Question](https://www.examtopics.com/discussions/databricks/view/131109-exam-az-500-topic-6-question-21-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-500/image721.png) *(Image contains a table showing two Virtual Networks: VNET1 and VNET2)* NSG1 and NSG2 both have default rules only. The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image722.png) *(Image contains a table showing VM1 in VNET1 and VM2 in VNET2)* The subscription contains the web apps shown in the following table. ![Image](https://img.examtopics.com/az-500/image723.png) *(Image contains a table showing WebApp1 in VNET1)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image724.png) *(Image contains three statements: 1. NSG1 controls inbound traffic to WebApp1. 2. Virtual Network integration of WebApp1 allows inbound traffic from VM2. 3. Virtual Network integration of WebApp1 allows outbound traffic to peered VNets.)* **
** YNY

* **Statement 1: NSG1 controls inbound traffic to WebApp1. YES.** Since WebApp1 is integrated with VNET1, and NSG1 is associated with VNET1, NSG1 controls inbound traffic to WebApp1; inbound traffic to the web app passes through the NSG.
* **Statement 2: Virtual Network integration of WebApp1 allows inbound traffic from VM2. NO.** Virtual network integration primarily affects *outbound* traffic from the web app. It does *not* provide a mechanism for inbound private access. Inbound traffic to WebApp1 from VM2 would require additional configuration (e.g., explicitly allowing the traffic in NSG1) to succeed.
* **Statement 3: Virtual Network integration of WebApp1 allows outbound traffic to peered VNets. YES.** Virtual network integration allows WebApp1 to reach resources within the peered VNET2 (provided appropriate routing is also configured). This is outbound traffic from the web app.

**Why other options are incorrect:** The discussion shows some disagreement about the interpretation of virtual network integration, specifically whether it affects inbound traffic. The consensus (and technically correct) understanding is that virtual network integration primarily provides outbound connectivity for the web app, so options suggesting that inbound traffic is directly controlled by the integration are incorrect. The YNN answer suggested by some users stemmed from a misunderstanding of the integration's function.
86
[View Question](https://www.examtopics.com/discussions/databricks/view/131464-exam-az-500-topic-4-question-117-discussion/) You have an Azure subscription named Sub1 that has Security defaults disabled. The subscription contains the following users: • Five users that have owner permissions for Sub1. • Ten users that have owner permissions for Azure resources. None of the users have multi-factor authentication (MFA) enabled. Sub1 has the secure score as shown in the Secure Score exhibit. (Click the Secure Score tab.) ![Image](https://img.examtopics.com/az-500/image711.png) You plan to enable MFA for the following users: • Five users that have owner permission for Sub1. • Five users that have owner permissions for Azure resources. By how many points will the secure score increase after you perform the planned changes? A. 0 B. 5 C. 7.5 D. 10 E. 14
C. 7.5

The referenced image is missing, preventing a definitive calculation from the provided information alone. However, the suggested answer and user comments indicate the calculation method: the secure score increase depends on the number of users enabled for MFA and a presumed score per user, with Sub1 owners weighted more heavily than Azure resource owners. The explanation from user wingcheuk suggests that enabling MFA for five Sub1 owners adds 5 points (5 users × 1 point/user) and enabling MFA for five Azure resource owners adds 2.5 points (5 users × 0.5 points/user), for a total increase of 7.5 points. There is disagreement among users about the source and validity of these numbers, highlighting the need for the image showing the actual secure score; without it, a precise calculation is impossible.

The other options are incorrect because they don't reflect the suggested calculation method and user discussion. The missing image makes a definitive rejection of those options difficult, but based on the discussed calculation they are unlikely to be correct. The discussion shows disagreement about the underlying scoring methodology; the answer follows the most upvoted and most plausible explanation in the thread.
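Spelling out the arithmetic behind the 7.5-point figure (note the per-user weights are the discussion's assumption, not documented secure score values):

```python
# Score math from the discussion thread (assumed weights: 1 point per
# Sub1 owner enabled for MFA, 0.5 points per Azure resource owner).
sub1_owners_enabled = 5       # owners of Sub1 getting MFA
resource_owners_enabled = 5   # owners of Azure resources getting MFA

increase = sub1_owners_enabled * 1.0 + resource_owners_enabled * 0.5
print(increase)  # 7.5
```

The remaining five resource owners without MFA are what keeps the increase below the maximum the control could award.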
87
**** [View Question](https://www.examtopics.com/discussions/databricks/view/132333-exam-az-500-topic-5-question-71-discussion/) You have an Azure AD tenant that contains the users shown in the following table. ![Image](https://img.examtopics.com/az-500/image714.png) *(Note: The image is not provided, but the question references a table showing Azure AD users.)* You need to ensure that the users cannot create app passwords. The solution must ensure that User1 can continue to use the Mail and Calendar app. What should you do? A. Assign User1 the Authentication Policy Administrator role. B. Enable Azure AD Password Protection. C. Configure a multi-factor authentication (MFA) registration policy. D. Create a new app registration. E. From multi-factor authentication, configure the service settings. **
** E. From multi-factor authentication, configure the service settings.

The correct answer is E because disabling the ability for users to create app passwords is a setting within the multi-factor authentication (MFA) service settings in Azure AD. By setting this to "Do not allow users to create app passwords to sign in to non-browser apps," you prevent users from generating app passwords while still allowing User1 to access Mail and Calendar, which likely supports modern authentication methods. Options A, B, C, and D do not directly control the creation of app passwords.

**Why other options are incorrect:**

* **A. Assign User1 the Authentication Policy Administrator role:** This grants User1 administrative privileges; it does not prevent app password creation.
* **B. Enable Azure AD Password Protection:** This helps enforce strong passwords but doesn't directly prevent app password creation.
* **C. Configure a multi-factor authentication (MFA) registration policy:** MFA registration policies enforce MFA but don't inherently control app password creation.
* **D. Create a new app registration:** Creating a new app registration is irrelevant to preventing users from generating app passwords.

**Note:** The discussion shows a consensus supporting option E as the correct answer.
88
[View Question](https://www.examtopics.com/discussions/databricks/view/13672-exam-az-500-topic-5-question-27-discussion/) You have an Azure subscription that contains a virtual machine named VM1. You create an Azure key vault that has the following configurations: ✑ Name: Vault5 ✑ Region: West US ✑ Resource group: RG1 You need to use Vault5 to enable Azure Disk Encryption on VM1. The solution must support backing up VM1 by using Azure Backup. Which key vault settings should you configure? A. Access policies B. Secrets C. Keys D. Locks
A. Access policies Azure Disk Encryption requires specific permissions to access and utilize the key vault for encryption and decryption operations. Access policies define which entities (users, applications, services) have what permissions within the key vault. To enable Azure Disk Encryption and ensure Azure Backup can work correctly, appropriate access policies must be configured granting the necessary permissions to the VM and the backup service. Why other options are incorrect: * **B. Secrets:** Secrets are data stored within the key vault, not permissions to access it. While secrets *might* be used by the encryption process, the core issue here is access control, not the data itself. * **C. Keys:** Keys are the cryptographic components used for encryption. While the key vault holds the keys, you still need access policies to define who can access and use them for encryption and decryption. * **D. Locks:** Locks prevent a key vault from being accidentally deleted or modified; they do not grant the granular permissions that Azure Disk Encryption and the backup service need in order to use the vault. Note: Some discussion comments suggest that "Access configuration" might be a more precise term than "Access policies." However, within the provided context of the multiple-choice question, "Access policies" is the most appropriate and accurate answer.
89
[View Question](https://www.examtopics.com/discussions/databricks/view/13674-exam-az-500-topic-5-question-12-discussion/) You have an Azure web app named webapp1. You need to configure continuous deployment for webapp1 by using an Azure Repo. What should you create first? A. an Azure Application Insights service B. an Azure DevOps organization C. an Azure Storage account D. an Azure DevTest Labs lab
The correct answer is B. To use Azure Repos for continuous deployment, you must first have an Azure DevOps organization linked to your Azure subscription. Azure Repos is a part of Azure DevOps, and the organization provides the necessary infrastructure and services for version control, CI/CD pipelines, and other collaboration features. Without an Azure DevOps organization, you can't utilize Azure Repos. Options A, C, and D are incorrect because they are not prerequisites for setting up continuous deployment using Azure Repos. Application Insights is for monitoring, Azure Storage can be used for various purposes, but isn't directly involved in the initial setup of continuous deployment via Azure Repos, and DevTest Labs are for development and testing environments. Note: While the discussion shows some disagreement on whether this type of question belongs on a security-focused exam, the consensus among users is that an Azure DevOps organization is a necessary prerequisite for using Azure Repos in continuous deployment.
90
[View Question](https://www.examtopics.com/discussions/databricks/view/13684-exam-az-500-topic-3-question-15-discussion/) HOTSPOT - You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016. You need to implement a policy to ensure that each virtual machine has a custom antimalware virtual machine extension installed. How should you complete the policy? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0024300001.png) *(Image shows a screenshot of a policy definition with two boxes to fill in: "Effect" and "Details")*
The correct selections are: **Box 1: DeployIfNotExists** This effect ensures that a template deployment will only occur if a custom antimalware extension is *not* already present on the VM. If it's already installed, the policy won't attempt to reinstall it. **Box 2: Template** This requires a JSON template defining the deployment of the custom antimalware extension. This template will contain the specifics needed to install the extension onto the VM. The `deployment` property inside the policy's `details` section carries the full ARM template to be deployed. The provided discussion overwhelmingly supports this answer with numerous users indicating its correctness and providing citations to relevant Microsoft documentation. Why other options are incorrect: There are no other options provided in the question itself. The question is a fill-in-the-blank style hotspot question focusing on the appropriate policy effect and template usage. Other policy effects would not appropriately address the need for only deploying the extension if it's not already present.
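As a minimal sketch, the shape of such a policy rule can be modeled as a Python dict mirroring the policy JSON. The extension type `CustomAntimalware` and the condition fields are illustrative placeholders, not the exam's exact values; only the positions of the two answers (`effect` and the nested `deployment` template) are the point.

```python
# Sketch of a DeployIfNotExists policy rule as a dict mirroring the policy JSON.
# "CustomAntimalware" and the condition fields are illustrative placeholders.
policy_rule = {
    "if": {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines",
    },
    "then": {
        "effect": "deployIfNotExists",  # Box 1: deploy only when the extension is missing
        "details": {
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "existenceCondition": {     # deployment is skipped if a matching extension exists
                "field": "Microsoft.Compute/virtualMachines/extensions/type",
                "equals": "CustomAntimalware",
            },
            "deployment": {             # Box 2: the ARM template that installs the extension
                "properties": {
                    "mode": "incremental",
                    "template": {"resources": ["<extension resource goes here>"]},
                },
            },
        },
    },
}

print(policy_rule["then"]["effect"])  # deployIfNotExists
```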
91
[View Question](https://www.examtopics.com/discussions/databricks/view/13685-exam-az-500-topic-3-question-16-discussion/) You are configuring an Azure Kubernetes Service (AKS) cluster that will connect to an Azure Container Registry. You need to use the auto-generated service principal to authenticate to the Azure Container Registry. What should you create? A. an Azure Active Directory (Azure AD) group B. an Azure Active Directory (Azure AD) role assignment C. an Azure Active Directory (Azure AD) user D. a secret in Azure Key Vault
B. an Azure Active Directory (Azure AD) role assignment The correct answer is B because when an AKS cluster is created, Azure automatically generates a service principal. This service principal needs permissions to access the Azure Container Registry (ACR) to pull images. To grant these permissions, an Azure AD role assignment must be created, assigning the appropriate role (e.g., AcrPull) to the service principal, allowing it to authenticate and interact with the ACR. Options A, C, and D are incorrect: * **A. an Azure Active Directory (Azure AD) group:** While Azure AD groups are useful for managing users and permissions, they are not directly used for granting access to a service principal to an ACR. The service principal itself needs the role assignment. * **C. an Azure Active Directory (Azure AD) user:** This is irrelevant as you are dealing with a service principal (a non-human account) and not a user account. * **D. a secret in Azure Key Vault:** While secrets can store credentials, they don't directly grant permissions. A role assignment is the mechanism to grant the service principal the necessary access rights. Note: While the provided discussion overwhelmingly supports option B, there is one comment suggesting the use of RBAC instead of an Azure AD role assignment. This highlights a possible nuance or ambiguity in terminology. However, within the context of the question and prevalent responses, an Azure AD role assignment is the accepted and correct approach to achieve the required access.
92
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137412-exam-az-500-topic-6-question-24-discussion/) You have an Azure subscription that contains the resources shown in the following table. | Resource | Type | Location | | -------------- | ---------------- | -------- | | VM1 | Virtual Machine | East US | | VNet1 | Virtual Network | East US | | NSG1 | Network Security Group | East US | | Storage1 | Storage Account | West US | You need to configure network connectivity to meet the following requirements: * Communication from VM1 to storage1 must traverse an optimized Microsoft backbone network. * All the outbound traffic from VM1 to the internet must be denied. * The solution must minimize costs and administrative effort. What should you configure for VNet1 and NSG1? To answer, drag the appropriate components to the correct resources. Each component may be used once, more than once, or not at all. **
** The discussion reveals conflicting opinions on the correct answer. The suggested answer image is missing, making a definitive answer based solely on the provided text impossible. However, we can analyze the discussion and arrive at a *likely* correct solution and explanation for why other options are less suitable. **Likely Correct Configuration:** * **VNet1:** Private Endpoint * **NSG1:** Outbound rule denying traffic to the service tag "Internet". **Explanation:** The requirement for communication between VM1 (East US) and Storage1 (West US) over an optimized Microsoft backbone network *while minimizing costs* points to a Private Endpoint. Service Endpoints, while cost-effective, are generally designed for same-region communication, as several users noted. A Private Endpoint provides secure, private connectivity regardless of region. Denying all outbound internet traffic from VM1 necessitates an outbound rule in NSG1 that blocks traffic to the "Internet" service tag. **Why Other Options Are Less Suitable:** * **VNet1: Service Endpoint:** While cost-effective, service endpoints are not suitable for cross-region connectivity between VM1 and Storage1. * **NSG1: Inbound Rule:** Blocking internet access requires an *outbound* rule, not an inbound rule. Inbound rules control traffic *entering* the VM. **Note:** The provided discussion shows disagreement among users regarding the optimal solution. Some suggest a service endpoint, while others strongly advocate for a private endpoint due to the cross-region requirement. This question requires a nuanced understanding of Azure networking and the tradeoffs between different approaches. The answer presented here reflects the more likely and generally accepted solution given the constraints, but the debate highlights the need for careful consideration in real-world scenarios.
93
[View Question](https://www.examtopics.com/discussions/databricks/view/137461-exam-az-500-topic-4-question-119-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You need to add a custom security recommendation to Defender for Cloud. The recommendation must be assigned the custom severity rating of the subscription. What should you create? A. an exemption B. an initiative definition C. a policy definition D. an assignment
C. a policy definition Explanation: To add a custom security recommendation to Azure Defender for Cloud and assign a custom severity rating at the subscription level, you need to create a policy definition. Policy definitions are used to define the rules and desired configurations, including custom recommendations and their severity levels. The policy definition is then assigned (using an assignment) to the subscription scope to enforce the desired security posture. Why other options are incorrect: * **A. an exemption:** Exemptions are used to exclude specific resources from a policy assignment, not to create a new custom recommendation. * **B. an initiative definition:** Initiatives are collections of policy assignments, but you need to define the custom recommendation itself first before grouping it within an initiative. * **D. an assignment:** Assignments apply existing policy definitions to specific scopes. You must have the policy definition created first. Note: The discussion shows unanimous agreement on the correct answer being C.
94
**** [View Question](https://www.examtopics.com/discussions/databricks/view/13747-exam-az-500-topic-13-question-3-discussion/) HOTSPOT - You are evaluating the effect of the application security groups on the network communication between the virtual machines in Sub2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022000001.png) *(Image shows a network diagram with VMs, Subnets, and ASGs)* Show Suggested Answer Hide Answer Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022000002.png) *(Image shows the correct answers to the hotspot questions: Box 1: No, Box 2: Yes, Box 3: Yes)* NSG3 has the inbound security rules shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022100001.png) *(Image shows a table of inbound security rules for NSG3, including source ASGs, protocols, ports, and actions.)* **
** The correct answers are: * **Box 1: No.** VM4 (in Subnet13 with NSG3) will not receive ICMP pings from VM1 (in ASG1). NSG3's rules only allow ICMP from ASG2, not ASG1. Only TCP traffic is permitted from ASG1 to VM4 according to NSG3's rules. * **Box 2: Yes.** VM2 (in ASG2) can receive ICMP pings because NSG3 allows any protocol from ASG2. * **Box 3: Yes.** VM1 (in ASG1) can connect to the web server. NSG3 allows TCP traffic from ASG1, and web server connections typically use TCP ports 80 and 443. **Why other options are incorrect:** The provided solution directly addresses why each box has its respective answer. The discussion highlights a common misunderstanding where users incorrectly assume that "ping" uses TCP instead of ICMP. This is clarified in the solution and the discussion.
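The first-match evaluation behind these three answers can be sketched in Python. The rule set paraphrases the NSG3 table described above (source ASG plus protocol per rule); the priority numbers and the implicit DenyAll rule are assumptions, since the exact table values are in the image.

```python
# Sketch of NSG inbound rule evaluation: rules are processed in priority order
# (lowest number first) and the first matching rule decides. Priorities and the
# implicit DenyAll entry are illustrative assumptions.
RULES = [
    {"priority": 100, "source": "ASG1", "protocol": "TCP", "action": "Allow"},
    {"priority": 110, "source": "ASG2", "protocol": "Any", "action": "Allow"},
    {"priority": 65500, "source": "Any", "protocol": "Any", "action": "Deny"},  # implicit DenyAll
]

def evaluate(source_asg: str, protocol: str) -> str:
    """Return 'Allow' or 'Deny' for inbound traffic from source_asg using protocol."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        source_matches = rule["source"] in (source_asg, "Any")
        protocol_matches = rule["protocol"] in (protocol, "Any")
        if source_matches and protocol_matches:
            return rule["action"]
    return "Deny"

print(evaluate("ASG1", "ICMP"))  # Deny  -> Box 1: ping from VM1 fails (ping is ICMP, not TCP)
print(evaluate("ASG2", "ICMP"))  # Allow -> Box 2: ping from VM2 succeeds (any protocol allowed)
print(evaluate("ASG1", "TCP"))   # Allow -> Box 3: web traffic (TCP 80/443) from VM1 succeeds
```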
95
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137513-exam-az-500-topic-2-question-115-discussion/) You have an Azure subscription that contains an Azure web app named App1. You plan to configure a Conditional Access policy for App1. The solution must meet the following requirements: * Only allow access to App1 from Windows devices. * Only allow devices that are marked as compliant to access App1. Which Conditional Access policy settings should you configure? Drag the appropriate settings to the correct requirements. Each setting may be used once, more than once, or not at all. ![Image](https://img.examtopics.com/az-500/image741.png) **
** * **Requirement 1: Only allow access to App1 from Windows devices:** **Conditions -> Device platforms -> Windows** * **Requirement 2: Only allow devices that are marked as compliant to access App1:** **Grant -> Require device to be marked as compliant** **Explanation:** To restrict access to App1 based on device type (Windows), the "Device platforms" condition within the "Conditions" section of the Conditional Access policy must be configured to include only "Windows". To enforce compliance, the "Grant" controls must include "Require device to be marked as compliant". This ensures that only devices that have been assessed and marked compliant within Azure's device management system will be granted access. **Why other options are incorrect (based on provided text):** The provided text focuses on these two specific requirements. Other potential Conditional Access settings (like user risk, sign-in risk, location, etc.) are not relevant to fulfilling these particular requirements. The discussion mentions other grant controls (MFA, app protection policy etc.), but these are not necessary to fulfill the stated requirements. **Note:** The suggested answer image (image742.png) was not directly included in the provided text, so it couldn't be used for direct comparison. The answer provided here is derived solely from the question and user discussion.
96
[View Question](https://www.examtopics.com/discussions/databricks/view/137514-exam-az-500-topic-3-question-66-discussion/) You have a Microsoft Entra tenant named Contoso.com and an Azure Kubernetes Service (AKS) cluster AKS1. You discover that AKS1 cannot be accessed by using accounts from Contoso.com. You need to ensure AKS1 can be accessed by using accounts from Contoso.com. The solution must minimize administrative effort. What should you do first? A. From Azure, recreate AKS1. B. From AKS1, upgrade the version of Kubernetes. C. From Microsoft Entra, add a Microsoft Entra ID P2 license. D. From Microsoft Entra, configure the User settings.
D. From Microsoft Entra, configure the User settings. The most efficient first step to troubleshoot AKS access issues for Contoso.com accounts is to check and configure the user settings within Microsoft Entra. This involves verifying that the necessary permissions and authentication policies are correctly set up to allow Contoso.com users access to the AKS1 cluster. Recreating the AKS cluster (A) or upgrading Kubernetes (B) are more drastic measures and may not resolve the access issue if the problem lies in the authentication and authorization configuration. Adding a Microsoft Entra ID P2 license (C) is unlikely to be the solution, as access control is independent of licensing. Note: There is some disagreement among the discussion participants regarding the best answer. Some suggest recreating the AKS cluster as the first step, implying that the Azure AD integration might have failed during cluster creation. However, the question asks for the solution that minimizes administrative effort, making the user settings configuration the preferable starting point.
97
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137515-exam-az-500-topic-3-question-67-discussion/) You need to ensure that the events in the NetworkSecurityGroupRuleCounter log of the VNET01-Subnet0-NSG network security group (NSG) are stored in the logs1234578 Azure Storage account. To complete this task, sign in to the Azure portal. (Image shows a screenshot of the Azure portal's Diagnostic settings page, with fields for Name, Log, Destination details, etc.) **
** To configure diagnostic settings for the VNET01-Subnet0-NSG network security group to send NetworkSecurityGroupRuleCounter logs to the logs1234578 Azure Storage account, follow these steps: 1. **Sign in to the Azure portal.** 2. **Navigate to the Network Security Groups section.** This can usually be found by searching for "Network security groups" in the search bar. 3. **Select the VNET01-Subnet0-NSG network security group.** 4. **In the settings pane on the left, click Diagnostic settings.** 5. **Click Add diagnostic setting.** 6. **Provide a name for the setting** (e.g., "NSG-Diagnostics"). 7. **In the Logs section, check NetworkSecurityGroupRuleCounter.** 8. **In the Destination details section, check "Archive to a storage account" and select the logs1234578 storage account.** This assumes the storage account is already configured to receive diagnostic logs. 9. **Click Save.** **Why other options are incorrect (N/A):** No other options are provided in the source material; the solution is a step-by-step guide based on user comments. **Note:** Some comments in the discussion select "Send to Log Analytics workspace" instead. Log Analytics is a common and often preferred destination for centrally collecting and analyzing NSG data, but the task explicitly requires the events to be stored in the logs1234578 storage account, so "Archive to a storage account" is the destination that satisfies the stated requirement.
98
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137568-exam-az-500-topic-6-question-23-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image744.png) You create an Azure DDoS Protection plan named DDoS1 in the West US Azure region. Which resources can you add to DDoS1? A. VNet1 only B. WebApp1 only C. VNet1 and VNet2 only D. VNet1 and WebApp1 only E. VNet1, VNet2, and WebApp1 **
** C. VNet1 and VNet2 only Azure DDoS Protection plans protect resources within virtual networks. WebApp1, being an App Service resource, is not directly protected by a DDoS Protection plan. Only virtual networks (VNet1 and VNet2 in this case) can be directly added to a DDoS protection plan. While a web app can be indirectly protected (e.g., through a protected virtual network or WAF), it cannot be directly added to the plan. **Why other options are incorrect:** * **A. VNet1 only:** This is incorrect because VNet2 is also within the subscription and can be added to the DDoS protection plan. * **B. WebApp1 only:** Incorrect. Web Apps cannot be directly added to a DDoS Protection plan. * **D. VNet1 and WebApp1 only:** Incorrect because WebApp1 cannot be directly added. * **E. VNet1, VNet2, and WebApp1:** Incorrect because WebApp1 cannot be directly added. **Note:** The discussion shows some disagreement on the correct answer. While some users initially suggest that Web Apps can be indirectly protected and thus included (Option E), the consensus and ultimately correct interpretation is that only virtual networks can be *directly* added to a DDoS Protection plan. The indirect protection methods are not within the scope of this specific question.
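As a sketch, the answer reduces to a filter on resource type, since a DDoS Protection plan accepts only virtual networks. The resource type strings are the standard Azure provider types; the region of the plan does not restrict which VNets can join it.

```python
# Sketch: only virtual networks can be added directly to a DDoS Protection plan,
# so eligibility is a simple type filter over the subscription's resources.
RESOURCES = {
    "VNet1": "Microsoft.Network/virtualNetworks",
    "VNet2": "Microsoft.Network/virtualNetworks",
    "WebApp1": "Microsoft.Web/sites",  # App Service apps cannot be added directly
}

def ddos_protectable(resources: dict) -> list:
    """Names of resources that can be added directly to a DDoS Protection plan."""
    return sorted(name for name, rtype in resources.items()
                  if rtype == "Microsoft.Network/virtualNetworks")

print(ddos_protectable(RESOURCES))  # ['VNet1', 'VNet2'] -> answer C
```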
99
[View Question](https://www.examtopics.com/discussions/databricks/view/137600-exam-az-500-topic-7-question-12-discussion/) You have an Azure subscription that contains an Azure Data Lake Storage account named sa1. You plan to deploy an app named App1 that will access sa1 and perform operations, including Read, List, Create Directory, and Delete Directory. You need to ensure that App1 can connect securely to sa1 by using a private endpoint. What is the minimum number of private endpoints required for sa1? A. 1 B. 2 C. 3 D. 4 E. 5
B. 2 Explanation: The correct answer is 2 because Data Lake Storage Gen2 (which is what sa1 likely is, given the operations listed) exposes two sub-resources, `dfs` and `blob`, and these are closely intertwined: operations targeting the Data Lake Storage Gen2 (`dfs`) endpoint may be redirected to the Blob (`blob`) endpoint. Since each private endpoint targets exactly one sub-resource, ensuring that all operations (Read, List, Create Directory, Delete Directory) succeed requires private endpoints for *both* the `dfs` and `blob` sub-resources. Creating a private endpoint for only one will result in some operations failing. Why other options are incorrect: * **A. 1:** Incorrect; as explained above, one private endpoint is insufficient to cover all operations. * **C. 3, D. 4, E. 5:** These are excessive. The minimum required is two. Note: The discussion shows some disagreement on the answer; one user incorrectly suggests 1. However, the explanation provided by Nava702 and the cited Microsoft documentation strongly support the answer of 2.
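The counting argument can be sketched in Python, assuming each private endpoint targets exactly one storage sub-resource. The operation-to-endpoint split below is illustrative rather than an exact API mapping; the point is only that the four operations collectively touch both sub-resources.

```python
# Sketch: one private endpoint is needed per distinct storage sub-resource that
# the app's operations touch. The mapping below is an illustrative assumption
# (directory operations are dfs-specific; reads may be served via the blob
# endpoint), not an exact API reference.
OPERATION_ENDPOINT = {
    "Read": "blob",
    "List": "dfs",
    "Create Directory": "dfs",
    "Delete Directory": "dfs",
}

def required_private_endpoints(operations) -> set:
    """Distinct sub-resources (hence private endpoints) the operations require."""
    return {OPERATION_ENDPOINT[op] for op in operations}

needed = required_private_endpoints(["Read", "List", "Create Directory", "Delete Directory"])
print(sorted(needed), len(needed))  # ['blob', 'dfs'] 2
```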
100
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137715-exam-az-500-topic-2-question-112-discussion/) You have a Microsoft Entra tenant that contains the users shown in the following table. (Image shows a table with users User1, User2, Admin1, Admin2, and their respective group memberships: Group1, Group2). You configure the Temporary Access Pass settings as shown in the following exhibit. (Image shows Temporary Access Pass settings, including roles with permissions to create, delete, and view TAPs). You add the Temporary Access Pass authentication method to Admin2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. (Image shows three statements: 1. User1 can create a temporary access pass for Admin2. 2. Admin1 can create a temporary access pass for User1. 3. Admin2 can create a temporary access pass for themselves.) **
** The correct answers are: 1. **No:** User1 cannot create a temporary access pass for Admin2. Based on the provided information regarding roles and permissions for creating TAPs, User1 does not have the necessary permissions. The image shows User1 is not a member of Group1, further limiting their access. 2. **Yes:** Admin1 can create a temporary access pass for User1. Admins typically have the permission to manage TAPs for other users. The provided text describing roles and permissions supports this. 3. **No:** Admin2 cannot create a temporary access pass for themselves. The text explicitly states that users cannot create TAPs for themselves, regardless of their role. **Why other options are incorrect:** The explanation for each answer above details why the specific options are incorrect based on the provided information about user roles and permissions regarding Temporary Access Pass creation. **Note:** The provided text mentions different roles (Global Administrators, Privileged Authentication Administrators, Authentication Administrators, Global Readers) and their respective permissions regarding TAP management. While the question does not explicitly define the roles of User1 and Admin1, the inference is made based on typical role-based access control (RBAC) models. If User1 or Admin1 had different assigned roles than those implied, the answers could change. There is no direct definitive mapping of users to roles in the presented material beyond Admin2 having a TAP method added.
101
[View Question](https://www.examtopics.com/discussions/databricks/view/137734-exam-az-500-topic-2-question-113-discussion/) Your network contains an on-premises Active Directory domain named adatum.com that syncs to a Microsoft Entra tenant. The Microsoft Entra tenant contains the users shown in the following table. (Image of user table omitted as not provided in text). You configure the Microsoft Entra Password Protection settings for adatum.com as shown in the following exhibit. (Image of password protection settings omitted as not provided in text). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image of statements to evaluate omitted as not provided in text).
The provided text does not include the statements to evaluate, making a definitive answer impossible. The suggested answer image is also not included. The discussion mentions a similar question (#62 Topic 2), suggesting that the question and answer are likely available elsewhere. Without the missing images containing the statements, no answer can be given, and the individual options cannot be evaluated or explained as incorrect.
102
**** [View Question](https://www.examtopics.com/discussions/databricks/view/137764-exam-az-500-topic-2-question-114-discussion/) You have a Microsoft Entra tenant that contains the users shown in the following table. | User | Group Membership | |---|---| | User1 | Group1 | | User2 | Group2 | | User3 | Group1, Group2 | From Microsoft Entra Privileged Identity Management (PIM), you configure the settings for the Security Administrator role as shown in the following exhibit. *(Note: The image depicting the PIM settings is missing from the provided text, but the crucial information is included in the following text)*. The settings include: no approval required for the Security Administrator role. From PIM, you assign the Security Administrator role to the following groups: • Group1: Active assignment type, permanently assigned • Group2: Eligible assignment type, permanently eligible For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. User1 requires approval to activate the Security Administrator role. 2. User2 requires approval to activate the Security Administrator role. 3. User3 requires approval to activate the Security Administrator role. **
** 1. **No**. User1 is a member of Group1, which has an "Active assignment type, permanently assigned". This means User1 holds the Security Administrator role continuously, with no activation and therefore no approval required. 2. **No**. User2 is a member of Group2, which has an "Eligible assignment type, permanently eligible". User2 must activate the role before using it, but the role settings explicitly state that *no approval* is required, so activation proceeds without approval. Since the statement asks specifically about approval, not activation, the answer is No. 3. **No**. User3 is a member of both Group1 and Group2. Because User3 holds an active assignment through Group1, the role is permanently assigned and no activation or approval is needed. **Why other options are incorrect:** The explanations above detail why each answer is correct and implicitly explain why the opposite is incorrect. The key is the difference between "Active assignment" (the role is granted immediately and permanently) and "Eligible assignment" (the role must be activated, with approval only if the role settings demand it, which here they do not) in Microsoft Entra PIM. **Note:** The provided text doesn't show the image of the PIM settings; however, the crucial detail that approval is not required is given in the text, and the answers follow from it.
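The active-versus-eligible logic can be sketched in Python. The function returns a pair (activation needed, approval needed); the no-approval role setting is taken from the question text, and the group data from the tables above.

```python
# Sketch of the PIM logic in this question: an Active assignment grants the role
# outright, an Eligible assignment requires self-activation, and whether that
# activation needs approval is governed by the role setting (here: no approval).
GROUP_ASSIGNMENT = {"Group1": "Active", "Group2": "Eligible"}
MEMBERSHIP = {"User1": ["Group1"], "User2": ["Group2"], "User3": ["Group1", "Group2"]}
APPROVAL_REQUIRED = False  # per the Security Administrator role settings in the exhibit

def activation_and_approval(user: str):
    """Return (activation_needed, approval_needed) for the Security Administrator role."""
    assignments = {GROUP_ASSIGNMENT[g] for g in MEMBERSHIP[user]}
    if "Active" in assignments:
        return (False, False)          # role held permanently; nothing to activate
    return (True, APPROVAL_REQUIRED)   # must self-activate; approval per role setting

print(activation_and_approval("User1"))  # (False, False)
print(activation_and_approval("User2"))  # (True, False) -> activation yes, approval no
print(activation_and_approval("User3"))  # (False, False) -> Active membership wins
```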
103
**** [View Question](https://www.examtopics.com/discussions/databricks/view/138312-exam-az-500-topic-7-question-10-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-500/image748.png) The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image749.png) You have a storage account named contoso2024 that contains the following resources: • A container named Container1 that contains a file named File1 • A file share named Share1 that contains a file named File2 You create a private endpoint for contoso2024 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image750.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image751.png) **
** The correct answers are: * **Statement 1: Yes.** The private endpoint is accessible via the peered VNet (VNet1). The provided discussion explicitly states this. VNet peering allows communication between VNets, and the private endpoint, being within VNet2, is accessible from the peered VNet1, provided appropriate DNS resolution is configured. * **Statement 2: Yes.** The private endpoint is assigned to VNet2 and Subnet21. This is clearly shown in the provided image depicting the private endpoint configuration. * **Statement 3: No.** The private endpoint is configured for Blob storage, not file shares. The discussion supports this, pointing out that the private endpoint settings are targeted at blobs, not file shares. Access to File2 within Share1 would therefore not be possible through this private endpoint. **Why other options are incorrect:** The explanation above details why each answer is correct or incorrect, based on the provided materials and discussion. **Note:** The discussion shows some disagreement and clarification about network configurations needed for this scenario to work correctly (especially concerning DNS resolution). The provided answers are based on the assumption that necessary configurations (like appropriate DNS settings) are already in place to enable communication through the peering. The answer also acknowledges the private endpoint being specifically configured for blob storage, which is a key detail determining access to specific resources within the storage account.
104
**** [View Question](https://www.examtopics.com/discussions/databricks/view/13837-exam-az-500-topic-3-question-32-discussion/) You have Azure virtual machines that have Update Management enabled. The virtual machines are configured as shown in the following table. | VM Name | OS Name | Region | Resource Group | Subscription | | :------- | :------------------ | :-------- | :------------- | :----------- | | VM1 | Windows Server 2016 | East US | RG1 | Sub1 | | VM2 | Windows Server 2016 | West US | RG2 | Sub2 | | VM3 | Windows Server 2016 | East US | RG1 | Sub1 | | VM4 | CentOS 7.5 | East US | RG1 | Sub1 | | VM5 | CentOS 7.5 | West US | RG2 | Sub2 | | VM6 | CentOS 7.5 | East US | RG1 | Sub1 | You schedule two update deployments named Update1 and Update2. Update1 updates VM3. Update2 updates VM6. Which additional virtual machines can be updated by using Update1 and Update2? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** Update1 (updating VM3, Windows Server 2016) can additionally update VM1 and VM2. Update2 (updating VM6, CentOS 7.5) can additionally update VM4 and VM5.

**Explanation:** Update Management groups VMs by operating system: Windows VMs can be included only in update deployments with other Windows VMs, and Linux VMs only with other Linux VMs. The region, resource group, and subscription of a VM do not affect its eligibility for an update deployment, as long as the OS matches. Because VM1 and VM2 also run Windows Server 2016, they can be included in Update1; because VM4 and VM5 also run CentOS 7.5, they can be included in Update2.

**Why other options are incorrect:** Any option that mixes Windows and Linux VMs within a single update deployment would be incorrect based on the provided information.

**Note:** The discussion shows unanimous agreement on the answer.
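The OS-based grouping rule described above can be sketched in a few lines. This is illustrative only; the VM names and OS values come straight from the question's table, and the function simply applies the "same OS type" eligibility rule.

```python
# VM inventory from the question's table (name -> OS).
vms = {
    "VM1": "Windows Server 2016", "VM2": "Windows Server 2016",
    "VM3": "Windows Server 2016", "VM4": "CentOS 7.5",
    "VM5": "CentOS 7.5", "VM6": "CentOS 7.5",
}

def eligible_targets(seed_vm: str) -> set:
    """Return the other VMs that may join a deployment seeded by one VM.

    Update Management groups deployments by OS only; region, resource
    group, and subscription are ignored.
    """
    os_name = vms[seed_vm]
    return {name for name, os_ in vms.items()
            if os_ == os_name and name != seed_vm}

print(eligible_targets("VM3"))  # Update1 -> {'VM1', 'VM2'}
print(eligible_targets("VM6"))  # Update2 -> {'VM4', 'VM5'}
```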
105
**** [View Question](https://www.examtopics.com/discussions/databricks/view/139414-exam-az-500-topic-2-question-116-discussion/)

Your network contains an on-premises Active Directory domain that syncs to a Microsoft Entra tenant. The tenant contains the users shown in the following table.

| User Name | Group Membership |
|---|---|
| User1 | Group1 |
| User2 | Group2 |
| User3 | Group1 |

The tenant contains the groups shown in the following table.

| Group Name | Members |
|---|---|
| Group1 | User1, User3 |
| Group2 | User2 |

You configure a multi-factor authentication (MFA) registration policy that has the following settings:

• Assignments:
  o Include: Group1
  o Exclude: Group2
• Controls: Require Azure MFA registration
• Enforce Policy: On

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

| Statement | True/False |
|---|---|
| User1 is required to register for MFA | |
| User2 is required to register for MFA | |
| User3 is required to register for MFA | |

**
** Yes, No, Yes

* **User1:** Yes. User1 is a member of Group1, which is included in the MFA registration policy, so User1 is required to register for MFA.
* **User2:** No. User2 is a member of Group2, which is explicitly excluded from the MFA registration policy, so User2 is not required to register for MFA.
* **User3:** Yes. User3 is a member of Group1, which is included in the MFA registration policy, so User3 is required to register for MFA.

The on-premises sync is irrelevant because the policy applies to users already present in the Azure AD tenant.

**Why other options are incorrect:** The discussion shows disagreement on the interpretation of the on-premises sync's impact. Some users incorrectly believe that because the users are synced from on-premises AD, the MFA policy doesn't apply. However, the question clearly states that the users are *already* in the Microsoft Entra tenant. The policy operates within the tenant and is not contingent on the synchronization process itself. The provided answer correctly interprets the policy's scope and application based solely on group membership.
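The include/exclude evaluation above can be modeled as a small function. This is a minimal sketch, assuming the standard rule that an exclusion always takes precedence over an inclusion; the group memberships come from the question's tables.

```python
# Group memberships from the question's tables.
memberships = {"User1": {"Group1"}, "User2": {"Group2"}, "User3": {"Group1"}}

# Policy assignments: include Group1, exclude Group2.
include, exclude = {"Group1"}, {"Group2"}

def must_register(user: str) -> bool:
    """True if the MFA registration policy applies to this user."""
    groups = memberships[user]
    if groups & exclude:           # an exclusion wins over any inclusion
        return False
    return bool(groups & include)  # otherwise, in scope if included

for user in sorted(memberships):
    print(user, must_register(user))  # User1 True, User2 False, User3 True
```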
106
[View Question](https://www.examtopics.com/discussions/databricks/view/139416-exam-az-500-topic-2-question-118-discussion/)

You have a Microsoft Entra tenant named contoso.com. You have a partner company that has a Microsoft Entra tenant named fabrikam.com. You need to ensure that when a user in fabrikam.com attempts to access resources in contoso.com, the user only receives a single Microsoft Entra Multi-Factor Authentication (MFA) prompt. The solution must minimize administrative effort. What should you do?

A. From the Azure portal of contoso.com, configure the inbound access default settings.
B. From the Azure portal of contoso.com, configure the External collaboration settings.
C. From the Azure portal of contoso.com, configure the outbound access default settings.
D. From the Azure portal of fabrikam.com, configure the outbound access default settings.
The correct answer is **A. From the Azure portal of contoso.com, configure the inbound access default settings.**

To ensure a single MFA prompt for users from fabrikam.com accessing contoso.com resources, you need to configure the inbound trust settings in contoso.com's Azure portal. This allows contoso.com to trust the MFA claims from fabrikam.com, preventing a second MFA prompt. Inbound settings specifically govern authentication requests coming *into* contoso.com from external tenants.

Why other options are incorrect:

* **B. From the Azure portal of contoso.com, configure the External collaboration settings:** Incorrect. External collaboration settings manage guest access and permissions, not the MFA trust relationship between tenants. The discussion highlights this point.
* **C. From the Azure portal of contoso.com, configure the outbound access default settings:** Incorrect because outbound settings deal with authentication requests originating from *within* contoso.com and going *out*. The scenario requires configuring settings within contoso.com to handle incoming requests.
* **D. From the Azure portal of fabrikam.com, configure the outbound access default settings:** Incorrect. The configuration change belongs in the tenant being accessed (contoso.com), not the tenant originating the access (fabrikam.com).

Note: The discussion shows some disagreement about the best answer. While option A is the most widely accepted solution to fulfill the prompt's requirements, some users suggest that a more targeted approach (configuring the settings only for fabrikam.com instead of the default inbound settings) would be better practice. Based purely on the question and the information provided, option A remains the best fit.
107
**** [View Question](https://www.examtopics.com/discussions/databricks/view/139453-exam-az-500-topic-2-question-119-discussion/)

You have a Microsoft Entra tenant. On January 1, you configure a multi-factor authentication (MFA) registration policy that has the following settings:

• Assignments: All users
• Require Microsoft Entra ID multifactor authentication registration: Enabled
• Enforce policy: On

On January 3, you create two new users named User1 and User2. On January 5, User1 authenticates to Microsoft Entra ID for the first time. On January 7, User2 authenticates to Microsoft Entra ID for the first time. On which date will User1 and User2 be forced to register for MFA? To answer, drag the appropriate dates to the correct users. Each date may be used once, more than once, or not at all. **
** User1 will be forced to register for MFA on January 19, and User2 will be forced to register for MFA on January 21.

**Explanation:** The policy requires all users to register for MFA upon their first authentication; however, there is a 14-day grace period before enforcement. User1's first sign-in (January 5) plus 14 days is January 19; User2's first sign-in (January 7) plus 14 days is January 21.

**Why other options are incorrect:** The discussion mentions conflicting information regarding the 14-day grace period. Some comments suggest the grace period may no longer exist (as of January 2025), while others confirm a 14-day grace period. The provided answer is based on the commonly understood 14-day grace period. If the grace period were removed, the answers would be January 5 for User1 and January 7 for User2. This answer reflects the most prevalent understanding in the discussion while acknowledging the existing disagreement.
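The date arithmetic behind the answer is simple enough to verify directly. The sketch below assumes the commonly cited 14-day grace period starting at a user's first sign-in; the year is arbitrary since the question gives only month and day.

```python
from datetime import date, timedelta

GRACE = timedelta(days=14)  # assumed 14-day MFA registration grace period

def enforcement_date(first_sign_in: date) -> date:
    """Date on which MFA registration is enforced for a user."""
    return first_sign_in + GRACE

print(enforcement_date(date(2024, 1, 5)))  # User1 -> 2024-01-19
print(enforcement_date(date(2024, 1, 7)))  # User2 -> 2024-01-21
```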
108
[View Question](https://www.examtopics.com/discussions/databricks/view/139490-exam-az-500-topic-2-question-117-discussion/)

You have a Microsoft Entra tenant named contoso.com. You plan to collaborate with a partner organization that has a Microsoft Entra tenant named fabrikam.com. Fabrikam.com uses the following identity providers:

• Google Cloud Platform (GCP)
• Microsoft accounts
• Microsoft Entra ID

You need to configure the Cross-tenant access settings for B2B collaboration. Which identity providers support cross-tenant access?

A. Microsoft Entra ID only
B. GCP and Microsoft Entra ID only
C. Microsoft accounts and Microsoft Entra ID only
D. GCP, Microsoft accounts, and Microsoft Entra ID
A. Microsoft Entra ID only

Cross-tenant access, in the context of Microsoft Entra ID's B2B collaboration, refers specifically to interaction and access between different Microsoft Entra ID tenants. Other identity providers, such as GCP or Microsoft accounts, may be used for general B2B access but do not participate directly in the cross-tenant access settings configuration within Microsoft Entra ID. Therefore, only Microsoft Entra ID supports cross-tenant access in this scenario.

Why other options are incorrect:

* **B, C, and D:** These options incorrectly include GCP and/or Microsoft accounts. While these can be used for broader B2B collaboration, they are not involved in the *cross-tenant access settings* specifically designed for interactions between Microsoft Entra ID tenants.

Note: The discussion shows some disagreement on the correct answer. While several participants initially selected D, the consensus and the provided explanation lean toward A as the most accurate answer, reflecting the specific functionality of cross-tenant access settings within Microsoft Entra ID.
109
[View Question](https://www.examtopics.com/discussions/databricks/view/139522-exam-az-500-topic-4-question-121-discussion/)

You have an Azure subscription named Sub1 that contains two resource groups named RGnet and NET. You have the Azure Policy definition shown in the following exhibit.

![Image](https://img.examtopics.com/az-500/image761.png)

You assign the policy definition to Sub1 and NET. You plan to deploy the resources shown in the following table.

![Image](https://img.examtopics.com/az-500/image762.png)

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

![Image](https://img.examtopics.com/az-500/image763.png)
Y Y N

* **Statement 1 (VNet1 to RGnet): Yes.** The policy blocks only deployments of resources that do *not* belong to the `Microsoft.Network` resource provider. VNet1 is a virtual network and belongs to `Microsoft.Network`, so it is allowed. The "net" in the resource group name RGnet is irrelevant here.
* **Statement 2 (Storage1 to NET): Yes.** Storage1 belongs to `Microsoft.Storage`, not `Microsoft.Network`. Although the policy is assigned to both the subscription and the NET resource group, per the discussion the policy's conditions as shown in the exhibit do not block this particular deployment to the NET resource group.
* **Statement 3 (Storage1 to RGnet): No.** Storage1 (resource provider `Microsoft.Storage`) is blocked by the policy because it does *not* belong to `Microsoft.Network`. Again, the "net" in the resource group name RGnet is irrelevant, because the policy condition evaluates the resource provider, not the name.

The explanation provided by Codelawdepp accurately reflects the correct answer. There is no significant disagreement in the discussion.
110
**** [View Question](https://www.examtopics.com/discussions/databricks/view/139579-exam-az-500-topic-6-question-25-discussion/)

You have an Azure subscription that contains an Azure firewall named AzFW1. AzFW1 has a firewall policy named FWPolicy1. You need to add rule collections to FWPolicy1 to meet the following requirements:

• Allow traffic based on the FQDN of the destination.
• Allow TCP traffic.

Which types of rule collections should you add for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

![Image](https://img.examtopics.com/az-500/image766.png) (This image shows a multiple choice question with options for Network rules and Application rules, with sub-options for each.)

![Image](https://img.examtopics.com/az-500/image767.png) (This image shows the suggested answer, indicating Application rules for FQDN-based traffic and Network rules for TCP traffic.)

**
** To meet the requirements, you should add the following rule collections:

* **For allowing traffic based on the FQDN of the destination: Application rules.** Application rules in Azure Firewall allow you to filter traffic based on FQDNs, which is crucial for controlling access to specific web services.
* **For allowing TCP traffic: Network rules.** Network rules provide granular control over network traffic based on IP addresses, ports, and protocols. Since the requirement is simply to allow TCP traffic, network rules are the appropriate choice.

The discussion mentions three rule types in Azure Firewall: DNAT, Network rules, and Application rules. Although the discussion notes the processing order (DNAT, then Network, then Application), that order does not change which rule type suits each requirement, since both requirements can be fulfilled simultaneously.

**Why other options are incorrect:** Using the wrong rule type would not allow the specified traffic; for example, Network rules cannot filter by domain name, so they could not satisfy the FQDN requirement.

**Note:** The discussion contains additional information about Azure Firewall deployment and rule priorities. However, that extra information is not directly relevant to selecting the correct rule collection types for these requirements.
111
[View Question](https://www.examtopics.com/discussions/databricks/view/139701-exam-az-500-topic-4-question-123-discussion/)

You have an Azure subscription that contains the Azure App Service web apps shown in the following table.

| App Name | Resource Group | Region | OS | App Service Plan |
|---|---|---|---|---|
| App1 | RG1 | West US | Linux | ASP1 |
| App2 | RG1 | West US | Linux | ASP1 |
| App3 | RG2 | West US | Windows | ASP2 |
| App4 | RG1 | East US | Linux | ASP1 |

You upload a private key certificate named Cert1.pfx to App1. Which apps can use Cert1?

A. App1 only
B. App1 and App2 only
C. App1 and App4 only
D. App1, App2, and App3 only
E. App1, App2, App3, and App4
A. App1 only

Explanation: Private certificates in Azure App Service are shared only within the same deployment unit, which is defined by the combination of resource group, region, and operating system. App1 and App2 share the same resource group (RG1), region (West US), and OS (Linux). However, the certificate is uploaded specifically to App1, and access by another app must be explicitly granted: the Microsoft documentation confirms that private certificates are *not automatically* shared across apps, even within the same deployment unit. App3 is in a different resource group (RG2) and runs Windows, so it cannot access Cert1. App4 shares the resource group and OS with App1 but is in a different region (East US), so it cannot access Cert1 either.

Why other options are incorrect:

* **B, C, D, and E:** These options incorrectly assume that private certificates are automatically shared across all apps within the same resource group, or across different regions and operating systems, which is not the case. The certificate is confined to its originating app unless explicitly configured otherwise. The discussion highlights that while sharing *is possible* within the same deployment unit, it does not happen automatically. Therefore, only App1 has access to the uploaded certificate.
112
**** [View Question](https://www.examtopics.com/discussions/databricks/view/139717-exam-az-500-topic-6-question-26-discussion/)

You have an Azure subscription that contains the virtual networks shown in the following table.

![Image](https://img.examtopics.com/az-500/image768.png)

The subscription contains the virtual machines shown in the following table.

![Image](https://img.examtopics.com/az-500/image769.png)

All the virtual machines have only private IP addresses. You deploy Azure Bastion to VNet1 as shown in the following exhibit.

![Image](https://img.examtopics.com/az-500/image770.png)

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

![Image](https://img.examtopics.com/az-500/image771.png)

**
** The correct answers are:

1. **Yes:** You can connect to VM1 via Bastion using the Remote Desktop Connection client. VM1 is in VNet1, where Azure Bastion is deployed, and Bastion allows RDP connections to VMs within the same VNet.
2. **Yes:** You can connect to VM2 via Bastion using the Remote Desktop Connection client. VM2 is in VNet2, which is peered with VNet1 where Azure Bastion is located; the peering allows Bastion to reach VMs in VNet2.
3. **No:** You cannot connect to VM3 via Bastion using the Remote Desktop Connection client. VM3 resides in VNet3, which is *not* peered with VNet1 (where the Bastion is). Without peering, Azure Bastion cannot reach VMs in VNet3.

**Explanation of the disagreement:** Users in the discussion disagree about whether the Azure Bastion Basic SKU supports RDP connections. Some claim the Basic SKU does not support RDP, while others contend that it does. The answers above assume the deployed Bastion SKU *does* support RDP; if the deployment used a Basic SKU without RDP support, answers 1 and 2 would be incorrect. The correct answer therefore hinges on the unspecified Bastion SKU, and further detail about the deployment would be needed to fully resolve the discrepancy.
113
**** [View Question](https://www.examtopics.com/discussions/databricks/view/139794-exam-az-500-topic-4-question-120-discussion/)

You have an Azure key vault. You need to delegate administrative access to the key vault to meet the following requirements:

• Provide a user named User1 with the ability to set access policies for the key vault.
• Provide a user named User2 with the ability to add and delete certificates in the key vault.
• Use the principle of least privilege.

What should you use to assign access to each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

![Image](https://img.examtopics.com/az-500/image759.png) (This image shows a table with User1 and User2, and checkboxes for "Key Vault Access Policy" and "Azure RBAC Role")

**
** For **User1**, assign the **Azure RBAC role** "Key Vault Administrator". For **User2**, assign a **Key Vault access policy** that grants permission to add and delete certificates.

**Explanation:** User1 needs to manage access policies, which is a management-plane operation. Key Vault access policies control data-plane access (reading, writing, and deleting keys and secrets), while Azure RBAC roles govern access to the Key Vault resource itself, including the ability to set access policies. An RBAC role with sufficient permissions is therefore necessary for User1, and the "Key Vault Administrator" role fulfills this requirement with the principle of least privilege.

User2 only needs to manage certificates within the Key Vault (a data-plane operation). While RBAC roles *can* grant this access, a Key Vault access policy is more precise and adheres to the principle of least privilege by granting only the specific certificate-management permissions. Assigning an access policy directly to User2 avoids the unnecessary administrative privileges that would come with a broader RBAC role.

**Why other options are incorrect:** Assigning a Key Vault access policy to User1 would grant only data-plane access, not the management-plane access needed to set access policies. An RBAC role other than "Key Vault Administrator" might grant the needed capability but could also grant excess privileges, violating the principle of least privilege. Assigning only an RBAC role to User2 might be overly permissive; a Key Vault access policy allows finer-grained control.

**Note:** The discussion shows some disagreement on the specific RBAC role needed for User2, with suggestions including "Key Vault Certificates Officer." However, the chosen solution prioritizes least privilege by using a Key Vault access policy for more granular control.
114
[View Question](https://www.examtopics.com/discussions/databricks/view/139916-exam-az-500-topic-4-question-122-discussion/)

Your company has an Azure subscription named Sub1. You plan to create several security alerts by using Azure Monitor. You need to prepare Sub1 for the alerts. What should you create first?

A. an Azure Automation account
B. an Azure event hub
C. an Azure Log Analytics workspace
D. an Azure Storage account
C. An Azure Log Analytics workspace

Azure Monitor uses Log Analytics workspaces to store and analyze the data collected from various Azure services and resources. Before you can create security alerts, you need a workspace to store the security logs that will trigger those alerts. Therefore, creating a Log Analytics workspace is the first necessary step.

Why other options are incorrect:

* **A. An Azure Automation account:** Azure Automation can be used for various tasks, including reacting to alerts, but it is not the primary storage location for the security logs Azure Monitor uses to generate alerts. It is a reaction mechanism *after* an alert is triggered.
* **B. An Azure event hub:** Event Hubs are used for high-throughput data ingestion, often as a pipeline *into* a Log Analytics workspace. They are not the primary repository for the data Azure Monitor uses to generate alerts.
* **D. An Azure Storage account:** Azure Storage can hold various types of data, but it is not used by Azure Monitor for security alerts in the way a Log Analytics workspace is. Log Analytics is optimized for log analysis and querying, making it far more suitable for this purpose.

There is a consensus among the users in the provided discussion that the correct answer is C.
115
[View Question](https://www.examtopics.com/discussions/databricks/view/141309-exam-az-500-topic-7-question-11-discussion/)

You have an Azure subscription that contains an Azure Kubernetes Service (AKS) cluster named AKS1. You have an Azure container registry that stores container images that were deployed by using Azure DevOps Microsoft-hosted agents. You need to ensure that administrators can access AKS1 only from specific networks. The solution must minimize administrative effort. What should you configure for AKS1?

A. authorized IP address ranges
B. an Application Gateway Ingress Controller (AGIC)
C. a private endpoint
D. a private cluster
A. authorized IP address ranges

Explanation: The question asks for a solution to restrict administrator access to AKS1 to specific networks while minimizing administrative effort. Configuring authorized IP address ranges on the cluster's API server limits access to only the specified IP addresses or ranges, fulfilling the requirement with minimal configuration.

Why other options are incorrect:

* **B. An Application Gateway Ingress Controller (AGIC):** AGIC manages ingress traffic to applications *within* the cluster; it does not control access to the AKS cluster's API server, which is what needs to be secured.
* **C. A private endpoint:** A private endpoint provides private connectivity within a virtual network, but it does not inherently restrict access based on specific IP ranges or control *who* can connect within that network.
* **D. A private cluster:** A private cluster limits API server access to a private virtual network. This is more restrictive than needed and does not directly control access from *specific* IP addresses or ranges within that network.

Note: The discussion shows some disagreement on the correct answer. While the majority leans toward A, some suggest D or C. The explanation above clarifies why A is the most efficient option and most directly addresses the prompt's constraints.
116
**** [View Question](https://www.examtopics.com/discussions/databricks/view/147121-exam-az-500-topic-6-question-29-discussion/)

You have an Azure subscription that contains a virtual machine named VM1. You have a network security group (NSG) named NSG1 that is associated to the network interface of VM1 and is configured as shown in the following exhibit.

![Image](https://img.examtopics.com/az-500/image785.png)

Just-in-time (JIT) VM access is enabled on VM1 and has the following configurations:

• Management ports: 3389, 22
• Maximum time range: 3 hours
• Allowed source IP addresses: Any

You activate the JIT rule and connect to VM1 by using SSH. For each of the following statements, select Yes if the statement is true, otherwise select No. NOTE: Each correct selection is worth one point.

![Image](https://img.examtopics.com/az-500/image786.png)

**
** The correct answers are No, No, Yes. There is disagreement among the discussants regarding the first two statements.

* **Statement 1: The NSG rules take precedence over the JIT access rules.** **No.** JIT access rules override NSG rules for the specified ports during the JIT access window. Even if the NSG contains deny rules, activating JIT creates a temporary allow rule that supersedes them.
* **Statement 2: After 3 hours, the JIT access will be automatically disabled.** **No.** Although the *maximum* time range is 3 hours, per the discussion the access remains usable as long as the connection is maintained; if the connection is dropped, a new JIT request is required.
* **Statement 3: The SSH connection is allowed.** **Yes.** Port 22 (SSH) is explicitly listed as a management port in the JIT configuration, and the JIT rule takes precedence over the NSG rules, so the SSH connection is permitted.

**Why other options are incorrect:** The conflicting opinions center on whether JIT rules take precedence over NSG rules. Some users incorrectly believe the NSG rules would still take priority, leading to incorrect answers. The correct understanding is that JIT creates a temporary exception that overrides the NSG during the activation period.
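The precedence logic described above can be sketched as a tiny evaluation function. This is illustrative only: the ports and the assumption that the NSG denies inbound management traffic mirror the scenario, and the point shown is that an active JIT allow rule is consulted before the NSG's own rules.

```python
# JIT configuration from the question: ports 3389 and 22, currently active.
jit = {"active": True, "ports": {3389, 22}}

# Assumed NSG behavior for illustration: inbound management ports denied.
nsg_denied_ports = {22, 3389}

def inbound_allowed(port: int) -> bool:
    """Evaluate inbound access: JIT's temporary allow rule wins while active."""
    if jit["active"] and port in jit["ports"]:
        return True                      # JIT rule overrides the NSG deny
    return port not in nsg_denied_ports  # otherwise the NSG decides

print(inbound_allowed(22))  # True: SSH is permitted while JIT is active
```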
117
**** [View Question](https://www.examtopics.com/discussions/databricks/view/147143-exam-az-500-topic-6-question-30-discussion/)

You have an on-premises network. You have an Azure subscription that contains the resources shown in the following table.

| Resource | Type | Location | SKU |
|---|---|---|---|
| VNet1 | Virtual Network | East US | |
| VpnGw1 | VPN gateway | East US | |
| VpnGw2 | VPN gateway | East US | |
| VpnGw1AZ | VPN gateway | East US | |
| VpnGw2AZ | VPN gateway | East US | |

You plan to deploy a Site-to-Site (S2S) VPN between the on-premises network and VNet1. You need to recommend an Azure VPN Gateway SKU that meets the following requirements:

• Supports 1-Gbps throughput
• Minimizes costs

What should you recommend?

A. VpnGw1
B. VpnGw2
C. VpnGw1AZ
D. VpnGw2AZ

**
** B. VpnGw2

The discussion indicates that VpnGw2 is the correct answer because it supports 1-Gbps throughput, fulfilling the first requirement. While specific pricing details are not included in the provided text, the consensus is that VpnGw2 is the most cost-effective option among those that meet the throughput requirement; users cite the Microsoft Azure pricing pages to support this selection.

**Why other options are incorrect:**

* **A. VpnGw1:** Supports only 650 Mbps of throughput, failing the 1-Gbps requirement.
* **C. VpnGw1AZ and D. VpnGw2AZ:** The discussion does not detail these SKUs' throughput or pricing compared with VpnGw2. Without evidence that they meet the throughput requirement at lower cost, they are not optimal selections.

**Note:** The discussion shows unanimous agreement on the selection of VpnGw2 as the correct answer.
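The selection logic ("filter by throughput, then pick the cheapest") can be sketched directly. The throughput figures match the answer above; the relative price ranks are an assumption for illustration (AZ SKUs priced above their non-AZ counterparts), not values from the source.

```python
# SKU data: throughput in Gbps (from the answer) and an assumed price rank
# (1 = cheapest). Price ranks are illustrative assumptions only.
skus = {
    "VpnGw1":   {"gbps": 0.65, "price_rank": 1},
    "VpnGw2":   {"gbps": 1.0,  "price_rank": 2},
    "VpnGw1AZ": {"gbps": 0.65, "price_rank": 3},
    "VpnGw2AZ": {"gbps": 1.0,  "price_rank": 4},
}

# Keep only SKUs meeting the 1-Gbps requirement, then take the cheapest.
candidates = {name: s for name, s in skus.items() if s["gbps"] >= 1.0}
best = min(candidates, key=lambda n: candidates[n]["price_rank"])
print(best)  # VpnGw2
```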
118
[View Question](https://www.examtopics.com/discussions/databricks/view/147144-exam-az-500-topic-7-question-13-discussion/)

You have an Azure subscription that uses Microsoft Defender for Cloud. The subscription contains an instance of Azure Database for PostgreSQL. You need to ensure that an email alert is triggered when a suspected brute force attack on the database is detected. The solution must minimize administrative effort. What should you configure?

A. the Azure Monitor activity log
B. an Azure Monitor alert rule
C. Microsoft Defender for open-source relational databases
D. the PostgreSQL Audit extension (pgAudit)
C. Microsoft Defender for open-source relational databases

Microsoft Defender for open-source relational databases is designed to detect and alert on anomalous activities, including brute-force attacks, on databases such as PostgreSQL. This directly meets the question's requirement to trigger email alerts upon detection of a brute-force attack with minimal administrative overhead. The other options are not the optimal or most efficient solutions for this specific scenario.

Why other options are incorrect:

* **A. The Azure Monitor activity log:** The activity log records events but does not automatically trigger email alerts for specific security threats such as brute-force attacks. Additional configuration (creating an alert rule, as in option B) would be necessary.
* **B. An Azure Monitor alert rule:** An alert rule *could* be configured to trigger on specific events from the activity log *if* those events are properly logged. However, this requires defining specific criteria and configuring the alert, adding more administrative effort than using a dedicated security solution like option C.
* **D. The PostgreSQL Audit extension (pgAudit):** pgAudit logs database activity. Like option A, it requires additional configuration to forward the logs to a system that can process them and trigger email alerts, adding administrative overhead.

Note: The provided discussion shows a consensus that option C is the correct answer.
119
[View Question](https://www.examtopics.com/discussions/databricks/view/147158-exam-az-500-topic-3-question-69-discussion/)

You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the subnets shown in the following table.

![Image](https://img.examtopics.com/az-500/image777.png)

You create the virtual machines shown in the following table.

![Image](https://img.examtopics.com/az-500/image778.png)

You plan to configure just-in-time (JIT) VM access for the virtual machines. The solution must minimize administrative effort. For which virtual machines can you configure JIT VM access?

A. VM1 only
B. VM1 and VM2 only
C. VM1 and VM3 only
D. VM1, VM2, and VM3 only
E. VM1, VM2, VM3, and VM4
D. VM1, VM2, and VM3 only

JIT VM access requires a network security group (NSG) associated with either the VM's subnet or its NIC to control inbound traffic. The JIT mechanism temporarily modifies the NSG rules to allow access within a specified time window. Based on the provided tables:

* **VM1 and VM2** are in Subnet1 and Subnet2 respectively, both of which have associated NSGs, so JIT can be enabled for them.
* **VM3** has an NSG directly associated with its NIC, so JIT can be enabled.
* **VM4** is in Subnet3, which lacks an associated NSG, so JIT cannot be configured for it.

Therefore, only VM1, VM2, and VM3 meet the prerequisites for JIT VM access.

WHY OTHER OPTIONS ARE INCORRECT:

* **A, B, C:** These options are incomplete and do not include all VMs that meet the JIT prerequisites.
* **E:** This option incorrectly includes VM4, which has no NSG associated with either its subnet or its NIC.

NOTE: The discussion shows a consensus among users that option D is correct. No conflicting opinions are presented.
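The prerequisite check above reduces to a simple filter: a VM is JIT-capable if an NSG is attached to its subnet or its NIC. The per-VM facts below mirror the answer's reading of the question's tables.

```python
# Per-VM NSG attachment facts, as stated in the answer above.
vms = {
    "VM1": {"subnet_nsg": True,  "nic_nsg": False},
    "VM2": {"subnet_nsg": True,  "nic_nsg": False},
    "VM3": {"subnet_nsg": False, "nic_nsg": True},
    "VM4": {"subnet_nsg": False, "nic_nsg": False},
}

# JIT can be configured only where some NSG controls the VM's inbound traffic.
jit_capable = sorted(name for name, v in vms.items()
                     if v["subnet_nsg"] or v["nic_nsg"])
print(jit_capable)  # ['VM1', 'VM2', 'VM3']
```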
120
[View Question](https://www.examtopics.com/discussions/databricks/view/147172-exam-az-500-topic-3-question-68-discussion/) You are testing an Azure Kubernetes Service (AKS) cluster. The cluster is configured as shown in the exhibit. (The exhibit is an image and no text from the image is provided in the prompt.) You plan to deploy the cluster to production. You disable HTTP application routing. You need to implement application routing that will provide reverse proxy and TLS termination for AKS services by using a single IP address. What should you do? A. Create an AKS Ingress controller. B. Create an Azure Standard Load Balancer. C. Install the container network interface (CNI) plug-in. D. Create an Azure Basic Load Balancer.
A. Create an AKS Ingress controller.

An AKS ingress controller manages external access to Kubernetes services, typically over HTTP/HTTPS. It provides reverse proxy and TLS termination, routing traffic based on URLs while handling requests through a single public IP address. This directly addresses the requirement for a single IP address, reverse proxy, and TLS termination.

Why other options are incorrect:

* **B. Create an Azure Standard Load Balancer:** While a load balancer can distribute traffic, it doesn't inherently provide the reverse proxy and TLS termination features needed. It is a foundational networking component, not an application-routing solution in this context.
* **C. Install the container network interface (CNI) plug-in:** A CNI plug-in handles network configuration *within* the Kubernetes cluster, not external access and routing, so it is unrelated to the problem described.
* **D. Create an Azure Basic Load Balancer:** Like the Standard Load Balancer, a Basic Load Balancer lacks the reverse proxy and TLS termination capabilities required for this scenario.

Note: The provided discussion only supports option A. No alternative or conflicting viewpoints were presented.
121
[View Question](https://www.examtopics.com/discussions/databricks/view/147276-exam-az-500-topic-4-question-124-discussion/) You have an Azure subscription that uses Microsoft Defender for Cloud. You have an Amazon Web Services (AWS) account. You need to add the AWS account to Defender for Cloud. What should you do first? A. From Defender for Cloud, configure the Environment settings. B. From the AWS account, enable a security hub. C. From Defender for Cloud, configure the Security solutions settings. D. From the Azure portal, add the AWS enterprise application.
A. From Defender for Cloud, configure the Environment settings.

To add an AWS account to Microsoft Defender for Cloud, you must first configure the environment settings within Defender for Cloud. This lets you onboard the AWS account by adding the environment and setting up the connectors needed for monitoring and security management. The Microsoft Learn documentation cited in the discussion supports this: `https://learn.microsoft.com/en-us/azure/defender-for-cloud/quickstart-onboard-aws`.

Why other options are incorrect:

* **B. From the AWS account, enable a security hub:** AWS Security Hub is a relevant security service, but enabling it in the AWS account is not the *first* step in adding the account to Defender for Cloud; the integration starts in the Azure portal within Defender for Cloud.
* **C. From Defender for Cloud, configure the Security solutions settings:** Security solutions settings are configured *after* the AWS environment is added; the initial step is adding the environment itself.
* **D. From the Azure portal, add the AWS enterprise application:** This is unrelated to onboarding an AWS account for security monitoring in Defender for Cloud; it might apply to other Azure application integrations, but not this scenario.

Note: The discussion shows unanimous agreement on answer A.
122
[View Question](https://www.examtopics.com/discussions/databricks/view/147315-exam-az-500-topic-2-question-121-discussion/) You have a Microsoft Entra tenant that uses Microsoft Entra Permissions Management and contains the accounts shown in the following table: ![Image](https://img.examtopics.com/az-500/image791.png) Which accounts will be listed as assigned to highly privileged roles on the Azure AD insights tab in the Entra Permissions Management portal? A. Admin1 only B. Admin2 and Admin3 only C. Admin2 and Admin4 only D. Admin1, Admin2, and Admin3 only E. Admin2, Admin3, and Admin4 only F. Admin1, Admin2, Admin3, and Admin4
D. Admin1, Admin2, and Admin3 only

**Explanation:** The question asks which accounts are assigned to *highly* privileged roles. Based on the provided information and the discussion, the following roles are considered highly privileged:

* **Admin1 (Global Administrator):** This role has full control over the tenant, making it the highest privilege level.
* **Admin2 (Privileged Role Administrator):** This role manages role assignments and elevated access for other administrators, inherently dealing with high-privilege accounts.
* **Admin3 (Privileged Authentication Administrator):** This role manages authentication policies and security configurations, another high-privilege area.

Admin4 (User Administrator) is not considered highly privileged in this context because the role focuses on managing user accounts rather than tenant-wide security or administrative access. Therefore, only Admin1, Admin2, and Admin3 appear on the Azure AD insights tab as accounts with highly privileged roles.

**Why other options are incorrect:** Options A, B, C, E, and F either exclude one of the highly privileged accounts (Admin1, Admin2, or Admin3) or include Admin4, which is not classified as highly privileged based on the provided information and context.

**Note:** The discussion implies a minor point of disagreement. The original post leans toward answer D without explicitly defining every highly privileged role, so there is no clear consensus on what "highly privileged" covers in this specific context. Based on the common understanding of Azure AD roles, however, the answer aligns with established definitions.
123
[View Question](https://www.examtopics.com/discussions/databricks/view/147317-exam-az-500-topic-2-question-123-discussion/) You have a Microsoft Entra tenant that contains three users named User1, User2, and User3. You configure Microsoft Entra Password Protection as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image797.png) The users perform the following tasks: • User1 attempts to reset her password to C0nt0s0. • User2 attempts to reset her password to F@brikamHQ. • User3 attempts to reset her password to Pr0duct123. Which password reset attempts fail? A. User1 only B. User2 only C. User3 only D. User1 and User 3 only E. User1, User2, and User3
E. User1, User2, and User3

All three password reset attempts will fail. The referenced image (not included in this text) shows a banned password list that includes "Contoso", "Fabrikam", and "Product". User1's attempted password (C0nt0s0) contains "Contoso" with numbers replacing letters, User2's (F@brikamHQ) contains "Fabrikam" with a symbol replacing a letter, and User3's (Pr0duct123) contains "Product" with a number replacing a letter. According to the discussion, Microsoft Entra Password Protection normalizes these common character substitutions, so each attempt is flagged as containing a banned password. The scoring system mentioned by AlPers reinforces this: even with the substitutions, each password scores below the minimum required points.

Options A, B, C, and D are incorrect because they do not account for all three attempts failing due to the banned words each one contains. The discussion shows a consensus among users that all three attempts would fail.
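Microsoft documents the evaluation roughly as: lowercase the password, apply common character substitutions (such as 0→o and @→a), award one point per banned-password match and one point per remaining character, and reject anything scoring under five points. A simplified sketch of that scoring, assuming only the substitutions shown and the custom banned list from this question:

```python
# Simplified sketch of Entra Password Protection scoring.
# Assumptions: only these substitutions and banned words are modeled;
# the real service also applies fuzzy matching and a global banned list.
SUBSTITUTIONS = {"0": "o", "1": "l", "$": "s", "@": "a"}
BANNED = ["contoso", "fabrikam", "product"]

def normalize(password: str) -> str:
    """Lowercase and undo common leetspeak substitutions."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in password.lower())

def score(password: str) -> int:
    """One point per banned-word match, one point per leftover character."""
    pw = normalize(password)
    points = 0
    for word in BANNED:
        while word in pw:
            pw = pw.replace(word, "", 1)
            points += 1          # each banned match = 1 point
    points += len(pw)            # each remaining character = 1 point
    return points

# A password is rejected when it scores fewer than 5 points.
for attempt in ["C0nt0s0", "F@brikamHQ", "Pr0duct123"]:
    verdict = "rejected" if score(attempt) < 5 else "accepted"
    print(attempt, score(attempt), verdict)
```

Under this model all three attempts score below five points, matching answer E.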
124
[View Question](https://www.examtopics.com/discussions/databricks/view/147352-exam-az-500-topic-5-question-75-discussion/) You have a Microsoft Entra tenant named contoso.com. You collaborate with a partner organization that has a Microsoft Entra tenant named fabrikam.com. You need to create an allow list of cloud apps from fabrikam.com that can be used by the users in contoso.com. What should you do for contoso.com in the Microsoft Entra admin center? A. From Inbound access settings in Cross-tenant access settings, configure the B2B direct connect settings. B. From External collaboration settings, configure the Collaboration restrictions settings. C. From External collaboration settings, configure the Guest invite settings. D. From Outbound access settings in Cross-tenant access settings, configure the B2B collaboration settings.
D. From Outbound access settings in Cross-tenant access settings, configure the B2B collaboration settings.

**Explanation:** Contoso.com (the home tenant) needs its users to access resources in fabrikam.com (the resource tenant). Outbound access settings control which external apps internal accounts (from contoso.com) can access. Configuring B2B collaboration settings under outbound access therefore allows you to create an allow list of specific apps in fabrikam.com.

**Why other options are incorrect:**

* **A:** Inbound settings control access to *contoso.com's* apps by external users from fabrikam.com, not the other way around.
* **B & C:** Collaboration restrictions and guest invite settings manage broader collaboration policies, not access to a curated list of apps from a partner tenant.

**Note:** There is disagreement in the provided discussion: some users suggest option A, while others support option D. The explanation above reflects the suggested answer and a logical interpretation of the scenario.
125
[View Question](https://www.examtopics.com/discussions/databricks/view/147365-exam-az-500-topic-2-question-120-discussion/) You have a Microsoft Entra tenant that contains the groups shown in the following table. ![Image](https://img.examtopics.com/az-500/image773.png) From the Azure portal, you configure a group expiration policy that has a lifetime of 180 days. Which groups will be deleted after 180 days of inactivity, and what is the maximum amount of time you have to restore a deleted group? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image774.png)
Only Group 3 will be deleted after 180 days of inactivity, and deleted groups can be restored within 30 days.

Explanation: A group expiration policy with a 180-day lifetime is configured. Based on the provided image (image773.png), only Group 3 is a Microsoft 365 group, and group expiration policies apply only to Microsoft 365 groups in Microsoft Entra ID. Therefore, only Group 3 is subject to deletion after 180 days of inactivity. The Microsoft documentation cited in the discussion confirms that deleted Microsoft 365 groups can be restored within 30 days.

Why other options are incorrect: Group 1 and Group 2 are not Microsoft 365 groups and therefore are not affected by the group expiration policy. Any restoration timeframe other than 30 days is incorrect based on the provided information and referenced Microsoft documentation.

Note: This answer reflects the consensus from the discussion. No explicit disagreement is presented; the discussion simply focuses on the correct answer without explicitly rejecting other possibilities.
126
[View Question](https://www.examtopics.com/discussions/databricks/view/147366-exam-az-500-topic-2-question-122-discussion/) You have a Microsoft Entra tenant that contains the users shown in the following table.

| User | Group1 | Group2 |
| ----------- | ----------- | ----------- |
| User1 | Yes | No |
| User2 | Yes | Yes |
| User3 | No | Yes |

You configure a Conditional Access policy that has the following settings:

• Name: CAPolicy1
• Assignments
 o Users or workload identities: Group1
 o Target resources: All cloud apps
• Access controls
 o Grant access: Require multifactor authentication

From Microsoft Authenticator settings for the tenant, the Enable and Target settings are configured as shown in the following image: *(Image shows Microsoft Authenticator settings with Group1 and Group2 enabled for passwordless authentication. Exact details not available without image.)*

From Microsoft Authenticator settings for the tenant, the Configure settings are configured as shown in the following image: *(Image shows Microsoft Authenticator settings with number matching enforced for Group2. Exact details not available without image.)*

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

| User | Required to use number matching |
| ----------- | ----------- |
| User1 | |
| User2 | |
| User3 | |
No, Yes, Yes

**Explanation:**

* **User1:** A member of Group1, which is subject to Conditional Access policy CAPolicy1 requiring MFA, but *not* a member of Group2, where number matching is enforced. Therefore, number matching is not required for User1.
* **User2:** A member of both Group1 (subject to CAPolicy1 and MFA) and Group2 (where number matching is enforced). Therefore, number matching is required for User2.
* **User3:** Not a member of Group1, so CAPolicy1 does not apply. However, User3 is a member of Group2, where number matching is enabled in the Microsoft Authenticator settings, so any MFA prompt User3 receives will enforce number matching.

**Why other options are incorrect:** The discussion shows disagreement on User3, with some suggesting "No" because CAPolicy1 doesn't apply. However, the Microsoft Authenticator settings enforce number matching for Group2 independently of any Conditional Access policy. The answer reflects the correct interpretation considering both the Conditional Access policy and the Microsoft Authenticator settings.
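The key point is that the MFA requirement and the number-matching requirement come from two independent settings. A hypothetical sketch of that evaluation (the function names are ours, and only the two settings described in the question are modeled):

```python
# Illustrative only: models the two independent controls in the question.
def requires_mfa(groups: set) -> bool:
    # CAPolicy1 targets Group1 and requires multifactor authentication.
    return "Group1" in groups

def requires_number_matching(groups: set) -> bool:
    # Number matching is enforced for Group2 in the Authenticator settings,
    # independent of any Conditional Access policy.
    return "Group2" in groups

users = {
    "User1": {"Group1"},
    "User2": {"Group1", "Group2"},
    "User3": {"Group2"},
}
for name, groups in users.items():
    print(name, "number matching:", requires_number_matching(groups))
```

Running this yields No for User1 and Yes for User2 and User3, matching the answer.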
127
[View Question](https://www.examtopics.com/discussions/databricks/view/147538-exam-az-500-topic-5-question-76-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table.

| Virtual Network | Subnet |
|---|---|
| VNet1 | Subnet1 |
| VNet1 | Subnet2 |
| VNet1 | Subnet3 |

NSG1 rules restrict access to the internet from Subnet3. The subscription contains the function apps shown in the following table.

| Function App | Virtual Network |
|---|---|
| App1 | VNet1 |
| App2 | VNet1 |

Virtual network integration has the default settings. You need to configure network access for App1 and App2 to meet the following requirements:

* Deny inbound access to App1 from Subnet1 and allow inbound access from Subnet2.
* Deny outbound access from App2 to the internet.

What should you do for each requirement?
To meet the requirements, you should perform the following actions:

* **For App1:** Configure IP restrictions in the App1 settings. Allow inbound access only from the IP addresses associated with Subnet2 and deny inbound access from the IP addresses associated with Subnet1. This leverages Azure App Service's built-in IP restriction capabilities.
* **For App2:** In the App2 portal, navigate to Networking > Virtual network integration and uncheck the "Outbound internet traffic" setting. This disables outbound internet access for App2.

The provided suggested answer confirms this approach.

**Why other options are incorrect:** The question asks for actions within the Azure environment using its built-in networking features; solutions involving third-party tools or complex network configurations are not applicable to this scenario. The discussion links Microsoft documentation supporting both IP restrictions and disabling outbound internet traffic via the App Service portal, and there is no disagreement among users on the correct approach.
128
[View Question](https://www.examtopics.com/discussions/databricks/view/147621-exam-az-500-topic-6-question-27-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image779.png) You plan to use service endpoints and service endpoint policies. Which resources can be accessed by using a service endpoint, and which resources support service endpoint policies? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image780.png)
Based on the provided information and the suggested answer image (image781.png), Azure Storage and Azure App Service can be accessed by using a service endpoint, but only Azure Storage supports service endpoint policies. The discussion confirms this answer, with multiple users agreeing with the suggested answer. `golitech` further clarifies that only a few Azure services support service endpoint policies, primarily Azure Storage services (Blob, Table, Queue, and File storage) and Azure Key Vault. No disagreement is expressed in the discussion on these points. Because this is a hot-area question rather than multiple choice, there are no other lettered options to rule out; the task is simply to identify which resources from the given table support service endpoints and service endpoint policies.
129
[View Question](https://www.examtopics.com/discussions/databricks/view/147653-exam-az-500-topic-4-question-125-discussion/) You have an Azure subscription that contains an Azure key vault. You create a storage account named storage1. You plan to store data in the following storage1 services: • Azure Files • Azure Blob storage • Azure Table storage • Azure Queue storage For which two services can you configure data encryption by using the keys stored in the key vault? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Blob storage B. Table storage C. Queue storage D. Azure Files
A and D (Blob storage and Azure Files)

Azure allows you to configure customer-managed keys (CMKs) for both Blob storage and Azure Files by using keys stored in an Azure key vault. This lets you manage your encryption keys independently, enhancing security. The documentation referenced in the discussion supports this, although a user comment points out that current Azure Storage account creation lets you select either "blobs and files" or "all services" for CMK protection, a setting that cannot be changed after creation. This suggests some evolution in Azure's feature implementation.

Why other options are incorrect:

* **B. Table storage and C. Queue storage:** While Azure offers encryption for Table storage and Queue storage, the question specifically asks about using keys from a *key vault* for encryption. Although Microsoft-managed keys are possible for these services, the question restricts the answer to customer-managed keys from a key vault, so these options are incorrect within the constraints of the question.
130
[View Question](https://www.examtopics.com/discussions/databricks/view/147780-exam-az-500-topic-3-question-70-discussion/) You have an Azure subscription. You plan to deploy the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image802.png) *(Image shows a table of VMs with OS, size, etc.)* You need to identify the virtual machines and operating systems that can be deployed as confidential virtual machines. Which Windows virtual machines and which Linux virtual machines should you identify? ![Image](https://img.examtopics.com/az-500/image803.png) *(Image shows a table with VM numbers: VM1, VM2, VM3, VM4, VM5, VM6)*
The correct answer is Windows VMs: VM1 and VM3; Linux VMs: VM5 and VM6.

Azure confidential VMs require specific VM sizes and operating systems. The discussion highlights that VM sizes with a "C" in their designation (e.g., the DCsv5 and ECsv5 families) support confidential computing, and only certain Windows and Linux distributions are compatible. Based on user comments and the provided links, VM1 and VM3 (Windows) and VM5 and VM6 (Linux) meet these criteria. The supported operating systems are not fully detailed in the provided text, but the consensus in the discussion points to this answer.

**Why other options are incorrect:** The discussion does not explicitly list every VM and its OS, but through process of elimination based on the "C" size designation and acceptable operating systems, the other options are ruled out. VM2 is excluded because its Windows Server 2022 Standard edition is unsupported. Any remaining VMs not listed here are likely excluded for not meeting the size and/or OS requirements. There is some disagreement over the exact criteria, but the final answer remains consistent.
131
[View Question](https://www.examtopics.com/discussions/databricks/view/147815-exam-az-500-topic-4-question-37-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image805.png) You plan to implement Microsoft Defender for Cloud. Which resources can be protected by using Defender for Cloud? A. VM1 only B. VM1 and storage1 only C. Vault1 and storage1 only D. VM1, Vault1, and storage1 only E. VNet1, VM1, Vault1, and storage1
E. VNet1, VM1, Vault1, and storage1

Microsoft Defender for Cloud can protect a wide range of Azure resources, including virtual networks, virtual machines, storage accounts, and key vaults. The provided image shows a VNet (VNet1), a VM (VM1), a key vault (Vault1), and a storage account (storage1), all of which Defender for Cloud can protect.

Why other options are incorrect:

* **A, B, C, and D:** Each excludes at least one resource shown in the image that Defender for Cloud can protect. Defender for Cloud's protection extends beyond just VMs and storage accounts.

Note: The discussion section shows unanimous agreement on answer E.
132
[View Question](https://www.examtopics.com/discussions/databricks/view/147861-exam-az-500-topic-7-question-14-discussion/) HOTSPOT Your company uses cloud-based resources from the following platforms: • Azure • Amazon Web Services (AWS) • Google Cloud Platform (GCP) You plan to implement Microsoft Defender for Cloud. On which platforms can you use Defender for Cloud to protect containers and storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image789.png)
For containers: Azure, AWS, and GCP. For storage: Azure, AWS, and GCP. Explanation: While Microsoft Defender for Storage *primarily* focuses on Azure, the discussion highlights that Microsoft Defender for *Cloud* (which includes Defender for Storage) extends its multi-cloud capabilities to AWS and GCP for both containers and storage. The provided links to Microsoft documentation support this broader functionality of Defender for Cloud. Note: There is some disagreement in the discussion regarding the storage aspect. One user claims storage protection is limited to Azure, while another correctly points out that Defender for Cloud's reach extends to AWS and GCP as well. The answer provided reflects the more accurate and comprehensive understanding based on the later user's input and referenced Microsoft documentation.
133
[View Question](https://www.examtopics.com/discussions/databricks/view/147892-exam-az-500-topic-5-question-74-discussion/) You have an Azure subscription. The subscription contains a virtual network named VNet1 that contains the subnets shown in the following table. ![Image](https://img.examtopics.com/az-500/image807.png) The subscription contains the function apps shown in the following table. ![Image](https://img.examtopics.com/az-500/image808.png) The outbound traffic of which app is controlled by using NSG1? A. App4 only B. App3 and App4 only C. App2, App3, and App4 only D. App1, App2, App3, and App4
C. App2, App3, and App4 only

**Explanation:** Based on the provided images, NSG1 is associated with Subnet2, and App2, App3, and App4 are all deployed to Subnet2, so their outbound traffic is controlled by NSG1. App1 is deployed to Subnet1, so its traffic is not controlled by NSG1.

**Why other options are incorrect:**

* **A. App4 only:** App2 and App3 are also in Subnet2, which is associated with NSG1.
* **B. App3 and App4 only:** App2 is also in Subnet2.
* **D. App1, App2, App3, and App4:** App1 resides in Subnet1 and is not controlled by NSG1.

**Note:** The discussion shows disagreement on the correct answer, with some users initially selecting option D. However, based on the provided network configuration diagrams, option C accurately reflects the subnet and NSG associations.
134
[View Question](https://www.examtopics.com/discussions/databricks/view/147987-exam-az-500-topic-5-question-73-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table.

| VM Name | OS | Application Control |
|---|---|---|
| VM1 | Windows Server 2019 | None |
| VM2 | Windows Server 2022 | None |
| VM3 | Windows Server Core | None |
| VM4 | Windows Server 2019 | AppLocker |

You are configuring Microsoft Defender for Servers. You plan to enable adaptive application controls to create an allowlist of known-safe apps on the virtual machines. Which virtual machines support the use of adaptive application controls?

A. VM1 and VM2 only
B. VM2 and VM4 only
C. VM2 and VM3 only
D. VM1, VM2, VM3, and VM4
A. VM1 and VM2 only.

Adaptive application controls in Microsoft Defender for Servers create allowlists of known-safe applications. However, the feature does *not* support Windows Server Core installations (like VM3), and it is incompatible with pre-existing AppLocker policies (like on VM4), because both perform similar application allowlisting and would conflict. Therefore, only VM1 (Windows Server 2019) and VM2 (Windows Server 2022), which have neither limitation, support adaptive application controls.

Why other options are incorrect:

* **B. VM2 and VM4 only:** VM4 has an AppLocker policy, which conflicts with adaptive application controls.
* **C. VM2 and VM3 only:** VM3 is a Windows Server Core installation, which is not supported.
* **D. VM1, VM2, VM3, and VM4:** This option includes VM3 (Windows Server Core) and VM4 (AppLocker enabled), both of which are incompatible with adaptive application controls.

Note: The discussion shows unanimous agreement on the correct answer.
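The two constraints can be modeled as a simple predicate. An illustrative sketch, assuming only these two rules matter (the real feature checks additional prerequisites such as agent and plan requirements):

```python
# Illustrative only: models the two support rules discussed above.
def supports_adaptive_app_controls(os_name: str, has_applocker: bool) -> bool:
    if "Core" in os_name:   # Windows Server Core installations are not supported
        return False
    if has_applocker:       # a pre-existing AppLocker policy conflicts
        return False
    return True

vms = {
    "VM1": ("Windows Server 2019", False),
    "VM2": ("Windows Server 2022", False),
    "VM3": ("Windows Server Core", False),
    "VM4": ("Windows Server 2019", True),
}
supported = [name for name, (os_name, applocker) in vms.items()
             if supports_adaptive_app_controls(os_name, applocker)]
print(supported)  # ['VM1', 'VM2']
```

The predicate rules out VM3 on the OS check and VM4 on the AppLocker check, matching answer A.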
135
[View Question](https://www.examtopics.com/discussions/databricks/view/148279-exam-az-500-topic-6-question-28-discussion/) You have an Azure App Service web app named App1. Subnet 2 contains a virtual machine named VM1. The provided images show App1's configuration within a virtual network. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. (Note: The images are not provided here, but the question refers to them showing App1's configuration and the existence of Subnet 2 containing VM1).
The provided text does not include the specific content of the drop-down menus or the statements to be completed, so a complete, specific answer cannot be given. Based on the discussion, the suggested answer (image784.png, unavailable) likely involves selecting options related to network security groups (NSGs) and gateway-required VNet integration: an NSG would control traffic to and from App1, and gateway-required VNet integration is relevant if App1 needs to communicate with VM1 in a different subnet (or VNet).

**Why other options are incorrect:** Without the multiple-choice options from the original question, it is impossible to explain why other options would be incorrect; the missing images are crucial to answering this.

**Note:** This answer acknowledges the lack of access to the visual information (images 782, 783, and 784) and the answer choices, and relies on the provided discussion for a general understanding of the likely correct approach.
136
[View Question](https://www.examtopics.com/discussions/databricks/view/148510-exam-az-500-topic-6-question-31-discussion/) You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains the subnets shown in the following table. ![Image](https://img.examtopics.com/az-500/image813.png) The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image814.png) VM3 contains a service that listens for connections on port 8080. For VM1, you configure just-in-time (JIT) VM access as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image815.png) For each of the following statement, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image816.png)
The correct answers are No, No, No. There is some disagreement in the discussion, but the prevailing answer follows from the provided information and constraints.

1. **Can you establish an RDP connection from VM1 to VM3? No.** VM3 only exposes port 8080, not RDP (port 3389). JIT access is configured on VM1, but that is irrelevant because VM3 doesn't have RDP enabled.
2. **Can you establish an RDP connection from VM2 to VM1? No.** VM2's IP address (172.16.0.5) is not within the allowed source IP ranges for JIT access configured on VM1 (10.10.0.0/24 and 192.168.10.0/24). While the NSG rules aren't explicitly detailed, the lack of explicit permission combined with the JIT configuration strongly suggests the connection would be blocked.
3. **Can you establish an RDP connection from VM3 to VM1? No.** Although VM3's IP address (10.10.1.5) falls within the allowed JIT source range, a JIT request is still required to open the connection. An allowed source address alone is not sufficient; an explicit JIT access request must be made.

WHY OTHER OPTIONS ARE INCORRECT: The discussion presents the reasoning for why a "Yes" answer to each statement fails, and the suggested answer supports this conclusion. One user, Hot_156, suggests "Yes" for the first statement but fails to consider that RDP is not enabled on VM3, which highlights the importance of carefully considering all aspects of the problem.
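The three failure modes (no RDP listener, source outside the allowed ranges, no approved JIT request) can be illustrated with a small helper using the source ranges quoted above. The `rdp_allowed` function is our own sketch, not an Azure API, and the sample IPs are generic:

```python
# Illustrative only: models the three independent conditions a JIT-protected
# RDP connection must satisfy.
from ipaddress import ip_address, ip_network

# Allowed source ranges from the JIT configuration in the exhibit.
JIT_ALLOWED_SOURCES = [ip_network("10.10.0.0/24"), ip_network("192.168.10.0/24")]

def rdp_allowed(source_ip: str, target_has_rdp: bool,
                jit_request_approved: bool) -> bool:
    if not target_has_rdp:                      # port 3389 must be listening
        return False
    in_range = any(ip_address(source_ip) in net for net in JIT_ALLOWED_SOURCES)
    return in_range and jit_request_approved    # approved JIT request required

# Target has no RDP enabled at all:
print(rdp_allowed("10.10.0.4", target_has_rdp=False, jit_request_approved=True))   # False
# Source address outside the allowed ranges:
print(rdp_allowed("172.16.0.5", target_has_rdp=True, jit_request_approved=True))   # False
# In range, but no JIT access request has been approved yet:
print(rdp_allowed("10.10.0.5", target_has_rdp=True, jit_request_approved=False))   # False
```

Each call fails a different one of the three checks, mirroring the three "No" answers.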
137
[View Question](https://www.examtopics.com/discussions/databricks/view/148758-exam-az-500-topic-2-question-52-discussion/) You have a Microsoft Entra tenant named contoso.com. You collaborate with a partner organization that has a Microsoft Entra tenant named fabrikam.com. Fabrikam.com has multi-factor authentication (MFA) enabled for all users. Contoso.com has the Cross-tenant access settings configured as shown in the Cross-tenant access settings exhibit. Contoso.com has the External collaboration settings configured as shown in the External collaboration settings exhibit. You create a Conditional Access policy named CAPolicy1 with the following settings: Assignments: Guest or external users: B2B collaboration guest users; Target resources: Include: All cloud apps; Access controls: Grant access; Require device to be marked as compliant; Require multi-factor authentication; Enable policy: On. For each of the following statements, select Yes if the statement is true, otherwise select No. (Note: The images referenced in the original question are not included here. They show the settings for Cross-tenant access and External collaboration, and the final table requires a Yes/No answer for three statements.)

Statement 1: Guest users from fabrikam.com will be prompted for MFA.
Statement 2: Guest users from fabrikam.com will be required to register a compliant device.
Statement 3: The policy will apply to all guest users regardless of their tenant.
** Yes-No-Yes * **Statement 1: Yes.** Because Fabrikam.com users already have MFA enabled and the Conditional Access Policy (CAP) requires MFA, guest users from Fabrikam will be prompted for MFA. * **Statement 2: No.** While the CAP requires a compliant device, the External Collaboration settings likely allow guests to access resources without registering a compliant device. The question does not give enough information to conclude otherwise. This is based on the general understanding that external collaboration often relaxes device compliance requirements for ease of access. The image depicting External Collaboration settings is necessary to confirm. * **Statement 3: Yes.** The policy targets "all guest users," irrespective of their tenant. **Why other options are incorrect:** The provided discussion shows some disagreement on the second statement. The correct answer reflects a nuanced understanding of how Conditional Access policies interact with pre-existing security settings (MFA on Fabrikam.com) and common practices for external collaboration (often less strict device compliance). The absence of the image makes a definitive answer about statement 2 dependent on assumptions about the external collaboration settings. Additional information concerning the image content would be necessary to definitively confirm the answers, especially statement 2.
138
[View Question](https://www.examtopics.com/discussions/databricks/view/17774-exam-az-500-topic-5-question-21-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD). All users have computers that run Windows 10 and are hybrid Azure AD joined. You have an Azure SQL database that is configured to support Azure AD authentication. Database developers must connect to the SQL database by using Microsoft SQL Server Management Studio (SSMS) and authenticate by using their on-premises Active Directory account. You need to tell the developers which authentication method to use to connect to the SQL database from SSMS. The solution must minimize authentication prompts. Which authentication method should you instruct the developers to use? A. SQL Login B. Active Directory — Universal with MFA support C. Active Directory — Integrated D. Active Directory — Password
C. Active Directory — Integrated Explanation: Because the environment is a hybrid Azure AD setup with Windows 10 machines that are hybrid Azure AD joined and on-premises Active Directory is in use, Active Directory — Integrated authentication is the most efficient method. This method leverages Kerberos tickets from the user's already established domain login on their Windows 10 workstation to authenticate to the Azure SQL database. This minimizes prompts as it doesn't require re-authentication. Why other options are incorrect: * **A. SQL Login:** This requires separate SQL Server login credentials, adding extra management overhead and authentication steps. * **B. Active Directory — Universal with MFA support:** While this supports hybrid scenarios, it still involves an extra authentication step (MFA), contradicting the requirement to minimize authentication prompts. * **D. Active Directory — Password:** This method requires the user to enter their password, which is less secure and increases the risk of credential compromise. It also violates the goal of minimizing prompts. Note: The discussion shows unanimous agreement on option C as the correct answer.
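The same Integrated authentication mode applies outside SSMS: with the Microsoft ODBC driver, it is selected via the `Authentication=ActiveDirectoryIntegrated` connection-string keyword, with no password field at all. A minimal sketch (server and database names are hypothetical placeholders):

```python
# Build a connection string for Azure AD integrated authentication.
# Server and database names are hypothetical; the Authentication keyword
# is the documented ODBC setting that reuses the user's domain logon,
# so no credential prompt is shown.
parts = {
    "Driver": "{ODBC Driver 17 for SQL Server}",
    "Server": "tcp:contoso-sql.database.windows.net,1433",  # hypothetical
    "Database": "SalesDb",                                  # hypothetical
    "Authentication": "ActiveDirectoryIntegrated",  # no password keyword needed
    "Encrypt": "yes",
}
conn_str = ";".join(f"{k}={v}" for k, v in parts.items())
print(conn_str)
```

Note the absence of `UID`/`PWD`: the driver obtains a Kerberos ticket (or federated token) from the existing Windows session, which is exactly why this mode minimizes prompts.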
139
[View Question](https://www.examtopics.com/discussions/databricks/view/18007-exam-az-500-topic-2-question-10-discussion/) You have an Azure subscription named Sub1 that is associated with an Azure Active Directory (Azure AD) tenant named contoso.com. An administrator named Admin1 has access to the following identities: ✑ An OpenID-enabled user account ✑ A Hotmail account ✑ An account in contoso.com ✑ An account in an Azure AD tenant named fabrikam.com You plan to use Azure Account Center to transfer the ownership of Sub1 to Admin1. To which accounts can you transfer the ownership of Sub1? A. contoso.com only B. contoso.com, fabrikam.com, and Hotmail only C. contoso.com and fabrikam.com only D. contoso.com, fabrikam.com, Hotmail, and OpenID-enabled user account
A. contoso.com only Explanation: Azure subscriptions can only be transferred to accounts within the same Azure AD tenant. Since Sub1 is associated with contoso.com, only the account within contoso.com can receive ownership. The other accounts (fabrikam.com, Hotmail, and OpenID-enabled) are in different tenants and therefore cannot receive the ownership transfer. Why other options are incorrect: * **B. contoso.com, fabrikam.com, and Hotmail only:** Incorrect because the Hotmail and fabrikam.com accounts are not within the contoso.com tenant. * **C. contoso.com and fabrikam.com only:** Incorrect because the fabrikam.com account is in a different tenant. * **D. contoso.com, fabrikam.com, Hotmail, and OpenID-enabled user account:** Incorrect because the Hotmail and fabrikam.com accounts are in different tenants, and OpenID accounts are not directly managed within Azure AD in the same way. Note: While some discussion comments initially suggested option C, there is a consensus among other comments and the provided suggested answer that option A is correct based on the standard Azure subscription ownership transfer rules.
140
[View Question](https://www.examtopics.com/discussions/databricks/view/18153-exam-az-500-topic-3-question-47-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0028900001.png) The subscription contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0028900002.png) On NIC1, you configure an application security group named ASG1. On which other network interfaces can you configure ASG1? A. NIC2 only B. NIC2, NIC3, NIC4, and NIC5 C. NIC2 and NIC3 only D. NIC2, NIC3, and NIC4 only
C. NIC2 and NIC3 only Explanation: Application Security Groups (ASGs) in Azure can only be associated with network interfaces within the *same* virtual network. Examining the provided tables, NIC1, NIC2, and NIC3 are all in VNET1. Therefore, ASG1 (configured on NIC1 in VNET1) can only be added to NIC2 and NIC3. NIC4 and NIC5 are in different virtual networks (VNET2 and VNET3 respectively) and cannot be associated with ASG1. Why other options are incorrect: * **A. NIC2 only:** This is incorrect because NIC3 is also in the same virtual network as NIC1 and ASG1. * **B. NIC2, NIC3, NIC4, and NIC5:** This is incorrect because NIC4 and NIC5 reside in different virtual networks. * **D. NIC2, NIC3, and NIC4 only:** This is incorrect because NIC4 is in a different virtual network than NIC1. Note: The provided answer (C) is based on the understanding that ASGs are limited to the scope of a single virtual network. There is no conflicting information presented in the discussion or content.
141
[View Question](https://www.examtopics.com/discussions/databricks/view/18667-exam-az-500-topic-3-question-12-discussion/) You have an Azure subscription named Sub1. Sub1 contains a virtual network named VNet1 that contains one subnet named Subnet1. Subnet1 contains an Azure virtual machine named VM1 that runs Ubuntu Server 18.04. You create a service endpoint for Microsoft.Storage in Subnet1. You need to ensure that when you deploy Docker containers to VM1, the containers can access Azure Storage resources by using the service endpoint. What should you do on VM1 before you deploy the container? A. Create an application security group and a network security group (NSG). B. Edit the docker-compose.yml file. C. Install the container network interface (CNI) plug-in.
C. Install the container network interface (CNI) plug-in. The correct answer is C because a CNI plugin is required to allow the Docker containers to utilize the network configuration of the underlying virtual machine (VM). This includes leveraging the service endpoint created in Subnet1 for access to Azure Storage. Without a CNI plugin, the containers wouldn't be able to obtain IP addresses from the VNet or use the service endpoint. Why other options are incorrect: * **A. Create an application security group and a network security group (NSG):** While NSGs and ASGs are important for network security, they don't directly enable the containers to use the service endpoint. The service endpoint is already configured at the subnet level. Security groups control *access* to resources, but the CNI plugin is needed for the containers to even be *part* of the network. * **B. Edit the docker-compose.yml file:** The `docker-compose.yml` file defines the configuration of the Docker containers. While it could be modified to specify networking settings *within* the containers, this doesn't address the fundamental issue of how the containers integrate with the VM's network and service endpoint. The CNI plugin is the essential layer that enables that integration. Note: There is some disagreement in the discussion regarding the exact purpose of a CNI plugin. While some commenters correctly identify its role in network integration, others seem to oversimplify its function to IP address assignment. The primary role of a CNI plugin is to integrate the containers into the host's network and allow them to participate in it – which is crucial for accessing the service endpoint.
142
[View Question](https://www.examtopics.com/discussions/databricks/view/18804-exam-az-500-topic-4-question-39-discussion/) You have an Azure subscription named Sub1 that contains an Azure Log Analytics workspace named LAW1. You have 100 on-premises servers that run Windows Server 2012 R2 and Windows Server 2016. The servers connect to LAW1. LAW1 is configured to collect security-related performance counters from the connected servers. You need to configure alerts based on the data collected by LAW1. The solution must meet the following requirements: ✑ Alert rules must support dimensions. ✑ The time it takes to generate an alert must be minimized. ✑ Alert notifications must be generated only once when the alert is generated and once when the alert is resolved. Which signal type should you use when you create the alert rules? A. Log B. Log (Saved Query) C. Metric D. Activity Log
C. Metric Metric alerts are the best choice for this scenario because they meet all three requirements: * **Alert rules must support dimensions:** Metric alerts inherently support dimensions, allowing for granular filtering and alerting based on various attributes. * **The time it takes to generate an alert must be minimized:** Metric alerts generally have faster alert generation times compared to log-based alerts, which often involve querying large datasets. * **Alert notifications must be generated only once when the alert is generated and once when the alert is resolved:** Metric alerts are designed to trigger only once at the start of an alert condition and again when the condition is resolved. A, B, and D are incorrect because: * **A. Log:** Log alerts are suitable for analyzing log data, but they may not be as efficient for performance counter data and might not provide the required speed and dimensional support. * **B. Log (Saved Query):** Similar to option A, this uses log data and may not be as optimized for performance counters or provide the same level of efficiency in alert generation. * **D. Activity Log:** Activity logs record management operations within Azure, not performance counters from on-premises servers. Note: The provided discussion strongly supports answer C. No conflicting opinions are presented.
143
**** [View Question](https://www.examtopics.com/discussions/databricks/view/18818-exam-az-500-topic-5-question-14-discussion/) Your company has an Azure Active Directory (Azure AD) tenant named contoso.com. The company is developing an application named App1. App1 will run as a service on a server that runs Windows Server 2016. App1 will authenticate to contoso.com and access Microsoft Graph to read directory data. You need to delegate the minimum required permissions to App1. Which three actions should you perform in sequence from the Azure portal? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: (The image contains the following options: Create an app registration; Add a delegated permission; Add an application permission; Grant admin consent; Configure an application proxy) **
** The correct sequence is: 1. **Create an app registration:** This is the first step; you must register App1 in Azure AD to obtain an application ID and other necessary credentials. 2. **Add an application permission:** Because App1 runs as a service without a signed-in user, it needs application permissions to access Microsoft Graph. Delegated permissions are for apps acting on behalf of a specific user. 3. **Grant admin consent:** This step is necessary to finalize the permission grant. The admin must consent to the permissions requested by App1. **Why other options are incorrect:** * **Add a delegated permission:** Delegated permissions are used when an application acts on behalf of a signed-in user, which is not the case for App1. * **Configure an application proxy:** Application Proxy is for securing access to on-premises applications; it's not relevant to granting permissions to access Microsoft Graph. **Note:** The discussion section shows overwhelming agreement on this answer sequence.
144
[View Question](https://www.examtopics.com/discussions/databricks/view/18819-exam-az-500-topic-5-question-15-discussion/) Your company has an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. The company develops a mobile application named App1. App1 uses the OAuth 2 implicit grant type to acquire Azure AD access tokens. You need to register App1 in Azure AD. What information should you obtain from the developer to register the application? A. a redirect URI B. a reply URL C. a key D. an application ID
A. a redirect URI The OAuth 2 implicit grant flow requires a redirect URI. This is the URL where Azure AD will send the access token after successful authentication. The mobile application needs to be configured to listen on this URI to receive the token. Without it, the application cannot receive the authorization and will fail to function. Why other options are incorrect: * **B. a reply URL:** While similar in concept, "reply URL" is not the standard terminology used in OAuth 2. The correct term is "redirect URI." * **C. a key:** A key is required for other authentication flows, but not specifically for registering an app using the implicit grant flow. While an application secret *might* be used later for other purposes (though generally not recommended for public client applications like mobile apps using the implicit flow), it's not needed *at the registration stage*. * **D. an application ID:** The application ID is *generated* during the registration process, not something you obtain *from* the developer beforehand. Note: The discussion shows overwhelming agreement that the correct answer is A.
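The redirect URI's role is visible in the authorize request itself. A minimal sketch of the implicit-grant authorization URL against the Microsoft identity platform v2.0 endpoint (the client ID, redirect URI, and nonce below are hypothetical placeholders):

```python
from urllib.parse import urlencode

tenant = "contoso.com"  # tenant from the question
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # placeholder app ID
    "response_type": "token",                # implicit grant: token returned directly
    "redirect_uri": "myapp://auth/callback", # hypothetical URI obtained from the developer
    "scope": "openid profile",
    "nonce": "abc123",                       # placeholder replay-protection value
}
url = (f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
       + urlencode(params))
print(url)
```

Azure AD returns the access token in the fragment of the `redirect_uri`, and it will refuse any URI that was not registered for the app, which is why the developer must supply it before registration can be completed.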
145
[View Question](https://www.examtopics.com/discussions/databricks/view/18820-exam-az-500-topic-5-question-16-discussion/) From the Azure portal, you are configuring an Azure policy. You plan to assign policies that use the DeployIfNotExist, AuditIfNotExist, Append, and Deny effects. Which effect requires a managed identity for the assignment? A. AuditIfNotExist B. Append C. DeployIfNotExist D. Deny
C. DeployIfNotExist The correct answer is C because the `DeployIfNotExist` policy effect requires a managed identity to deploy resources. When Azure Policy evaluates a `DeployIfNotExist` policy and determines a resource needs to be created, it uses a managed identity to perform the deployment. This managed identity is automatically created by Azure Policy for each assignment. The other options (AuditIfNotExist, Append, and Deny) do not require a managed identity for their operations. Other options are incorrect because: * **A. AuditIfNotExist:** This policy effect only logs whether a resource exists or not; it doesn't require deployment or modification. * **B. Append:** This effect modifies existing resources; it doesn't require a managed identity for this modification. While it may leverage a managed identity in certain complex scenarios, it's not a requirement in the way `DeployIfNotExist` is. * **D. Deny:** This policy effect simply prevents the creation of resources that violate the policy. It does not require deploying resources and therefore does not need a managed identity. Note: While there's overwhelming consensus in the provided discussion that the answer is C, there is no explicit, official Microsoft documentation directly stating that only `DeployIfNotExist` requires a managed identity *for the assignment* in this specific context (other policy effects might use managed identities for other actions). The provided explanations rely on the generally understood behavior of these policy effects.
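The managed identity shows up in the policy *assignment* payload, not the definition. A hedged sketch of the relevant ARM structure (the assignment name, display name, and truncated definition ID are placeholders, not from the question):

```python
import json

# Sketch of a policy-assignment payload. The top-level "identity" block is
# what a DeployIfNotExist effect needs so Azure Policy can deploy resources
# on your behalf during remediation; Append/Audit/Deny assignments omit it.
assignment = {
    "name": "require-diagnostics",           # hypothetical assignment name
    "location": "eastus",                    # an identity requires a location
    "identity": {"type": "SystemAssigned"},  # managed identity for deployments
    "properties": {
        "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<id>",
        "displayName": "Deploy diagnostics if not present",  # hypothetical
    },
}
print(json.dumps(assignment, indent=2))
```

After assignment, the identity still needs role assignments (e.g., Contributor at the assignment scope) to actually perform the deployments; the portal grants these automatically when the definition declares its required roles.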
146
[View Question](https://www.examtopics.com/discussions/databricks/view/18821-exam-az-500-topic-5-question-17-discussion/) You have an Azure subscription named Sub1 that is associated with an Azure Active Directory (Azure AD) tenant named contoso.com. You plan to implement an application that will consist of the resources shown in the following table.

| Resource | Type |
|-----------|-----------------|
| CosmosDB1 | Azure Cosmos DB |
| WebApp1 | Web App |

Users will authenticate by using their Azure AD user account and access the Cosmos DB account by using resource tokens. You need to identify which tasks will be implemented in CosmosDB1 and WebApp1. Which task should you identify for each resource? NOTE: Each correct selection is worth one point.
CosmosDB1: Create database users and generate resource tokens. WebApp1: Authenticate Azure AD users and relay resource tokens. Explanation: Resource tokens provide a secure way for clients to access Cosmos DB. CosmosDB1 is responsible for creating the users within the database and generating these tokens based on the permissions granted. WebApp1 acts as an intermediary; it authenticates the user via Azure AD and then requests and relays the appropriate resource token to the client application allowing access to the Cosmos DB resources. This is a common pattern for securing access to Cosmos DB. The provided images and discussion heavily support this answer. Why other options are incorrect: The question specifically asks for tasks implemented *within* CosmosDB1 and WebApp1. There's no mention of other components or services that might be used for authentication or token management outside of this specified architecture. The discussion thread shows widespread agreement on this solution. While some comments touch on the use of a "mobile application," the question doesn't explicitly limit the application type.
147
[View Question](https://www.examtopics.com/discussions/databricks/view/18822-exam-az-500-topic-5-question-20-discussion/) You have an Azure SQL database. You implement Always Encrypted. You need to ensure that application developers can retrieve and decrypt data in the database. Which two pieces of information should you provide to the developers? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. a stored access policy B. a shared access signature (SAS) C. the column encryption key D. user credentials E. the column master key
The correct answers are C and E: the column encryption key and the column master key. Always Encrypted uses a layered approach to encryption. The column master key (CMK) protects the column encryption keys (CEKs). The application needs both keys to decrypt data. The CMK is used to decrypt the CEK, which is then used to decrypt the actual data in the column. Providing only the CEK or only the CMK would not allow decryption. Options A, B, and D are incorrect: * **A. a stored access policy:** This relates to access control, not the decryption process itself. Always Encrypted handles data encryption and decryption regardless of access policies. * **B. a shared access signature (SAS):** SAS tokens are used for authorization and access to Azure Storage resources, not for decrypting data protected by Always Encrypted. * **D. user credentials:** User credentials grant access to the database but don't provide the keys needed for decryption. Note: While the discussion shows a consensus on C and E, it does not provide a detailed justification. The explanation above is based on the standard understanding of Always Encrypted's key hierarchy.
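The two-key requirement follows from the key hierarchy itself. A toy illustration of that hierarchy, using XOR as a stand-in for real encryption (this is NOT real cryptography, only a model of why both the CMK and the wrapped CEK are needed on the client):

```python
# Toy model (NOT real crypto) of the Always Encrypted key hierarchy:
# the column master key (CMK) unwraps the column encryption key (CEK),
# and only the unwrapped CEK can decrypt the column data.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cmk = b"master-key"                # held in a trusted store, e.g. Key Vault
cek = b"column-key"                # protects the column data
wrapped_cek = xor_bytes(cek, cmk)  # what the database metadata actually holds

ciphertext = xor_bytes(b"SSN 123-45-6789", cek)  # data as stored in the column

# Client side: both pieces of key material are required to read plaintext.
unwrapped = xor_bytes(wrapped_cek, cmk)   # step 1: CMK unwraps the CEK
plaintext = xor_bytes(ciphertext, unwrapped)  # step 2: CEK decrypts the data
print(plaintext.decode())  # -> SSN 123-45-6789
```

With only the wrapped CEK (no CMK) the client cannot recover the CEK; with only the CMK there is no CEK to unwrap, which is why options C and E are both part of the solution.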
148
**** [View Question](https://www.examtopics.com/discussions/databricks/view/18829-exam-az-500-topic-2-question-5-discussion/) Your network contains an on-premises Active Directory domain named corp.contoso.com. You have an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. You sync all on-premises identities to Azure AD. You need to prevent users who have a givenName attribute that starts with TEST from being synced to Azure AD. The solution must minimize administrative effort. What should you use? A. Synchronization Rules Editor B. Web Service Configuration Tool C. the Azure AD Connect wizard D. Active Directory Users and Computers **
** A. Synchronization Rules Editor The Synchronization Rules Editor allows for the creation of custom filtering rules based on LDAP attributes like `givenName`. This enables you to prevent users with specific attribute values (in this case, `givenName` starting with "TEST") from syncing to Azure AD without needing to manually modify each user object. This directly addresses the requirement to minimize administrative effort. **Why other options are incorrect:** * **B. Web Service Configuration Tool:** This tool is not directly involved in controlling the synchronization of on-premises Active Directory to Azure AD. * **C. the Azure AD Connect wizard:** While the Azure AD Connect wizard is crucial for initial setup and configuration, it doesn't provide the granular control needed to filter users based on specific attribute values like the Synchronization Rules Editor. Using the wizard for this task would require significant manual effort. * **D. Active Directory Users and Computers:** This tool manages on-premises Active Directory users and computers. Modifying users here wouldn't prevent synchronization; the user would still be synced unless specifically excluded by a synchronization rule. **Note:** The provided discussion shows unanimous agreement on the correct answer (A).
149
**** [View Question](https://www.examtopics.com/discussions/databricks/view/18946-exam-az-500-topic-2-question-2-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy the On-premises data gateway to the on-premises network. Does this meet the goal? A. Yes B. No **
** B. No **Explanation:** Deploying the On-premises data gateway does *not* allow users to authenticate to an Azure HDInsight cluster using their on-premises Active Directory credentials. The on-premises data gateway is primarily used for connecting on-premises data sources to cloud services like Power BI, not for user authentication to HDInsight. To achieve the goal, you would need to configure Azure Active Directory Domain Services (Azure AD DS) or utilize a different authentication mechanism like Kerberos or LDAP integration, depending on the specific HDInsight cluster configuration. The discussion clearly indicates that the suggested answer (B) is correct. **Why other options are incorrect:** * **A. Yes:** This is incorrect. The on-premises data gateway is not the appropriate solution for authenticating users to an HDInsight cluster with on-premises Active Directory credentials.
150
[View Question](https://www.examtopics.com/discussions/databricks/view/18948-exam-az-500-topic-5-question-6-discussion/) You have the Azure Information Protection labels as shown in the following table. [Image of table showing Azure Information Protection labels] You have the Azure Information Protection policies as shown in the following table. [Image of table showing Azure Information Protection policies] You need to identify how Azure Information Protection will label files. What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. [Image of Hot Area]
Box 1: Label 2 only. Box 2: No Label. Azure Information Protection applies labels based on the order defined in the policy. The label with the highest position (last in the policy) is applied, even if multiple labels' conditions are met. In this specific scenario, Label 2 is positioned last and therefore is the most sensitive; thus, it will be applied. Automatic classification only works with Word, Excel, PowerPoint, and Outlook; therefore, a file saved using Microsoft Notepad will receive no label. The provided discussion confirms that the most sensitive label (the last one in the policy order) is applied. The discussion also clarifies the limitations of automatic classification, excluding Notepad. Other options are incorrect because they would not correctly reflect the behavior of Azure Information Protection's label application based on the order of labels in the policy and the limitations of automatic classification.
151
**** [View Question](https://www.examtopics.com/discussions/databricks/view/19634-exam-az-500-topic-3-question-41-discussion/) You have an Azure subscription. The subscription contains 50 virtual machines that run Windows Server 2012 R2 or Windows Server 2016. You need to deploy Microsoft Antimalware to the virtual machines. Solution: You connect to each virtual machine and add a Windows feature. Does this meet the goal? A. Yes B. No **
** B. No **Explanation:** The suggested answer is "No" because manually adding a Windows feature to each of the 50 virtual machines is not an efficient or scalable solution for deploying Microsoft Antimalware. While Microsoft Antimalware *might* be added as a Windows feature (the discussion highlights that it's often pre-installed now), this approach is highly impractical for managing a large number of VMs. A more suitable approach would involve using automation tools or centralized management systems for consistent and efficient deployment across all VMs. The discussion points out that this approach isn't appropriate for Desired State Configuration (DSC) extensions, suggesting a more automated solution is expected. **Why other options are incorrect:** * **A. Yes:** This is incorrect because manually adding the feature to each VM is inefficient and not considered a best practice for managing a large-scale deployment. It does not scale well and is prone to human error. **Note:** The discussion shows some disagreement on whether adding Antimalware as a Windows feature is even possible or relevant in modern environments, as the commenter schpeter_091 indicates it is often installed by default. However, the consensus from the discussion leans towards "No" because the method described is not a scalable or efficient deployment strategy.
152
**** [View Question](https://www.examtopics.com/discussions/databricks/view/19695-exam-az-500-topic-2-question-17-discussion/) Your network contains an Active Directory forest named contoso.com. You have an Azure Active Directory (Azure AD) tenant named contoso.com. You plan to configure synchronization by using the Express Settings installation option in Azure AD Connect. You need to identify which roles and groups are required to perform the planned configuration. The solution must use the principle of least privilege. Which two roles and groups should you identify? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. the Domain Admins group in Active Directory B. the Security administrator role in Azure AD C. the Global administrator role in Azure AD D. the User administrator role in Azure AD E. the Enterprise Admins group in Active Directory **
** C and E. To configure Azure AD Connect using Express Settings, you need: * **C. the Global administrator role in Azure AD:** This role provides the necessary permissions within Azure AD to configure the synchronization. * **E. the Enterprise Admins group in Active Directory:** This group provides the necessary permissions within your on-premises Active Directory to allow Azure AD Connect to access and synchronize your on-premises Active Directory. Options A, B, and D are incorrect because they do not provide the necessary level of access required for Azure AD Connect Express Settings. While Domain Admins have significant privileges, Enterprise Admins provide the *minimum* necessary privileges consistent with the principle of least privilege. Security and User administrators in Azure AD do not have the authority to perform the synchronization configuration. **Note:** While there is some discussion regarding the necessity of Domain Admins, the consensus and the suggested answer align with Enterprise Admins as the correct on-premises Active Directory group. The principle of least privilege supports the selection of Enterprise Admins over Domain Admins for the on-premises side.
153
[View Question](https://www.examtopics.com/discussions/databricks/view/20418-exam-az-500-topic-3-question-45-discussion/) You are configuring and securing a network environment. You deploy an Azure virtual machine named VM1 that is configured to analyze network traffic. You need to ensure that all network traffic is routed through VM1. What should you configure? A. a system route B. a network security group (NSG) C. a user-defined route
C. a user-defined route (UDR). A User Defined Route (UDR) allows you to specify custom routing rules for the virtual network. By creating a UDR that directs all traffic destined for the internet (or other networks) to VM1's IP address, all network traffic will be forced to pass through it for analysis before continuing to its intended destination. Why other options are incorrect: * **A. a system route:** System routes are managed by Azure and are generally used for routing traffic within the Azure infrastructure itself. They wouldn't provide the level of control needed to specifically route all traffic through VM1. * **B. a network security group (NSG):** NSGs control network traffic *access* to resources within a subnet, not the routing path of network traffic. While you can use NSGs to allow or deny traffic to VM1, they don't force *all* traffic to be routed through it. Note: The discussion shows overwhelming agreement on the correct answer being C. There's no significant disagreement expressed in the provided text.
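The shape of such a route is simple in the ARM schema. A hedged sketch of the route entry you would add to a route table associated with the subnet (the route name and VM1's private IP are hypothetical placeholders):

```python
import json

# Sketch of a user-defined route (ARM schema) that forces all outbound
# traffic through a network virtual appliance (VM1 in this scenario).
route = {
    "name": "force-through-vm1",  # hypothetical route name
    "properties": {
        "addressPrefix": "0.0.0.0/0",       # match all destinations
        "nextHopType": "VirtualAppliance",  # hand traffic to a VM, not Azure defaults
        "nextHopIpAddress": "10.0.0.4",     # hypothetical private IP of VM1
    },
}
print(json.dumps(route, indent=2))
```

For the traffic to actually flow, VM1's network interface must also have IP forwarding enabled; the UDR only controls where packets are sent next.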
154
[View Question](https://www.examtopics.com/discussions/databricks/view/20544-exam-az-500-topic-4-question-50-discussion/) DRAG DROP - You have an Azure subscription that contains 100 virtual machines. Azure Diagnostics is enabled on all the virtual machines. You are planning the monitoring of Azure services in the subscription. You need to retrieve the following details: ✑ Identify the user who deleted a virtual machine three weeks ago. ✑ Query the security events of a virtual machine that runs Windows Server 2016. What should you use in Azure Monitor? To answer, drag the appropriate configuration settings to the correct details. Each configuration setting may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0039500003.jpg)
1. **Identify the user who deleted a virtual machine three weeks ago:** Use **Activity Logs**. Azure Activity Logs record control-plane operations, including resource creation and deletion, answering "what, who, and when" for each operation. Because activity log data is retained for 90 days, a deletion from three weeks ago is still available. 2. **Query the security events of a virtual machine that runs Windows Server 2016:** Use **Logs**. The **Logs** option (Log Analytics) stores the Azure Diagnostics data collected from Windows VMs, including Windows security events, and provides a centralized location to query these logs. Why other options are incorrect: There are no other options presented in the question beyond Activity Logs and Logs, making these the only choices available. The question explicitly asks what to use *within* Azure Monitor for these specific tasks. Note: The discussion section overwhelmingly supports this answer, with multiple users confirming its correctness. There is no evidence of disagreement or conflicting opinions regarding the solution.
155
[View Question](https://www.examtopics.com/discussions/databricks/view/21346-exam-az-500-topic-9-question-1-discussion/) You need to ensure that User2 can implement PIM. What should you do first? A. Assign User2 the Global administrator role. B. Configure authentication methods for contoso.com. C. Configure the identity secure score for contoso.com. D. Enable multi-factor authentication (MFA) for User2.
A. Assign User2 the Global administrator role. The discussion overwhelmingly supports option A. To implement Privileged Identity Management (PIM), a user typically needs Global Administrator permissions. This is because PIM involves managing access to sensitive roles and permissions, a task typically reserved for users with the highest level of administrative privileges. The other options are not sufficient for PIM implementation, as they address different aspects of security but do not provide the necessary elevated administrative access. Why other options are incorrect: * **B. Configure authentication methods for contoso.com:** This is a necessary security practice but does not directly enable PIM. PIM is about managing *who* has elevated access, not how they authenticate. * **C. Configure the identity secure score for contoso.com:** This improves overall security posture but doesn't grant the necessary permissions for User2 to manage PIM. * **D. Enable multi-factor authentication (MFA) for User2:** While MFA is a good security practice, it doesn't provide the administrative rights required to implement PIM. MFA enhances security but does not provide the necessary administrative role. Note: While the discussion strongly favors option A, it is crucial to remember that the specifics of PIM implementation might vary depending on the organization's configuration and policies. However, based on the information provided in the discussion, option A is the most likely and widely accepted answer.
156
[View Question](https://www.examtopics.com/discussions/databricks/view/21416-exam-az-500-topic-3-question-35-discussion/) You have Azure Resource Manager templates that you use to deploy Azure virtual machines. You need to disable unused Windows features automatically as instances of the virtual machines are provisioned. What should you use? A. device compliance policies in Microsoft Intune B. Azure Automation State Configuration C. application security groups D. Azure Advisor
The correct answer is B. Azure Automation State Configuration. Azure Automation State Configuration (built on PowerShell Desired State Configuration, or DSC) allows you to manage the configuration of your Azure VMs and ensure they remain in a desired state. It can be used to automate tasks such as disabling unused Windows features as instances are provisioned. Option A, device compliance policies in Microsoft Intune, focuses on managing the security and compliance of devices, not automatically configuring Windows features during VM provisioning. Option C, application security groups, manages network security, not VM configuration. Option D, Azure Advisor, provides recommendations for optimizing your Azure environment but doesn't directly manage VM configurations. Note: The discussion overwhelmingly supports answer B. There is no evidence of significant disagreement within the provided text.
157
[View Question](https://www.examtopics.com/discussions/databricks/view/21807-exam-az-500-topic-2-question-15-discussion/) You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You create a lock on sa1. Does this meet the goal? A. Yes B. No
B. No. Creating a lock on the Azure Storage account (sa1) does not revoke access to the blob service and file service. Locks prevent accidental deletion or modification of resources, not access control. To revoke access, you must delete the stored access policies (which revokes the service SASs tied to them) and regenerate the storage account keys (which invalidates any ad hoc SAS signed with them). Why other options are incorrect: A. is incorrect because creating a lock only prevents modifications or deletions of the storage account itself, not access to its resources. The unauthorized access continues even with the lock in place. Note: The discussion shows overwhelming agreement that the correct answer is B.
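The revocation steps above could be sketched with the Azure CLI. The policy name `policy1` is a placeholder for illustration; the account and resource group names come from the question:

```shell
# Deleting a stored access policy revokes every service SAS issued against it.
az storage container policy delete --account-name sa1 --container-name container1 --name policy1

# Ad hoc SASs are signed with an account key, so rotating both keys invalidates them.
az storage account keys renew --resource-group RG1 --account-name sa1 --key key1
az storage account keys renew --resource-group RG1 --account-name sa1 --key key2
```

Note that rotating the keys also breaks any legitimate clients using those keys, so they would need the new keys (or fresh SASs) afterward.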
158
[View Question](https://www.examtopics.com/discussions/databricks/view/21847-exam-az-500-topic-3-question-48-discussion/) You have 15 Azure virtual machines in a resource group named RG1. All the virtual machines run identical applications. You need to prevent unauthorized applications and malware from running on the virtual machines. What should you do? A. Apply an Azure policy to RG1. B. From Azure Security Center, configure adaptive application controls. C. Configure Azure Active Directory (Azure AD) Identity Protection. D. Apply a resource lock to RG1.
The correct answer is **B. From Azure Security Center, configure adaptive application controls.** Adaptive Application Controls, a feature within Microsoft Defender for Cloud, is designed to prevent unauthorized applications and malware from running on virtual machines. It uses machine learning to establish a baseline of safe applications, automatically whitelists these, and alerts on any deviations. This directly addresses the problem described in the question. Why other options are incorrect: * **A. Apply an Azure policy to RG1:** Azure policies manage resource configurations, but they don't directly prevent malware or unauthorized application execution. They could be used *in conjunction* with other security measures, but alone they are insufficient. * **C. Configure Azure Active Directory (Azure AD) Identity Protection:** Azure AD Identity Protection focuses on identity and access management, not directly on preventing malware or unauthorized applications on VMs. * **D. Apply a resource lock to RG1:** Resource locks prevent unintended modifications to resources but don't address the security threat of malware or unauthorized applications. Note: The discussion shows unanimous agreement that option B is the correct answer.
159
[View Question](https://www.examtopics.com/discussions/databricks/view/21854-exam-az-500-topic-5-question-29-discussion/) You have an Azure subscription named Sub1 that contains the Azure key vaults shown in the following table: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0049900001.png) In Sub1, you create a virtual machine that has the following configurations: ✑ Name: VM1 ✑ Size: DS2v2 ✑ Resource group: RG1 ✑ Region: West Europe ✑ Operating system: Windows Server 2016 You plan to enable Azure Disk Encryption on VM1. In which key vaults can you store the encryption key for VM1? A. Vault1 or Vault3 only B. Vault1, Vault2, Vault3, or Vault4 C. Vault1 only D. Vault1 or Vault2 only
A. Vault1 or Vault3 only Explanation: Azure Disk Encryption requires the Key Vault to be in the same region as the virtual machine. VM1 is located in West Europe. Examining the provided image, only Vault1 and Vault3 are located in West Europe. Therefore, only these two key vaults can be used to store the encryption key for VM1. Why other options are incorrect: * **B. Vault1, Vault2, Vault3, or Vault4:** This is incorrect because Vault2 and Vault4 are not in the West Europe region. * **C. Vault1 only:** This is incorrect because it excludes Vault3, which is also in the West Europe region and therefore a valid option. * **D. Vault1 or Vault2 only:** This is incorrect because Vault2 is not in the West Europe region. Note: The discussion shows a consensus on answer A. There is no significant disagreement in the provided text.
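Assuming Vault1 is chosen, enabling Azure Disk Encryption might look like the following sketch. The key-encryption-key name `MyKEK` is a placeholder, not from the question:

```shell
# Enable Azure Disk Encryption on VM1 using a key vault in the same region (West Europe).
az vm encryption enable --resource-group RG1 --name VM1 \
  --disk-encryption-keyvault Vault1 \
  --key-encryption-key MyKEK \
  --volume-type All
```

The command fails if the key vault is in a different region from the VM, which is exactly the constraint the question tests.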
160
[View Question](https://www.examtopics.com/discussions/databricks/view/21857-exam-az-500-topic-13-question-4-discussion/) You need to meet the technical requirements for VNetwork1. What should you do first? A. Create a new subnet on VNetwork1. B. Remove the NSGs from Subnet11 and Subnet13. C. Associate an NSG to Subnet12. D. Configure DDoS protection for VNetwork1.
A. Create a new subnet on VNetwork1. The overwhelming consensus among users in the discussion is that creating a new subnet on VNetwork1 is the first step to meet unspecified technical requirements. The reasoning provided by several users points to the necessity of a subnet (specifically named "AzureFirewallSubnet") for Azure Firewall functionality, implying that the missing subnet is the primary obstacle. Why other options are incorrect: * **B. Remove the NSGs from Subnet11 and Subnet13:** Removing NSGs (Network Security Groups) might be a necessary step in some situations, but it's not the first logical step to address the unspecified technical requirements for an entire virtual network. Removing security measures before establishing basic infrastructure is generally unwise. * **C. Associate an NSG to Subnet12:** Similar to option B, associating an NSG is a secondary step after the fundamental network structure is in place. It's a security configuration, not a prerequisite for the network's foundational requirements. * **D. Configure DDoS protection for VNetwork1:** DDoS protection is important for security but is again a secondary consideration. Basic network infrastructure must exist before deploying advanced security features. Note: While the provided context doesn't detail the *specific* technical requirements for VNetwork1, the repeated and highly upvoted support for option A strongly suggests it's the correct approach based on the implied need for an Azure Firewall subnet. There is some discussion suggesting the context of the question was related to a specific scenario, but the overall conclusion remains that A is the best answer based on the provided information.
161
[View Question](https://www.examtopics.com/discussions/databricks/view/21861-exam-az-500-topic-5-question-35-discussion/) You have a web app named WebApp1. You create a web application firewall (WAF) policy named WAF1. You need to protect WebApp1 by using WAF1. What should you do first? A. Deploy an Azure Front Door. B. Add an extension to WebApp1. C. Deploy Azure Firewall.
A. Deploy an Azure Front Door. A Web Application Firewall (WAF) policy, like WAF1, needs a platform to integrate with to protect a web application. Azure Front Door, Azure Application Gateway, and Azure CDN (in preview at the time of the discussion) are all supported platforms for deploying a WAF. Deploying Azure Front Door first provides the necessary infrastructure to then associate the WAF policy with WebApp1. Why other options are incorrect: * **B. Add an extension to WebApp1:** While a WAF might *interact* with a web app via extensions or configurations, it is not the *first* step. The WAF itself must first be deployed and configured onto a suitable platform (like Azure Front Door) before integrating it with the web app. * **C. Deploy Azure Firewall:** Azure Firewall is a network-level firewall, designed to protect networks as a whole. It is not the appropriate technology for application-specific protection at the web application level provided by a WAF. Note: The discussion indicates that Azure Application Gateway is also a valid platform for deploying a WAF, but the question asks for the *first* step, and deploying a platform like Azure Front Door is a prerequisite before associating the WAF policy. There is a consensus that option A is the best answer based on the provided information and context.
162
**** [View Question](https://www.examtopics.com/discussions/databricks/view/22588-exam-az-500-topic-4-question-52-discussion/) You have an Azure subscription named Sub1 that contains an Azure Log Analytics workspace named LAW1. You have 500 Azure virtual machines that run Windows Server 2016 and are enrolled in LAW1. You plan to add the System Update Assessment solution to LAW1. You need to ensure that System Update Assessment-related logs are uploaded to LAW1 from 100 of the virtual machines only. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0039900001.png) *(Image contains a drag-and-drop interface with options: Create a computer group; Create a scope configuration; Apply the scope configuration to a solution; Add the solution to the Log Analytics workspace)* **
** The correct sequence of actions is: 1. **Create a computer group:** This groups the 100 desired VMs together for targeted management. 2. **Create a scope configuration:** This defines the specific computer group (created in step 1) to which the System Update Assessment solution should be applied. 3. **Apply the scope configuration to a solution:** This links the scope configuration (created in step 2) to the System Update Assessment solution, restricting its data collection to only the VMs within the specified computer group. **Why other options are incorrect (or the order is incorrect):** Adding the solution to the Log Analytics workspace before defining a scope would apply the solution to *all* VMs, not just the selected 100. The order is crucial for targeted application. The provided image options only allow these three steps in some order. **Note:** The discussion indicates that this question might be outdated because the described feature is deprecated. Several commenters point to this, and one suggests an updated link to Microsoft Learn documentation. The provided solution reflects the steps as presented in the original question; however, current best practices might differ.
163
**** [View Question](https://www.examtopics.com/discussions/databricks/view/23013-exam-az-500-topic-3-question-1-discussion/) You have an Azure subscription that contains a user named User1 and an Azure Container Registry named ContReg1. You enable content trust for ContReg1. You need to ensure that User1 can create trusted images in ContReg1. The solution must use the principle of least privilege. Which two roles should you assign to User1? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. AcrQuarantineReader B. Contributor C. AcrPush D. AcrImageSigner E. AcrQuarantineWriter **
** The correct answer is **C and D: AcrPush and AcrImageSigner**. To create trusted images in a container registry with content trust enabled, a user needs the ability to both push images (AcrPush) and sign those images (AcrImageSigner). The principle of least privilege dictates assigning only the necessary roles. While a Contributor role would grant broader access, it's not the least privileged approach for this specific task. AcrQuarantineReader and AcrQuarantineWriter roles relate to image quarantine management and are not required for creating trusted images. **Why other options are incorrect:** * **A. AcrQuarantineReader:** This role only allows reading quarantined images, not creating or signing them. * **B. Contributor:** This role provides excessive permissions, violating the principle of least privilege. It grants far more access than needed to simply create trusted images. * **E. AcrQuarantineWriter:** This role manages quarantined images, which is not relevant to creating trusted images. **Note:** The discussion shows disagreement on the correct answer. Some users suggest `AcrPush` and `Contributor`, while the most upvoted response and suggested answer favor `AcrPush` and `AcrImageSigner`. The explanation above justifies the latter based on the principle of least privilege and the functionalities of each role. The provided Microsoft documentation links in the discussion are helpful but don't definitively resolve the ambiguity.
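A least-privilege assignment along these lines could be made with the Azure CLI. The UPN `user1@contoso.com` is an assumption for illustration; the registry name comes from the question:

```shell
# Scope both role assignments to the registry itself, not the whole subscription.
ACR_ID=$(az acr show --name ContReg1 --query id --output tsv)

az role assignment create --assignee user1@contoso.com --role AcrPush --scope "$ACR_ID"
az role assignment create --assignee user1@contoso.com --role AcrImageSigner --scope "$ACR_ID"
```

Scoping to the registry's resource ID, rather than the resource group or subscription, is what keeps the assignment least-privileged.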
164
[View Question](https://www.examtopics.com/discussions/databricks/view/23194-exam-az-500-topic-4-question-53-discussion/) You have an Azure subscription named Sub1 that contains the virtual machines shown in the following table. | Resource Group | VM Name | Operating System | |---|---|---| | RG1 | VM1 | Windows Server 2019 | | RG1 | VM2 | Windows Server 2019 | | RG2 | VM3 | Windows Server 2016 | | RG2 | VM4 | Windows Server 2016 | You need to ensure that the virtual machines in RG1 have the Remote Desktop port closed until an authorized user requests access. What should you configure? A. Azure Active Directory (Azure AD) Privileged Identity Management (PIM) B. an application security group C. Azure Active Directory (Azure AD) conditional access D. just in time (JIT) VM access
D. Just in time (JIT) VM access JIT VM access is the correct answer because it allows for the temporary opening of ports, such as the Remote Desktop port, only when an authorized user requests access. This directly addresses the requirement of keeping the Remote Desktop port closed until access is explicitly requested. Why other options are incorrect: * **A. Azure Active Directory (Azure AD) Privileged Identity Management (PIM):** PIM manages privileged accounts and roles, but it doesn't directly control the opening and closing of ports on VMs. * **B. An application security group:** Application security groups manage network traffic based on application and virtual machine tags but do not provide the on-demand access control required. * **C. Azure Active Directory (Azure AD) conditional access:** Conditional Access controls access to applications and resources based on various conditions, but it doesn't directly manage port access on VMs in the way required. Note: The discussion section shows overwhelming agreement on answer D.
165
[View Question](https://www.examtopics.com/discussions/databricks/view/23206-exam-az-500-topic-5-question-7-discussion/) Your company uses Azure DevOps. You need to recommend a method to validate whether the code meets the company's quality standards and code review standards. What should you recommend implementing in Azure DevOps? A. branch folders B. branch permissions C. branch policies D. branch locking
C. branch policies Branch policies in Azure DevOps allow you to enforce requirements before code is merged into a specific branch, such as a main or release branch. This includes things like requiring code reviews, builds, and tests to pass before a merge is allowed. This directly addresses the need to validate code against quality and review standards. Why other options are incorrect: * **A. branch folders:** Folders don't inherently enforce quality or review standards. They are simply for organizing code. * **B. branch permissions:** Permissions control *who* can access and modify branches, not *what* quality standards the code must meet before merging. * **D. branch locking:** Locking prevents changes entirely. This is too restrictive; it doesn't allow for validation and controlled merging. Note: The discussion thread shows significant disagreement about the relevance of this question to the AZ-500 exam. Many commenters believe it is an incorrect or misplaced question for that exam.
166
[View Question](https://www.examtopics.com/discussions/databricks/view/23508-exam-az-500-topic-4-question-36-discussion/) HOTSPOT - You suspect that users are attempting to sign in to resources to which they have no access. You need to create an Azure Log Analytics query to identify failed user sign-in attempts from the last three days. The results must only show users who had more than five failed sign-in attempts. How should you configure the query? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0037900001.png) *(This image contains a code editor for writing the Log Analytics query. The specific content of the image is not provided but is implied by the question.)* Show Suggested Answer Hide Answer Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038000001.png) *(This image shows a completed Log Analytics query. The specific content of the image is not provided but is implied by the question and answer.)*
The correct query would look something like this (the exact syntax might vary slightly depending on the Azure Log Analytics version):

```kusto
SecurityEvent
| where TimeGenerated > ago(3d)
| where AccountType == 'User' and EventID == 4625
| summarize failed_login_attempts = count() by Account
| where failed_login_attempts > 5
```

The discussion overwhelmingly supports this answer, with multiple users stating it's correct and referencing `EventID` and `count()`. Why other options are incorrect: The question specifically asks for a query that identifies failed login attempts. Omitting `EventID` would result in a query that doesn't specifically target failed login events. Omitting `count()` would prevent the aggregation of failed attempts per user. Using a timeframe less than three days would also yield an incorrect result. The specific elements within the "Hot Area" image are not provided in this context to confirm other potential incorrect options.
167
**** [View Question](https://www.examtopics.com/discussions/databricks/view/24497-exam-az-500-topic-2-question-19-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy an Azure AD Application Proxy. Does this meet the goal? A. Yes B. No **
** B. No Deploying Azure AD Application Proxy does not meet the goal. Azure AD Application Proxy is designed to provide secure access to on-premises web applications for external users, not for authenticating users to an Azure HDInsight cluster using on-premises Active Directory credentials. The proxy facilitates single sign-on (SSO) for web apps, but it doesn't bridge the authentication mechanism required between HDInsight and your on-premises AD. Other solutions, such as configuring Azure AD Connect with pass-through authentication or federation, would be necessary to achieve the stated goal. **Why other options are incorrect:** A. Yes is incorrect because Azure AD Application Proxy is not the appropriate solution for this scenario. It addresses a different problem (external access to on-premises apps), not the integration of on-premises AD with an Azure HDInsight cluster for authentication. The discussion overwhelmingly agrees on answer B.
168
[View Question](https://www.examtopics.com/discussions/databricks/view/25088-exam-az-500-topic-4-question-4-discussion/) Your company has an Azure Active Directory (Azure AD) tenant named contoso.com. You plan to create several security alerts by using Azure Monitor. You need to prepare the Azure subscription for the alerts. What should you create first? A. an Azure Storage account B. an Azure Log Analytics workspace C. an Azure event hub D. an Azure Automation account
B. an Azure Log Analytics workspace Azure Log Analytics workspaces are the central repository for storing logs collected by Azure Monitor. Security alerts in Azure Monitor rely on these logs for analysis and triggering alerts. Therefore, you must create a Log Analytics workspace before you can configure and receive security alerts. The workspace is where the log data that triggers the alerts will be stored and processed. Why other options are incorrect: * **A. an Azure Storage account:** While Azure Storage can store data, it's not specifically designed for the log analysis and querying required for Azure Monitor security alerts. Log Analytics workspaces provide the necessary tools and integrations for this purpose. * **C. an Azure event hub:** Event Hubs are used for ingesting high-volume streaming data. While Azure Monitor can ingest data from Event Hubs, it's not the primary or necessary component for configuring security alerts. A Log Analytics workspace is still needed to process and analyze the data. * **D. an Azure Automation account:** Azure Automation is used for automating tasks. While it can be used in conjunction with Azure Monitor for certain security operations, it's not the fundamental prerequisite for creating security alerts. The data processing and analysis still require a Log Analytics workspace. Note: The discussion section strongly supports option B as the correct answer, with multiple users confirming its correctness based on their exam experiences and referencing Microsoft documentation.
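Creating the prerequisite workspace is a single CLI call. The names and region here are placeholders, not from the question:

```shell
# Create the Log Analytics workspace that Azure Monitor alerts will query against.
az monitor log-analytics workspace create \
  --resource-group rg1 --workspace-name law1 --location westeurope
```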
169
[View Question](https://www.examtopics.com/discussions/databricks/view/25226-exam-az-500-topic-5-question-9-discussion/) You have an Azure web app named WebApp1. You upload a certificate to WebApp1. You need to make the certificate accessible to the app code of WebApp1. What should you do? A. Add a user-assigned managed identity to WebApp1. B. Add an app setting to the WebApp1 configuration. C. Enable system-assigned managed identity for WebApp1. D. Configure the TLS/SSL binding for WebApp1.
The correct answer is **B. Add an app setting to the WebApp1 configuration.** To make the uploaded certificate accessible to the WebApp1 app code, you need to add its thumbprint to the `WEBSITE_LOAD_CERTIFICATES` app setting. This allows the application to load and use the certificate. The Azure CLI command referenced in the discussion (`az webapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITE_LOAD_CERTIFICATES=<comma-separated-thumbprints>`) further supports this. Why other options are incorrect: * **A. Add a user-assigned managed identity to WebApp1:** Managed identities are used for authentication and authorization to access other Azure resources, not for directly accessing certificates within the application code itself. * **C. Enable system-assigned managed identity for WebApp1:** Similar to option A, this is for authentication and authorization to other resources, not for accessing certificates within the application code. * **D. Configure the TLS/SSL binding for WebApp1:** This configures the web app to use the certificate for secure communication with clients (HTTPS), not for internal access within the application code. Note: The discussion shows a strong consensus supporting option B as the correct answer.
170
**** [View Question](https://www.examtopics.com/discussions/databricks/view/25755-exam-az-500-topic-3-question-3-discussion/) You need to configure Azure to allow RDP connections from the Internet to a virtual machine named VM1. The solution must minimize the attack surface of VM1. To complete this task, sign in to the Azure portal. **
** The optimal solution involves combining techniques to minimize the attack surface while enabling RDP access to VM1 from the internet. The most secure approach mentioned in the discussion is to deploy VM1 in a private subnet behind an Azure Firewall in another subnet, publishing RDP through a DNAT rule on the firewall (an Azure NAT gateway handles outbound traffic only, so it cannot publish inbound RDP by itself). This approach leverages the Azure Firewall's capabilities to filter and restrict incoming traffic, minimizing the exposed attack surface compared to simply opening port 3389 directly on VM1's NSG. While configuring an NSG rule to allow RDP from "Service Tag - Internet" instead of "Any" is an improvement over the basic NSG approach, the firewall solution is more robust. Using a Bastion host with just-in-time (JIT) VM access is also mentioned as a strong contender, further restricting access to only when it is needed. **Why other options are incorrect (or less optimal):** * **Opening port 3389 directly on VM1's NSG with "Source: Any":** This exposes VM1 to the entire internet, maximizing the attack surface. * **Opening port 3389 on VM1's NSG with "Source: Service Tag - Internet":** This is better than "Any," restricting the rule to internet-sourced traffic, but VM1 is still directly exposed to the internet. The discussion shows disagreement among users about the best approach: some suggest a Bastion host with JIT access, while others prefer Azure Firewall. Both are strong contenders, but the Azure Firewall solution, when properly configured, offers finer-grained control and potentially a better security posture in this case.
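For the simpler NSG variant discussed above, restricting the source to the Internet service tag looks roughly like this. The NSG name and rule priority are assumptions for illustration:

```shell
# Allow RDP only from the Internet service tag rather than from Any.
az network nsg rule create --resource-group rg1 --nsg-name vm1-nsg --name Allow-RDP-Internet \
  --priority 300 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-address-prefixes '*' --destination-port-ranges 3389
```

Even with the service tag, VM1 remains directly reachable, which is why the firewall or Bastion-plus-JIT designs are preferred.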
171
**** [View Question](https://www.examtopics.com/discussions/databricks/view/26001-exam-az-500-topic-4-question-3-discussion/) You have an Azure Storage account named storage1 that has a container named container1. You need to prevent the blobs in container1 from being modified. What should you do? A. From container1, change the access level. B. From container1, add an access policy. C. From container1, modify the Access Control (IAM) settings. D. From storage1, enable soft delete for blobs. **
** B. From container1, add an access policy. The correct answer is B because while the wording is slightly ambiguous, within the context of Azure Blob storage, "adding an access policy" at the container level refers to configuring an immutability policy. This policy prevents blobs within the container from being modified or deleted after they are uploaded, fulfilling the requirement of preventing modification. The discussion highlights some confusion around the terminology, with some users noting that "access policy" typically refers to SAS tokens. However, within the context of the question and Azure's functionality, the most appropriate interpretation is that it refers to the immutability policy setting. **Why other options are incorrect:** * **A. From container1, change the access level:** Changing the access level only controls who can access the blobs, not whether they can be modified once accessed. * **C. From container1, modify the Access Control (IAM) settings:** IAM settings manage access permissions (who can access), not the immutability of blobs themselves. * **D. From storage1, enable soft delete for blobs:** Soft delete protects against accidental deletion, but it doesn't prevent modification of existing blobs. **Note:** The discussion reveals some disagreement about the clarity of the term "access policy" in this context. Some participants point out the common usage of this term for SAS tokens. However, the consensus leans toward the interpretation that the question implies setting an immutability policy within the container's settings.
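The immutability policy interpretation above can be sketched as follows. The resource group `rg1`, the 30-day retention period, and the legal-hold tag are assumptions, not from the question:

```shell
# Time-based retention: blobs in container1 cannot be modified or deleted for 30 days.
az storage container immutability-policy create --resource-group rg1 \
  --account-name storage1 --container-name container1 --period 30

# Alternative: a legal hold blocks writes and deletes until the hold is cleared.
az storage container legal-hold set --resource-group rg1 \
  --account-name storage1 --container-name container1 --tags audit
```

A legal hold may be the better fit when no fixed retention period is known in advance.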
172
**** [View Question](https://www.examtopics.com/discussions/databricks/view/26002-exam-az-500-topic-4-question-5-discussion/) Your company has an Azure subscription named Sub1. Sub1 contains an Azure web app named WebApp1 that uses Azure Application Insights. WebApp1 requires users to authenticate by using OAuth 2.0 client secrets. Developers at the company plan to create a multi-step web test app that performs synthetic transactions emulating user traffic to Web App1. You need to ensure that web tests can run unattended. What should you do first? A. In Microsoft Visual Studio, modify the .webtest file. B. Upload the .webtest file to Application Insights. C. Register the web test app in Azure AD. D. Add a plug-in to the web test app. **
** C. Register the web test app in Azure AD. **Explanation:** Because WebApp1 uses OAuth 2.0 authentication with client secrets, the web test app needs to be authorized to access it. Registering the web test app in Azure Active Directory (Azure AD) is the first step to obtain the necessary credentials (client ID and client secret) to perform OAuth 2.0 authentication on behalf of the web test app. Without this registration, the web test app cannot obtain the access token required to interact with WebApp1. **Why other options are incorrect:** * **A. In Microsoft Visual Studio, modify the .webtest file:** Modifying the web test file is a later step. The file will contain the authentication details *after* the app is registered in Azure AD. * **B. Upload the .webtest file to Application Insights:** Uploading the web test file is done *after* the authentication mechanism is configured. The file needs to contain the correctly configured authentication details first. * **D. Add a plug-in to the web test app:** Adding a plug-in might be necessary for specific functionalities within the test, but it's not the first step; authentication needs to be established beforehand. **Note:** The discussion shows some disagreement on the correct answer, with some users suggesting B as the correct answer. However, the prevailing and most technically sound argument emphasizes the necessity of Azure AD registration first to enable OAuth 2.0 authentication, making C the most accurate answer.
173
[View Question](https://www.examtopics.com/discussions/databricks/view/26003-exam-az-500-topic-2-question-34-discussion/) You have an Azure subscription. You configure the subscription to use a different Azure Active Directory (Azure AD) tenant. What are two possible effects of the change? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Role assignments at the subscription level are lost. B. Virtual machine managed identities are lost. C. Virtual machine disk snapshots are lost. D. Existing Azure resources are deleted.
The correct answers are A and B. A. **Role assignments at the subscription level are lost:** Changing the Azure AD tenant associated with a subscription breaks existing role-based access control (RBAC) assignments. Users previously assigned roles at the subscription level will lose access. This is explicitly stated in the provided discussion: "Users that have been assigned roles using RBAC will lose their access". B. **Virtual machine managed identities are lost:** Managed identities for virtual machines are tied to the Azure AD tenant. Switching tenants renders these identities unusable, requiring recreation or re-enablement. The discussion confirms this: "If you have any managed identities for resources such as Virtual Machines or Logic Apps, you must re-enable or recreate them after the association". C. **Virtual machine disk snapshots are lost:** This is incorrect. Disk snapshots are stored independently of the Azure AD tenant. Changing tenants does not affect them. D. **Existing Azure resources are deleted:** This is incorrect. Changing the Azure AD tenant does not automatically delete existing Azure resources. While access might be disrupted (as with RBAC and managed identities), the resources themselves remain. Note: The provided discussion supports the selected answers. There's no conflicting information presented.
174
[View Question](https://www.examtopics.com/discussions/databricks/view/26100-exam-az-500-topic-3-question-57-discussion/) You have an Azure Kubernetes Service (AKS) cluster that will connect to an Azure Container Registry. You need to use the automatically generated service principal for the AKS cluster to authenticate to the Azure Container Registry. What should you create? A. a secret in Azure Key Vault B. a role assignment C. an Azure Active Directory (Azure AD) user D. an Azure Active Directory (Azure AD) group
B. a role assignment The correct answer is B because to allow the AKS cluster's automatically generated service principal (or system-assigned managed identity, as noted in the discussion) to access the Azure Container Registry, you need to grant it the necessary permissions. This is achieved through a role assignment. The role assignment grants the service principal the `AcrPull` role (or a similar role with appropriate permissions) allowing it to pull images from the registry. Why other options are incorrect: * **A. a secret in Azure Key Vault:** While Azure Key Vault is used for storing secrets, it doesn't directly grant access permissions. The service principal needs permissions, not just credentials. * **C. an Azure Active Directory (Azure AD) user:** Creating a new Azure AD user is unnecessary. The AKS cluster already has a system-assigned managed identity or a service principal. * **D. an Azure Active Directory (Azure AD) group:** Similar to option C, creating a new group is unnecessary; the existing service principal needs the permission, not a new group. Note: The discussion highlights that the question might be outdated. Modern AKS clusters often use system-assigned managed identities instead of explicitly creating service principals. The core concept remains the same: you need to grant permissions (through a role assignment) to the identity used by the AKS cluster to access the Azure Container Registry.
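A hedged CLI sketch of the role assignment (resource names `rg1`, `aks1`, and `acr1` are placeholders; as the note says, modern clusters typically expose a kubelet managed identity rather than an explicit service principal):

```shell
# Look up the registry's resource ID and the identity the cluster uses to pull images.
ACR_ID=$(az acr show --name acr1 --query id --output tsv)
KUBELET_ID=$(az aks show --resource-group rg1 --name aks1 \
  --query identityProfile.kubeletidentity.objectId --output tsv)

# Grant that identity the AcrPull role on the registry.
az role assignment create --assignee "$KUBELET_ID" --role AcrPull --scope "$ACR_ID"
```

`az aks update --resource-group rg1 --name aks1 --attach-acr acr1` performs the same role assignment in a single step.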
175
[View Question](https://www.examtopics.com/discussions/databricks/view/26102-exam-az-500-topic-2-question-36-discussion/) You have an Azure subscription that contains virtual machines. You enable just in time (JIT) VM access to all the virtual machines. You need to connect to a virtual machine by using Remote Desktop. What should you do first? A. From Azure Directory (Azure AD) Privileged Identity Management (PIM), activate the Security administrator user role. B. From Azure Active Directory (Azure AD) Privileged Identity Management (PIM), activate the Owner role for the virtual machine. C. From the Azure portal, select the virtual machine, select Connect, and then select Request access. D. From the Azure portal, select the virtual machine and add the Network Watcher Agent virtual machine extension.
C. From the Azure portal, select the virtual machine, select Connect, and then select Request access. JIT (Just-in-Time) VM access requires a request to open the necessary ports for remote access. Option C accurately reflects this process: you initiate the connection request through the Azure portal, which then opens the required ports temporarily. The consensus among the discussion participants supports this answer. Why other options are incorrect: * **A. From Azure Directory (Azure AD) Privileged Identity Management (PIM), activate the Security administrator user role.** This is incorrect because activating a PIM role doesn't directly open ports for JIT access. PIM manages access control, not network connectivity. * **B. From Azure Active Directory (Azure AD) Privileged Identity Management (PIM), activate the Owner role for the virtual machine.** Similar to option A, this deals with administrative permissions, not the JIT access request process itself. * **D. From the Azure portal, select the virtual machine and add the Network Watcher Agent virtual machine extension.** The Network Watcher Agent is for monitoring network traffic and doesn't facilitate the JIT access request process. Adding this extension won't grant remote desktop access. Note: The discussion shows overwhelming agreement on the correct answer.
176
**** [View Question](https://www.examtopics.com/discussions/databricks/view/27189-exam-az-500-topic-5-question-46-discussion/) You need to ensure that connections through an Azure Application Gateway named Homepage-AGW are inspected for malicious requests. To complete this task, sign in to the Azure portal. You do not need to wait for the task to complete. What steps should be taken to accomplish this? **
** To ensure that connections through the Azure Application Gateway named Homepage-AGW are inspected for malicious requests, you need to enable the Web Application Firewall (WAF) on the Application Gateway and set it to Prevention mode. The exact steps may vary slightly based on the Azure portal version, as shown by the differing instructions provided by users. However, the core steps remain consistent: 1. **Access the Application Gateway:** Locate and select the Homepage-AGW Application Gateway within the Azure portal. Multiple methods exist to achieve this, including searching for "Application gateways" and directly selecting the gateway or using the "All services" option. 2. **Enable the Web Application Firewall:** Navigate to the Web Application Firewall settings within the Homepage-AGW configuration. 3. **Configure WAF Settings:** Select WAF v2 as the tier. Enable the Firewall and set the Firewall mode to "Prevention". This ensures that malicious traffic is blocked, rather than simply logged (Detection mode). 4. **Save Changes:** Save the changes made to the Application Gateway configuration. **Why other options are incorrect (or partially correct):** There's some disagreement in the provided discussion regarding the exact navigation steps and terminology (e.g., "WAF2" vs "WAF V2", different paths to find the Application Gateway). However, the core steps are always the same: enable the WAF and set it to Prevention mode. Some users' responses omit critical steps like setting the WAF tier to V2 and specifying the prevention mode. Therefore, while some aspects of the other users' responses are correct (like locating the Application Gateway and enabling WAF), they lack the completeness of the answer provided above. The key point is enabling the WAF in Prevention mode to block malicious traffic.
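The portal steps above can be sketched with the legacy WAF-config CLI (the resource group and rule-set version are assumptions; newer deployments attach a separate WAF policy resource instead of using this inline configuration):

```shell
# Enable the WAF on the existing Application Gateway in Prevention mode,
# so malicious requests are blocked rather than merely logged.
az network application-gateway waf-config set \
  --resource-group rg1 \
  --gateway-name Homepage-AGW \
  --enabled true \
  --firewall-mode Prevention \
  --rule-set-type OWASP \
  --rule-set-version 3.1
```

This requires the gateway to be on a WAF-capable tier (WAF or WAF_v2), matching step 3 above.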
177
[View Question](https://www.examtopics.com/discussions/databricks/view/29209-exam-az-500-topic-3-question-4-discussion/) You need to add the network interface of a virtual machine named VM1 to an application security group named ASG1. To complete this task, sign in to the Azure portal. How do you add the network interface of VM1 to ASG1 using the Azure portal?
1. In the Azure portal's search bar, type the name of the virtual machine (VM1). 2. Select VM1 from the search results. 3. Under **SETTINGS**, select **Networking**. 4. Select **Application Security Groups**. 5. Select the checkbox for ASG1 (or select the application security group you want to add the network interface to). 6. Select **Save**. Only network interfaces within the same virtual network and location as the application security group can be added. The provided steps reflect the most upvoted and commonly accepted approach from the discussion. Note that there's minor variation in wording across different user responses within the provided discussion, but the core steps remain consistent.
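The same association can be made from the CLI by updating the NIC's IP configuration (the NIC and ipconfig names below are placeholders, since the question doesn't give them):

```shell
# Attach VM1's network interface to ASG1 via its IP configuration.
az network nic ip-config update \
  --resource-group rg1 \
  --nic-name vm1-nic \
  --name ipconfig1 \
  --application-security-groups ASG1
```

As with the portal steps, the NIC and the application security group must be in the same virtual network and region.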
178
[View Question](https://www.examtopics.com/discussions/databricks/view/29536-exam-az-500-topic-5-question-42-discussion/) SIMULATION - You need to configure a weekly backup of an Azure SQL database named Homepage. The backup must be retained for eight weeks. To complete this task, sign in to the Azure portal.
The discussion describes several navigation paths for configuring weekly backups with an eight-week retention policy for the Azure SQL database named "Homepage" in the Azure portal:

* Navigate to the server hosting the Homepage database, open backup management, configure the policies, select weekly backups, and set retention to 8 weeks.
* Search for the database "Homepage", follow its overview to the hosting server, open **Manage Backups**, choose the database, select **Configure retention**, and set long-term weekly retention to 8 weeks.
* Access **SQL databases** from the left-hand menu and proceed from there.

The core steps are consistent across all three: locate the server associated with the database, access backup management, set the backup frequency to weekly, and set the retention period to eight weeks. The discrepancy lies only in the navigation path to reach the backup management settings, which likely reflects ambiguity in the Azure portal's interface or changes to it over time; there is no single universally agreed-upon "correct" path, as different users describe successful methods.
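Whatever the portal path, the resulting long-term retention policy can be expressed in a single CLI call (the server and resource group names are assumptions; `P8W` is the ISO 8601 duration for eight weeks):

```shell
# Set long-term retention for the Homepage database: keep one weekly backup
# for eight weeks.
az sql db ltr-policy set \
  --resource-group rg1 \
  --server homepage-server \
  --name Homepage \
  --weekly-retention P8W
```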
179
[View Question](https://www.examtopics.com/discussions/databricks/view/30023-exam-az-500-topic-4-question-7-discussion/) You onboard Azure Sentinel. You connect Azure Sentinel to Azure Security Center. You need to automate the mitigation of incidents in Azure Sentinel. The solution must minimize administrative effort. What should you create? A. an alert rule B. a playbook C. a function app D. a runbook
B. a playbook A playbook in Azure Sentinel automates responses to security incidents. It can be configured to run automatically when specific alerts or incidents are triggered, minimizing manual intervention. This directly addresses the requirement to automate mitigation and minimize administrative effort. Why other options are incorrect: * **A. an alert rule:** Alert rules only detect and notify about incidents; they don't automate responses. * **C. a function app:** While function apps *can* be used for automation, they require significantly more setup and configuration than a purpose-built playbook within Azure Sentinel. It's not the most efficient solution for this specific scenario. * **D. a runbook:** Runbooks are associated with Azure Automation, not Azure Sentinel. While conceptually similar, using a runbook would add complexity and wouldn't leverage Sentinel's built-in automation capabilities. Note: The discussion overwhelmingly supports option B as the correct answer. However, there is some ambiguity on whether Azure Sentinel or Microsoft Sentinel is the correct terminology. The answer assumes the context of Azure Sentinel as presented in the question.
180
**** [View Question](https://www.examtopics.com/discussions/databricks/view/30025-exam-az-500-topic-2-question-39-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.onmicrosoft.com. The User administrator role is assigned to a user named Admin1. An external partner has a Microsoft account that uses the [email protected] sign in. Admin1 attempts to invite the external partner to sign in to the Azure AD tenant and receives the following error message: `Unable to invite user [email protected] Generic authorization exception.` You need to ensure that Admin1 can invite the external partner to sign in to the Azure AD tenant. What should you do? A. From the Roles and administrators blade, assign the Security administrator role to Admin1. B. From the Organizational relationships blade, add an identity provider. C. From the Custom domain names blade, add a custom domain. D. From the Users blade, modify the External collaboration settings. **
** D. From the Users blade, modify the External collaboration settings. The error message indicates a permission issue preventing Admin1 from inviting external users. While Admin1 has the User administrator role, this role doesn't automatically grant permission to invite external guests. The External collaboration settings control guest user invitations. Modifying these settings to allow guest invitations resolves the issue. The exact location of this setting, as noted in the discussion, may vary slightly (e.g., under "Azure Active Directory -> External Identities -> External collaboration settings" or "AAD -> Users -> User Settings -> "Manage External Collaboration Settings"). **Why other options are incorrect:** * **A. From the Roles and administrators blade, assign the Security administrator role to Admin1:** The Security administrator role provides broader administrative privileges, but it's not necessary for inviting guests. The User administrator role should already provide sufficient permissions *if* the external collaboration settings are properly configured. * **B. From the Organizational relationships blade, add an identity provider:** Adding an identity provider is relevant for federated identities, not for simply inviting guest users from other Microsoft accounts. * **C. From the Custom domain names blade, add a custom domain:** Adding a custom domain affects the tenant's branding and email addresses, but it's not directly related to the ability to invite guest users. **Note:** The discussion shows some variation in the exact location of the "External collaboration settings". This reflects the potential for slight UI changes in Azure AD over time. The core solution remains the same: enabling the appropriate setting to allow guest invitations.
181
[View Question](https://www.examtopics.com/discussions/databricks/view/30080-exam-az-500-topic-4-question-6-discussion/) You have an Azure subscription named Subscription1. You deploy a Linux virtual machine named VM1 to Subscription1. You need to monitor the metrics and the logs of VM1. What should you use? A. the AzurePerformanceDiagnostics extension B. Azure HDInsight C. Linux Diagnostic Extension (LAD) 3.0 D. Azure Analysis Services
C. Linux Diagnostic Extension (LAD) 3.0 The correct answer is C because the Linux Diagnostic Extension (LAD) 3.0 is specifically designed for collecting metrics and logs on Linux VMs in Azure. While there's some disagreement in the discussion, the consensus among highly-voted comments points to C as the correct answer. Why other options are incorrect: * **A. the AzurePerformanceDiagnostics extension:** Per the discussion, this extension targets Windows VMs, not Linux VMs. * **B. Azure HDInsight:** This is a Hadoop-based big data analytics service, not a VM monitoring tool. * **D. Azure Analysis Services:** This is a data warehousing and business intelligence service, not a VM monitoring tool. Note: There is some disagreement in the provided discussion regarding the suitability of the AzurePerformanceDiagnostics extension. However, the majority of highly-voted comments support using the Linux Diagnostic Extension for Linux VMs.
182
**** [View Question](https://www.examtopics.com/discussions/databricks/view/30291-exam-az-500-topic-2-question-4-discussion/) Your network contains an Active Directory forest named contoso.com. The forest contains a single domain. You have an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. You plan to deploy Azure AD Connect and to integrate Active Directory and the Azure AD tenant. You need to recommend an integration solution that meets the following requirements: ✑ Ensures that password policies and user logon restrictions apply to user accounts that are synced to the tenant ✑ Minimizes the number of servers required for the solution. Which authentication method should you include in the recommendation? A. federated identity with Active Directory Federation Services (AD FS) B. password hash synchronization with seamless single sign-on (SSO) C. pass-through authentication with seamless single sign-on (SSO) **
** C. Pass-through authentication with seamless single sign-on (SSO) **Explanation:** The question emphasizes two key requirements: enforcing on-premises password policies and minimizing servers. Pass-through authentication (PTA) directly verifies user credentials against the on-premises Active Directory, ensuring that the on-premises password policies and logon restrictions are enforced. This method also minimizes the number of servers needed compared to federated identity (which requires AD FS servers). Password hash synchronization (PHS) doesn't directly enforce on-premises policies in the same way; it synchronizes password hashes, and inconsistencies might arise regarding password expiration and other restrictions. **Why other options are incorrect:** * **A. Federated identity with Active Directory Federation Services (AD FS):** While AD FS provides strong security, it introduces complexity and requires additional servers (the AD FS servers themselves), violating the requirement to minimize servers. * **B. Password hash synchronization with seamless single sign-on (SSO):** PHS synchronizes password hashes, but it doesn't guarantee immediate enforcement of on-premises password policies and logon restrictions. There can be a delay in policy updates being reflected in Azure AD. Users might be able to log in even if their accounts are locked or passwords expired on-premises. **Note:** The discussion shows a consensus supporting option C as the best solution, although some comments acknowledge potential confusion that might arise if the questions were phrased differently.
183
[View Question](https://www.examtopics.com/discussions/databricks/view/31217-exam-az-500-topic-4-question-8-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com. You need to configure diagnostic settings for contoso.com. The solution must meet the following requirements: ✑ Retain logs for two years. ✑ Query logs by using the Kusto query language. ✑ Minimize administrative effort. Where should you store the logs? A. an Azure event hub B. an Azure Log Analytics workspace C. an Azure Storage account
B. an Azure Log Analytics workspace Log Analytics workspaces are designed for storing and querying logs using the Kusto Query Language (KQL), and their built-in retention and analysis features minimize administrative effort. While Azure Storage accounts can store logs, they don't natively support KQL querying, and Azure Event Hubs are intended for streaming data rather than long-term storage and querying. The discussion shows widespread agreement on this answer; one commenter questioned whether a Log Analytics workspace can retain logs for two years, but workspace retention can be configured for up to 730 days (two years), which satisfies the requirement. **Why other options are incorrect:** * **A. an Azure event hub:** Event Hubs are designed for ingesting and processing high-volume streaming data; they are not optimized for long-term storage or for querying with KQL. * **C. an Azure Storage account:** Storage accounts are general-purpose storage suitable for many data types, but querying logs there would require additional tools and processing, increasing administrative effort, and they lack the built-in KQL capabilities of a Log Analytics workspace.
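Once the Azure AD diagnostic logs are routed to the workspace, they can be queried with KQL, for example (which tables exist depends on the log categories selected in the diagnostic setting):

```kusto
// Count sign-in attempts over the last 30 days, grouped by result code.
SigninLogs
| where TimeGenerated > ago(30d)
| summarize SignIns = count() by ResultType
| order by SignIns desc
```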
184
**** [View Question](https://www.examtopics.com/discussions/databricks/view/36314-exam-az-500-topic-3-question-26-discussion/) SIMULATION - You need to deploy an Azure firewall to a virtual network named VNET3. To complete this task, sign in to the Azure portal and modify the Azure resources. This task might take several minutes to complete. You can perform other tasks while the task completes. What steps are required to deploy an Azure Firewall to VNET3? **
** To deploy an Azure Firewall to VNET3, you must first create a subnet named "AzureFirewallSubnet" within VNET3. This subnet should have an appropriate IP address range and *must be dedicated solely to the Azure Firewall*. No other resources should be placed within this subnet. Then, you add the Azure Firewall via the Azure portal, specifying the resource group (likely the same as VNET3), a name for the firewall, the same region as VNET3, and a public IP address (either an existing one or a new one created during this process). It's recommended that the `AzureFirewallSubnet` uses a /26 subnet size, although this isn't explicitly stated as a requirement in the provided text. **Why other options are incorrect:** The question is a simulation task; there isn't a multiple choice selection of incorrect options to compare against. The provided discussion highlights the criticality of creating the dedicated `AzureFirewallSubnet` and that it needs its own address space; failing to do so will prevent successful deployment. The discussion also notes the *recommendation* of a /26 subnet size, but doesn't state it as a strict requirement for successful deployment in this particular scenario. **Note:** There's a minor disagreement in the discussion about the necessity of a /26 subnet size for the AzureFirewallSubnet. The Microsoft documentation referenced supports the recommendation of a /26 subnet size, while the core steps for successful deployment don't explicitly mandate it in this specific question.
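A CLI sketch of the same deployment (the resource group, region, and address prefix are assumptions; the `az network firewall` commands come from the `azure-firewall` CLI extension):

```shell
# The firewall needs a dedicated subnet named exactly AzureFirewallSubnet;
# a /26 is the recommended size.
az network vnet subnet create \
  --resource-group rg1 \
  --vnet-name VNET3 \
  --name AzureFirewallSubnet \
  --address-prefixes 10.0.3.0/26

# A Standard-SKU static public IP for the firewall.
az network public-ip create --resource-group rg1 --name fw-pip \
  --sku Standard --allocation-method Static

# Create the firewall in the same region as VNET3, then bind it to the
# subnet and public IP.
az network firewall create --resource-group rg1 --name fw1 --location eastus
az network firewall ip-config create --resource-group rg1 \
  --firewall-name fw1 --name fw-ipconfig \
  --public-ip-address fw-pip --vnet-name VNET3
```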
185
[View Question](https://www.examtopics.com/discussions/databricks/view/36651-exam-az-500-topic-3-question-25-discussion/) SIMULATION - You need to configure network connectivity between a virtual network named VNET1 and a virtual network named VNET2. The solution must ensure that virtual machines connected to VNET1 can communicate with virtual machines connected to VNET2. To complete this task, sign in to the Azure portal and modify the Azure resources.
The solution requires configuring VNet peering between VNET1 and VNET2, enabling one-way communication from VNET1 to VNET2. The suggested answer outlines the steps to achieve this: 1. Create a VNet peering between VNET1 and VNET2 in the Azure portal. 2. Ensure "Allow virtual network access from VNET1 to remote virtual network" (VNET2) is enabled. 3. Ensure "Allow virtual network access from remote network (VNET2) to VNET1" is disabled. This configuration allows VMs in VNET1 to communicate with VMs in VNET2 but prevents the reverse. **Why other options are incorrect:** The discussion reveals some conflicting approaches. While the suggested answer focuses on enabling one-way traffic using the peering settings, some users reported success using Network Security Groups (NSGs) to block traffic from VNET2 to VNET1. The disagreement stems from potentially different versions of the Azure portal or differing interpretations of the question's requirements (some users seem to expect bidirectional communication even when the question specifies only one way). The provided solution is consistent with the most-upvoted and accepted answer.
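A CLI sketch of the peering configuration described above (the resource group name is a placeholder; note that both peering links must exist for the peering to reach the Connected state, while the access flags control which direction is allowed):

```shell
VNET1_ID=$(az network vnet show --resource-group rg1 --name VNET1 --query id --output tsv)
VNET2_ID=$(az network vnet show --resource-group rg1 --name VNET2 --query id --output tsv)

# Forward link: allow VNET1 to reach VNET2.
az network vnet peering create --resource-group rg1 --vnet-name VNET1 \
  --name VNET1-to-VNET2 --remote-vnet "$VNET2_ID" --allow-vnet-access

# Reverse link: required for the peering to connect, but created without
# --allow-vnet-access so VNET2 cannot initiate access to VNET1.
az network vnet peering create --resource-group rg1 --vnet-name VNET2 \
  --name VNET2-to-VNET1 --remote-vnet "$VNET1_ID"
```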
186
**** [View Question](https://www.examtopics.com/discussions/databricks/view/36664-exam-az-500-topic-5-question-41-discussion/) You need to enable Advanced Data Security for the SQLdb1 Azure SQL database. The solution must ensure that Azure Advanced Threat Protection (ATP) alerts are sent to [email protected]. To complete this task, sign in to the Azure portal and modify the Azure resources. How do you enable Advanced Data Security for SQLdb1 and configure ATP alerts to be sent to [email protected]? **
** There is some disagreement in the provided discussion regarding the exact steps. However, the most upvoted and recent responses suggest the following general approach: The most reliable method described in the discussion involves navigating to the Azure SQL server, not directly to the database. Then, within the server's Security or Security Center settings, you'll find the Advanced Threat Protection configuration. Here, you specify the email address ([email protected]) for receiving alerts. One commenter mentions that enabling Advanced Data Security at the database level will produce a warning recommending enabling it at the server level instead, unless specifically required. Another commenter describes an even more recent method (2022) utilizing "Defender for Cloud for SQL" with an "Upgrade" or "Skip" option that enables a security menu. **Why other options are incorrect (or less accurate):** The initial suggested answer focusing solely on the database-level configuration may be outdated or incomplete. The discussion highlights that Microsoft has changed the process over time. While some early steps (finding SQLdb1) might remain, the central configuration for ATP alerts appears to be managed at the Azure SQL *server* level, not the individual database level, based on the most recent and highly upvoted user feedback. **Note:** The provided solutions reflect a process that has evolved over time. The most accurate approach depends on the current Azure portal interface and version. Always consult the official Microsoft documentation for the most up-to-date instructions.
187
**** [View Question](https://www.examtopics.com/discussions/databricks/view/3791-exam-az-500-topic-2-question-3-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You create a site-to-site VPN between the virtual network and the on-premises network. Does this meet the goal? A. Yes B. No **
** B. No **Explanation:** While a Site-to-Site VPN provides network connectivity, it does not directly enable authentication using on-premises Active Directory credentials. To achieve this, Azure AD Domain Services (Azure AD DS) is required. Azure AD DS synchronizes your on-premises Active Directory with Azure, allowing users to authenticate using their existing credentials within the Azure environment, including the HDInsight cluster. The VPN is a separate networking concern. The question focuses on authentication, not network connectivity. The discussion reveals disagreement on the correct answer. While some participants initially suggest that a VPN is sufficient, others correctly point out the necessity of Azure AD DS for authentication. The final consensus in the discussion leans towards "No" as the correct answer. **Why other options are incorrect:** A. Yes: This is incorrect because a VPN alone does not handle the authentication process. It only establishes network connectivity. Authentication requires a mechanism to verify on-premises credentials against the Azure environment; this is done through Azure AD DS, not solely via a VPN.
188
[View Question](https://www.examtopics.com/discussions/databricks/view/38453-exam-az-500-topic-4-question-22-discussion/) You have an Azure subscription that contains a user named Admin1 and a virtual machine named VM1. VM1 runs Windows Server 2019 and was deployed by using an Azure Resource Manager template. VM1 is the member of a backend pool of a public Azure Basic Load Balancer. Admin1 reports that VM1 is listed as Unsupported on the Just in Time VM access blade of Azure Security Center. You need to ensure that Admin1 can enable just in time (JIT) VM access for VM1. What should you do? A. Create and configure a network security group (NSG). B. Create and configure an additional public IP address for VM1. C. Replace the Basic Load Balancer with an Azure Standard Load Balancer. D. Assign an Azure Active Directory Premium Plan 1 license to Admin1.
The correct answer is **A. Create and configure a network security group (NSG).** Just-in-Time (JIT) VM access in Azure Security Center requires a Network Security Group (NSG) to be associated with the VM. The discussion explains that a missing NSG is a reason a VM might be listed as "Unsupported" for JIT access; the other options do not address that prerequisite. Why other options are incorrect: * **B. Create and configure an additional public IP address for VM1:** Adding a public IP address doesn't satisfy the NSG requirement that JIT VM access depends on. * **C. Replace the Basic Load Balancer with an Azure Standard Load Balancer:** The load balancer SKU is irrelevant to JIT VM access. * **D. Assign an Azure Active Directory Premium Plan 1 license to Admin1:** The user's licensing has no bearing on the VM's ability to support JIT access; the issue lies with the VM's configuration, not the user's permissions. Note: The provided discussion suggests there are other possible reasons a VM might be listed as "Unsupported" besides a missing NSG. However, based on the question and context, option A is the most direct and likely solution.
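A minimal sketch of satisfying the NSG prerequisite from the CLI (the NSG and NIC names are assumptions; once the NSG is in place, Security Center adds and manages the JIT rules itself):

```shell
# Create an NSG and associate it with VM1's network interface so the VM
# becomes eligible for JIT VM access.
az network nsg create --resource-group rg1 --name VM1-nsg
az network nic update --resource-group rg1 --name vm1-nic \
  --network-security-group VM1-nsg
```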
189
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38982-exam-az-500-topic-4-question-25-discussion/) You have an Azure Active Directory (Azure AD) tenant and a root management group. You create 10 Azure subscriptions and add the subscriptions to the root management group. You need to create an Azure Blueprints definition that will be stored in the root management group. What should you do first? A. Modify the role-based access control (RBAC) role assignments for the root management group. B. Add an Azure Policy definition to the root management group. C. Create a user-assigned identity. D. Create a service principal. **
** A. Modify the role-based access control (RBAC) role assignments for the root management group. **Explanation:** The discussion highlights a user's experience where they were unable to create a blueprint in the root management group due to insufficient permissions. Elevating their access to "Contributor" in the root management group resolved the issue. This strongly suggests that appropriate RBAC permissions are a prerequisite for creating Azure Blueprints. While other steps might be involved in a complete blueprint deployment, modifying RBAC is the necessary *first* step to even begin the process. **Why other options are incorrect:** * **B. Add an Azure Policy definition to the root management group:** Azure Policies and Blueprints are related but distinct. Policies govern resource creation and configuration, while Blueprints are used to deploy and manage entire sets of resources based on pre-defined artifacts. Policies are not a prerequisite for creating the blueprint definition itself. * **C. Create a user-assigned identity:** User-assigned managed identities are useful for authentication and authorization within Azure, but are not directly required to create a blueprint definition. The required permissions are managed through RBAC. * **D. Create a service principal:** Similar to user-assigned identities, service principals manage application access. This is not the initial step needed to create the blueprint definition. **Note:** The discussion shows some disagreement about whether the user's experience is directly applicable to the question, as the question doesn't specify the user's initial permissions. However, the overwhelming consensus, supported by Microsoft documentation referenced in the discussion, points towards the need for sufficient RBAC permissions before creating a blueprint.
190
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38985-exam-az-500-topic-2-question-48-discussion/) You have an Azure subscription. You enable Azure Active Directory (Azure AD) Privileged Identity Management (PIM). Your company's security policy for administrator accounts has the following conditions: ✑ The accounts must use multi-factor authentication (MFA). ✑ The accounts must use 20-character complex passwords. ✑ The passwords must be changed every 180 days. ✑ The accounts must be managed by using PIM. You receive multiple alerts about administrators who have not changed their password during the last 90 days. You need to minimize the number of generated alerts. Which PIM alert should you modify? A. Roles are being assigned outside of Privileged Identity Management B. Roles don't require multi-factor authentication for activation C. Administrators aren't using their privileged roles D. Potential stale accounts in a privileged role **
** D. Potential stale accounts in a privileged role **Explanation:** The question states that alerts are triggered because administrators haven't changed their passwords within 90 days. Option D, "Potential stale accounts in a privileged role," directly addresses this issue. A "stale account" implies an account that hasn't been actively used for a period, triggering the alerts. Modifying this alert's configuration (likely adjusting the inactivity period) would reduce the number of false positives related to password change frequency. **Why other options are incorrect:** * **A. Roles are being assigned outside of Privileged Identity Management:** This alert is unrelated to password changes. It focuses on ensuring that all privileged role assignments are managed through PIM. * **B. Roles don't require multi-factor authentication for activation:** This alert is concerned with MFA requirements, not password age. * **C. Administrators aren't using their privileged roles:** This alert focuses on account usage, not password changes. **Note:** The discussion highlights that the question is outdated (as of 09/2022) because the "Potential stale accounts" alert is no longer triggered based on the last password change date. The alert now triggers based on account inactivity. Therefore, while D remains the best answer *in the context of the original question*, it's crucial to note this discrepancy between the question's premise and the current functionality of the alert. The ideal solution would be to update the question to reflect the current behavior of the PIM alert.
191
[View Question](https://www.examtopics.com/discussions/databricks/view/38987-exam-az-500-topic-2-question-42-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains a user named User1. You plan to publish several apps in the tenant. You need to ensure that User1 can grant admin consent for the published apps. Which two possible user roles can you assign to User1 to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Security administrator B. Cloud application administrator C. Application administrator D. User administrator E. Application developer
B and C. The correct answer is Cloud Application Administrator (B) and Application Administrator (C). Both roles have the necessary permissions to grant admin consent for applications within the Azure AD tenant. The Cloud Application Administrator has broad permissions over enterprise applications, including granting admin consent. Similarly, the Application Administrator role allows management of applications and service principals, enabling admin consent granting. **Why other options are incorrect:** * **A. Security administrator:** This role focuses on security-related tasks and doesn't inherently include the ability to grant admin consent for applications. * **D. User administrator:** This role manages users within the Azure AD tenant but lacks the application-specific permissions required for granting admin consent. * **E. Application developer:** This role is for developing applications, not for managing their permissions or granting admin consent. **Note:** The provided discussion shows a consensus on the correct answer.
192
[View Question](https://www.examtopics.com/discussions/databricks/view/40221-exam-az-500-topic-4-question-31-discussion/) You are collecting events from Azure virtual machines to an Azure Log Analytics workspace. You plan to create alerts based on the collected events. You need to identify which Azure services can be used to create the alerts. Which two services should you identify? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Azure Monitor B. Azure Security Center C. Azure Analysis Services D. Azure Sentinel E. Azure Advisor
The correct answer is A and D: Azure Monitor and Azure Sentinel. Both Azure Monitor and Azure Sentinel can be used to create alerts based on events collected in a Log Analytics workspace. Azure Monitor allows you to create custom alert rules based on specific criteria within your logs. Azure Sentinel, a Security Information and Event Management (SIEM) solution, also provides robust alert creation capabilities, often leveraging data from Azure Monitor and other sources for threat detection and response. Azure Security Center (B) automatically generates alerts based on its built-in security policies and Azure Defender settings, but doesn't offer the same level of customizable alert creation as Azure Monitor or Sentinel. Azure Analysis Services (C) and Azure Advisor (E) are not designed for alert creation based on events from virtual machines. There is a general consensus among the discussion participants that the correct answer is A and D, though some users simply state A and D without further explanation.
193
**** [View Question](https://www.examtopics.com/discussions/databricks/view/40222-exam-az-500-topic-3-question-42-discussion/) You have an Azure Active Directory (Azure AD) tenant named Contoso.com and an Azure Kubernetes Service (AKS) cluster AKS1. You discover that AKS1 cannot be accessed by using accounts from Contoso.com. You need to ensure AKS1 can be accessed by using accounts from Contoso.com. The solution must minimize administrative effort. What should you do first? A. From Azure, recreate AKS1. B. From AKS1, upgrade the version of Kubernetes. C. From Azure AD, implement Azure AD Premium P2 D. From Azure AD, configure the User settings. **
** A. From Azure, recreate AKS1. **Explanation:** The discussion highlights that Azure AD integration with AKS is typically only possible during cluster creation. While newer options for post-creation integration exist, they involve manual configuration and are not the most efficient solution to minimize administrative effort. Recreating AKS1 with Azure AD integration enabled from the outset is the quickest and least complex method to resolve the access issue. **Why other options are incorrect:** * **B. From AKS1, upgrade the version of Kubernetes:** Upgrading Kubernetes doesn't directly enable Azure AD authentication. * **C. From Azure AD, implement Azure AD Premium P2:** Azure AD Premium P2 is not directly related to AKS authentication; it provides features unrelated to this problem. * **D. From Azure AD, configure the User settings:** Adjusting user settings in Azure AD won't grant access to an AKS cluster that isn't configured for Azure AD integration. **Note:** The discussion reveals some disagreement regarding the feasibility of post-creation Azure AD integration with AKS. While some comments suggest that it's possible with manual configuration (contradicting the "minimize administrative effort" requirement), the consensus among highly voted responses favors recreating the cluster as the best approach for minimizing effort.
194
**** [View Question](https://www.examtopics.com/discussions/databricks/view/40223-exam-az-500-topic-4-question-21-discussion/) You have an Azure subscription that contains 100 virtual machines and has Azure Defender enabled. You plan to perform a vulnerability scan of each virtual machine. You need to deploy the vulnerability scanner extension to the virtual machines by using an Azure Resource Manager template. Which two values should you specify in the code to automate the deployment of the extension to the virtual machines? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. the user-assigned managed identity B. the workspace ID C. the Azure Active Directory (Azure AD) ID D. the Key Vault managed storage account key E. the system-assigned managed identity F. the primary shared key **
** B and E The correct answer is B (the workspace ID) and E (the system-assigned managed identity). To deploy the vulnerability scanner extension via an ARM template, you need to specify the workspace ID to identify where the scan results should be stored. The system-assigned managed identity allows the virtual machines to authenticate with the necessary services to download and execute the extension without requiring individual credentials. **Why other options are incorrect:** * **A (the user-assigned managed identity):** While user-assigned managed identities are a valid approach for managing permissions, a system-assigned identity is simpler and more efficient for this scenario, given the scale of 100 VMs. The question doesn't specify a requirement for user-assigned identities, and system-assigned is generally preferred for this task unless more granular control is required. * **C (the Azure Active Directory (Azure AD) ID):** The Azure AD ID itself isn't directly used for deploying the extension. The system-assigned managed identity implicitly uses the tenant ID associated with the subscription. * **D (the Key Vault managed storage account key):** This is not necessary for deploying the extension; the managed identity handles authentication. * **F (the primary shared key):** Using shared keys is less secure and not best practice when using managed identities. **Note:** The discussion shows some disagreement and uncertainty on the best choice between system-assigned and user-assigned managed identities. However, the consensus leans towards the system-assigned identity being sufficient and more efficient for this large-scale deployment, making it the preferable answer in this context.
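As a rough illustration of where those two values appear in a template, a trimmed ARM fragment might look like the following. This is a sketch, not a verified deployment: the `apiVersion` values, parameter names, and `typeHandlerVersion` are assumptions, and a full Log Analytics agent deployment also passes a workspace key via `protectedSettings` (omitted here, since the exam answer rejects the shared key).

```json
"resources": [
  {
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2023-03-01",
    "name": "[parameters('vmName')]",
    "identity": { "type": "SystemAssigned" }
  },
  {
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2023-03-01",
    "name": "[concat(parameters('vmName'), '/MicrosoftMonitoringAgent')]",
    "properties": {
      "publisher": "Microsoft.EnterpriseCloud.Monitoring",
      "type": "MicrosoftMonitoringAgent",
      "typeHandlerVersion": "1.0",
      "autoUpgradeMinorVersion": true,
      "settings": { "workspaceId": "[parameters('workspaceId')]" }
    }
  }
]
```

The two exam-relevant values are visible here: the system-assigned identity on the VM resource and the workspace ID in the extension's `settings`.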
195
**** [View Question](https://www.examtopics.com/discussions/databricks/view/43772-exam-az-500-topic-3-question-49-discussion/) You have a web app hosted on an on-premises server that is accessed by using a URL of https://www.contoso.com. You plan to migrate the web app to Azure. You will continue to use https://www.contoso.com. You need to enable HTTPS for the Azure web app. What should you do first? A. Export the public key from the on-premises server and save the key as a P7b file. B. Export the private key from the on-premises server and save the key as a PFX file that is encrypted by using TripleDES. C. Export the public key from the on-premises server and save the key as a CER file. D. Export the private key from the on-premises server and save the key as a PFX file that is encrypted by using AES256. **
** B. Export the private key from the on-premises server and save the key as a PFX file that is encrypted by using TripleDES. To enable HTTPS for the Azure web app using an existing certificate, you need the private key. Option B correctly identifies this requirement and specifies the PFX format, which is commonly used for storing both the public and private keys securely. While AES256 is a stronger encryption algorithm (option D), the suggested answer indicates that TripleDES is acceptable in this context. **Why other options are incorrect:** * **A:** Only exporting the public key is insufficient. The private key is essential for decryption and secure communication. * **C:** Similar to A, exporting only the public key is not enough to enable HTTPS. * **D:** While AES256 is a stronger encryption algorithm, the suggested answer is B. The discussion highlights a disagreement on whether AES256 is the better option, but the consensus is that the correct exam answer is B. **Note:** The discussion shows a significant disagreement amongst users regarding the best practice (AES256 vs. TripleDES). While many advocate for AES256 as a more secure standard, the suggested answer and the exam seemingly prioritize the use of TripleDES in this specific scenario.
196
[View Question](https://www.examtopics.com/discussions/databricks/view/43871-exam-az-500-topic-5-question-25-discussion/) You have an Azure subscription that contains four Azure SQL managed instances. You need to evaluate the vulnerability of the managed instances to SQL injection attacks. What should you do first? A. Create an Azure Sentinel workspace. B. Enable Advanced Data Security. C. Add the SQL Health Check solution to Azure Monitor. D. Create an Azure Advanced Threat Protection (ATP) instance.
B. Enable Advanced Data Security (now Azure Defender for SQL). Explanation: Advanced Data Security (now rebranded as Azure Defender for SQL) provides comprehensive security capabilities, including vulnerability assessments and threat detection, directly addressing the need to evaluate SQL injection vulnerabilities. Enabling this feature allows for immediate identification of potential SQL injection weaknesses within the managed instances. Why other options are incorrect: * **A. Create an Azure Sentinel workspace:** While Azure Sentinel is a valuable security information and event management (SIEM) system, it's not the *first* step for assessing SQL injection vulnerabilities. It requires data ingestion and configuration before providing relevant insights. * **C. Add the SQL Health Check solution to Azure Monitor:** Azure Monitor's SQL Health Check primarily focuses on the operational health of the SQL instances, not directly on security vulnerabilities like SQL injection. * **D. Create an Azure Advanced Threat Protection (ATP) instance:** Azure ATP (now integrated into Microsoft Defender for Cloud) is focused on broader threat protection across Azure resources, not specifically tailored to immediate SQL injection vulnerability assessment. While related, it is a broader solution and not the most direct initial step. Note that several commenters in the discussion point out that ATP functionality is now part of Azure Defender for SQL. Note: There is some disagreement in the discussion regarding the best approach. Some users argue for option D, but the suggested answer and general consensus lean towards option B as the most direct and efficient first step for evaluating SQL injection vulnerabilities. The renaming of the service from "Advanced Data Security" to "Azure Defender for SQL" is also acknowledged.
197
[View Question](https://www.examtopics.com/discussions/databricks/view/43957-exam-az-500-topic-2-question-51-discussion/) You have an Azure subscription. You plan to create a custom role-based access control (RBAC) role that will provide permission to read an Azure Storage account. Which property of the RBAC role definition should you configure? A. NotActions [] B. DataActions [] C. AssignableScopes [] D. Actions []
D. Actions [] Explanation: To grant read access to an Azure Storage account at the account level, you should configure the `Actions` property of the RBAC role definition. The `Actions` array specifies management operations allowed on the Azure resource (in this case, the storage account). While `DataActions` deals with data-level permissions within a storage account (e.g., blobs), the question asks for account-level read access, which is controlled by `Actions`. Why other options are incorrect: * **A. NotActions []:** This property is used to explicitly deny specific actions, not grant them. * **B. DataActions []:** This property grants permissions to perform operations *on data within* the storage account (e.g., reading specific blobs), not on the storage account itself. * **C. AssignableScopes []:** This defines where the role can be assigned (scope), not the permissions granted by the role. Note: There is some disagreement in the discussion regarding the best approach. Some comments suggest that `DataActions` might be relevant depending on the specific access level desired. However, based on the question's wording focusing on granting read access to the *storage account* itself (not individual blobs or data within it), the most appropriate answer is `Actions`.
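A custom role definition of this kind is expressed as JSON (for example, the file passed to `az role definition create --role-definition`). The sketch below uses a placeholder subscription GUID and an invented role name; note that the read permission goes in `Actions`, while `DataActions` stays empty:

```json
{
  "Name": "Storage Account Reader (custom)",
  "Description": "Read storage accounts at the management plane.",
  "Actions": [ "Microsoft.Storage/storageAccounts/read" ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}
```

Had the question asked for access to blob data inside the account, an entry such as `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read` would belong in `DataActions` instead.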
198
[View Question](https://www.examtopics.com/discussions/databricks/view/44167-exam-az-500-topic-4-question-45-discussion/) You have an Azure subscription named Subscription1. You need to view which security settings are assigned to Subscription1 by default. Which Azure policy or initiative definition should you review? A. the Audit diagnostic setting policy definition B. the Enable Monitoring in Azure Security Center initiative definition C. the Enable Azure Monitor for VMs initiative definition D. the Azure Monitor solution 'Security and Audit' must be deployed policy definition
The provided options are all incorrect according to the discussion. The correct method to view default security settings for Subscription1 is to navigate to Azure Security Center -> Security Policies. The discussion highlights that none of the listed options accurately reflect the process. Several users point out that option B ("the Enable Monitoring in Azure Security Center initiative definition") doesn't accurately describe the location of default policies, and also may not even exist, depending on the preview state at the time of the exam. There is significant disagreement amongst users regarding the correct answer and the validity of the question itself. The suggested answer (B) is widely refuted in the discussion.
199
**** [View Question](https://www.examtopics.com/discussions/databricks/view/44244-exam-az-500-topic-3-question-18-discussion/) SIMULATION - You need to ensure that connections from the Internet to VNET1\subnet0 are allowed only over TCP port 7777. The solution must use only currently deployed resources. To complete this task, sign in to the Azure portal. **
** To allow only TCP port 7777 connections from the internet to VNET1\subnet0, you must configure the Network Security Group (NSG) associated with subnet0 in the Azure portal. This involves creating a new inbound security rule with the following settings: * **Source:** Service Tag: Internet * **Source port ranges:** * (Any) * **Destination:** *(This point is debated in the discussion. The suggested answer uses "Any," while Viggy1212 argues for using only the subnet0 CIDR notation for a more precise solution.)* * **Destination port ranges:** 7777 * **Protocol:** TCP * **Action:** Allow * **Priority:** A low number (e.g., 100); NSG rules are evaluated in ascending priority order, so lower numbers are processed first. * **Name:** A descriptive name (e.g., Allow_TCP_7777_from_Internet) The steps involve navigating to the NSG associated with subnet0 through the Azure portal, adding a new inbound security rule, and configuring it with these settings. Note that the correct destination setting is debated in the discussion, with one user suggesting using "Any" and another advocating for using the subnet0's CIDR notation for more precise control. The provided suggested answer uses "Any". **Why other options are incorrect (or less precise):** There's no mention of other explicit options in the provided text. However, implicitly, any rule that doesn't restrict the source to "Internet" or the destination port to 7777, or that doesn't specify TCP as the protocol would be incorrect, as it would not fulfill the requirements stated in the question. Using a CIDR notation for the destination instead of "Any" would be more precise but is not explicitly confirmed as the only correct method in the provided information. The discussion highlights this point of contention.
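The portal settings above correspond to a single entry in the NSG's `securityRules` array. A sketch of that entry (the rule name is illustrative, and per the debate in the discussion, `destinationAddressPrefix` could be narrowed to subnet0's CIDR instead of `*`):

```json
{
  "name": "Allow_TCP_7777_from_Internet",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "Internet",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "7777"
  }
}
```

Because the task says to use only currently deployed resources, this rule is added to the existing NSG on subnet0 rather than creating a new NSG.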
200
[View Question](https://www.examtopics.com/discussions/databricks/view/44362-exam-az-500-topic-2-question-49-discussion/) Your network contains an on-premises Active Directory domain named adatum.com that syncs to Azure Active Directory (Azure AD). Azure AD Connect is installed on a domain member server named Server1. You need to ensure that a domain administrator for the adatum.com domain can modify the synchronization options. The solution must use the principle of least privilege. Which Azure AD role should you assign to the domain administrator? A. Security administrator B. Global administrator C. User administrator
B. Global administrator The discussion indicates that modifying Azure AD Connect synchronization options requires a Global administrator role in Azure AD, even for on-premises domain administrators. This is because these settings affect the entire Azure AD tenant. While seemingly counterintuitive to the principle of least privilege, it's presented as the minimum necessary role to accomplish the task. Why other options are incorrect: * **A. Security administrator:** This role doesn't have the necessary permissions to modify Azure AD Connect synchronization options. * **C. User administrator:** This role is too restrictive; it lacks the authority to manage synchronization settings. Note: The discussion shows a consensus that the Global Administrator role is required, despite the apparent conflict with the "principle of least privilege." There is no presented alternative solution that would allow for less privileged access.
201
[View Question](https://www.examtopics.com/discussions/databricks/view/44518-exam-az-500-topic-4-question-44-discussion/) You have an Azure resource group that contains 100 virtual machines. You have an initiative named Initiative1 that contains multiple policy definitions. Initiative1 is assigned to the resource group. You need to identify which resources do NOT match the policy definitions. What should you do? A. From Azure Security Center, view the Regulatory compliance assessment. B. From the Policy blade of the Azure Active Directory admin center, select Compliance. C. From Azure Security Center, view the Secure Score. D. From the Policy blade of the Azure Active Directory admin center, select Assignments.
A. From Azure Security Center, view the Regulatory compliance assessment. Explanation: The question asks how to identify resources that *do not* match policy definitions within a resource group. Regulatory Compliance assessment in Azure Security Center provides a view of resource compliance with assigned policies, showing which resources are non-compliant. Why other options are incorrect: * **B. From the Policy blade of the Azure Active Directory admin center, select Compliance:** This is incorrect. Azure Active Directory (Azure AD) focuses on identity and access management, not resource-level policy compliance within a resource group. While the Azure portal is accessed via portal.azure.com, the Azure AD admin center is a separate section, accessible via a different URL. The discussion highlights the confusion around this point. * **C. From Azure Security Center, view the Secure Score:** Secure Score provides an overall security posture assessment. While related to security, it doesn't pinpoint specific resource non-compliance with defined policies. * **D. From the Policy blade of the Azure Active Directory admin center, select Assignments:** This shows policy assignments, but not the compliance status of individual resources against those assignments. Note: The discussion shows considerable disagreement on the correct answer. Some users incorrectly advocate for option B, mistakenly believing the Azure AD admin center handles resource-level policy compliance. The correct answer, A, is supported by multiple users who correctly identify Azure Security Center's Regulatory Compliance assessment as the appropriate tool.
202
[View Question](https://www.examtopics.com/discussions/databricks/view/45345-exam-az-500-topic-3-question-58-discussion/) You have an Azure subscription that contains two virtual machines named VM1 and VM2 that run Windows Server 2019. You are implementing Update Management in Azure Automation. You plan to create a new update deployment named Update1. You need to ensure that Update1 meets the following requirements: ✑ Automatically applies updates to VM1 and VM2. ✑ Automatically adds any new Windows Server 2019 virtual machines to Update1. What should you include in Update1? A. a security group that has a Membership type of Assigned B. a security group that has a Membership type of Dynamic Device C. a dynamic group query D. a Kusto query language query
The correct answer is **C. a dynamic group query**. Update Management in Azure Automation allows you to target groups of Azure VMs (or non-Azure VMs) for update deployments. A dynamic group, defined by a query evaluated at deployment time, is the appropriate method to meet both requirements. It automatically includes VM1 and VM2 initially and automatically adds any new Windows Server 2019 VMs that are added to the Azure subscription. Why other options are incorrect: * **A. a security group that has a Membership type of Assigned:** This is a static group. Adding new VMs would require manual updates to the security group's membership. It does not meet the requirement of automatically adding new VMs. * **B. a security group that has a Membership type of Dynamic Device:** This relates to Azure AD groups, but Update Management doesn't directly use Azure AD groups for targeting updates. It uses dynamic queries based on Azure VM metadata. * **D. a Kusto query language query:** Kusto Query Language (KQL) is used with Azure Log Analytics, not for defining update deployment groups within Update Management. Note: The provided discussion shows unanimous agreement on the correct answer.
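The key property of a dynamic group is that its query is re-evaluated at each deployment run, so new matching VMs join automatically. A toy model in Python (illustrative only; real Update Management dynamic groups are defined by an Azure query over subscriptions, resource groups, locations, and tags, not by OS name):

```python
# Simulated VM inventory; in Azure this would come from the subscription.
inventory = [
    {"name": "VM1", "os": "Windows Server 2019"},
    {"name": "VM2", "os": "Windows Server 2019"},
]

def resolve_dynamic_group(vms, os_name):
    """Re-evaluated at deployment time, so newly added VMs are picked up."""
    return [vm["name"] for vm in vms if vm["os"] == os_name]

print(resolve_dynamic_group(inventory, "Windows Server 2019"))  # ['VM1', 'VM2']

# A new Windows Server 2019 VM added later is included on the next run,
# with no change to the group definition:
inventory.append({"name": "VM3", "os": "Windows Server 2019"})
print(resolve_dynamic_group(inventory, "Windows Server 2019"))  # ['VM1', 'VM2', 'VM3']
```

An assigned (static) group would instead be a fixed list, which is why option A fails the "automatically adds new VMs" requirement.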
203
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46361-exam-az-500-topic-4-question-19-discussion/) You have an Azure subscription that contains the Azure Log Analytics workspaces shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035700001.png) (Table shows Workspace1 and Workspace2) You create the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035700002.png) (Table shows VM1 connected to Workspace1, VM2 connected to Workspace2, VM3 connected to Workspace1, VM4 connected to Workspace2. All VMs are Windows VMs.) You plan to use Azure Sentinel to monitor Windows Defender Firewall on the virtual machines. Which virtual machines can you connect to Azure Sentinel? A. VM1 only B. VM1 and VM3 only C. VM1, VM2, VM3, and VM4 D. VM1 and VM2 only **
** C. VM1, VM2, VM3, and VM4 **Explanation:** While each VM is initially connected to a specific Log Analytics workspace, Azure Sentinel can monitor data from multiple workspaces. The Log Analytics agent (or MMA) on each VM can be configured to forward data to multiple workspaces. Therefore, even though VM2 and VM4 are connected to Workspace2, their Windows Defender Firewall data can be configured to be sent to the workspace Azure Sentinel uses (potentially Workspace1, as mentioned in one of the discussions, but not necessarily). This allows Azure Sentinel to monitor all four VMs. **Why other options are incorrect:** * **A. VM1 only:** Incorrect because Azure Sentinel can collect data from multiple workspaces. * **B. VM1 and VM3 only:** Incorrect, for the same reason as A. VM2 and VM4 can also be connected. * **D. VM1 and VM2 only:** Incorrect, for the same reason as A and B. VM3 and VM4 can also be connected. **Note:** The discussion highlights that while sending duplicate data to multiple workspaces incurs extra charges, it is technically feasible to connect all VMs to Azure Sentinel. There is a consensus in the discussion that option C is correct.
204
[View Question](https://www.examtopics.com/discussions/databricks/view/46367-exam-az-500-topic-3-question-51-discussion/) DRAG DROP - You are configuring network connectivity for two Azure virtual networks named VNET1 and VNET2. You need to implement VPN gateways for the virtual networks to meet the following requirements: ✑ VNET1 must have six site-to-site connections that use BGP. ✑ VNET2 must have 12 site-to-site connections that use BGP. ✑ Costs must be minimized. Which VPN gateway SKU should you use for each virtual network? To answer, drag the appropriate SKUs to the correct networks. Each SKU may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: (Image shows a drag-and-drop interface with options for various VPN Gateway SKUs)
VNET1 should use the **Standard** VPN Gateway SKU, and VNET2 should use the **HighPerformance** VPN Gateway SKU. The question specifies that costs must be minimized while supporting a high number of BGP site-to-site connections. The Standard SKU supports up to 10 site-to-site connections with BGP, sufficient for VNET1's six connections. Because a virtual network can have only one VPN gateway, the SKU itself must support all 12 of VNET2's connections; HighPerformance is the least expensive SKU that does. Choosing the minimum SKU necessary for each requirement minimizes costs as instructed. The discussion highlights that remembering the specifics of Azure SKUs and their capabilities is challenging due to the large number of options and frequent updates. This reinforces the importance of understanding the basic guidelines and limitations of each SKU to efficiently solve this type of problem. Other options are incorrect because they either fail to meet the connection requirement (e.g., using a Basic SKU, which doesn't support BGP), or they would be unnecessarily expensive by using a larger capacity SKU than needed.
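The selection logic reduces to "cheapest SKU that meets the tunnel count and supports BGP." A small sketch (the tunnel limits reflect the legacy gateway SKUs as described above, and the relative costs are rough illustration, not current pricing):

```python
# (name, max site-to-site tunnels, relative cost) - ordered cheapest first.
# Limits are approximations for the legacy SKU generation.
SKUS = [
    ("Basic",           10, 1),   # Basic does not support BGP
    ("Standard",        10, 2),
    ("HighPerformance", 30, 5),
]

def cheapest_sku(tunnels, needs_bgp=True):
    """Return the least expensive SKU satisfying the requirements."""
    for name, max_tunnels, _cost in SKUS:  # already ordered by cost
        if needs_bgp and name == "Basic":
            continue  # Basic SKU lacks BGP support
        if tunnels <= max_tunnels:
            return name
    raise ValueError("no SKU satisfies the requirement")

print(cheapest_sku(6))   # Standard        (VNET1: 6 BGP connections)
print(cheapest_sku(12))  # HighPerformance (VNET2: 12 BGP connections)
```

The same "smallest SKU that fits" reasoning applies to the newer VpnGw1-5 generation, only with different limits and prices.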
205
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46374-exam-az-500-topic-2-question-27-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0009600001.jpg) *(Image shows a table of users, User1, User2, User3, belonging to different groups)* From Azure AD Privileged Identity Management (PIM), you configure the settings for the Security Administrator role as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0009700001.jpg) *(Image shows PIM settings, including a 5-hour activation time and 3-month expiration time)* From PIM, you assign the Security Administrator role to the following groups: ✑ Group1: Active assignment type, permanently assigned ✑ Group2: Eligible assignment type, permanently eligible For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0009800003.png) *(Image shows a table with three statements to be evaluated as Yes or No: Box 1: User1 can activate the role in 5 hours. Box 2: User2 can activate the role. Box 3: User3 can activate the role.)* **
** Box 1: No. User1 is a member of Group1, which has an active, permanently assigned role. Therefore, User1 already possesses the Security Administrator role; there is no activation needed. Box 2: Yes. User2 is a member of Group2, which has an eligible, permanently eligible role. While there's a 5-hour activation window, User2 can activate the role. No approval is needed according to the provided information. The activation time is not the expiration time of the role. The 3-month expiration applies after activation. Box 3: No. User3 is a member of both Group1 (active, permanently assigned) and Group2 (eligible, permanently eligible). However, since User3 already has permanent access via Group1's active assignment, attempting to activate the role via Group2 would result in an error. Activating an already active role isn't possible. **Why other options are incorrect:** The discussion highlights some confusion regarding the 5-hour timeframe. Some users initially interpreted it as the role's expiration time, but the correct interpretation is the activation time for eligible assignments. The crucial point is understanding the difference between "active" and "eligible" assignments within Azure AD PIM. An active assignment grants immediate access; an eligible assignment requires activation. The disagreement in the discussion reflects this initial misinterpretation.
206
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46459-exam-az-500-topic-4-question-20-discussion/) You have an Azure subscription that contains a user named Admin1 and a resource group named RG1. In Azure Monitor, you create the alert rules shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035800001.png) Admin1 performs the following actions on RG1: ✑ Adds a virtual network named VNET1 ✑ Adds a Delete lock named Lock1 Which rules will trigger an alert as a result of the actions of Admin1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035900001.jpg) **
** Rule 2 and Rule 4 will trigger. **Explanation:** Adding a virtual network (VNET1) and adding a delete lock (Lock1) are both considered administrative actions within Azure Resource Manager. Rule 2 and Rule 4 are configured to trigger on administrative actions within the resource group RG1. Therefore, both actions by Admin1 will trigger alerts in these two rules. The image of the suggested answer confirms this. **Why other options are incorrect:** There is no information provided about other potential rules that might trigger. The question focuses specifically on the provided alert rules, and the discussion confirms the selection of Rule 2 and Rule 4 as correct. **Note:** The discussion shows some disagreement on the specifics, however the general consensus and the suggested answer converge on Rules 2 and 4 as the correct answer.
207
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46499-exam-az-500-topic-4-question-16-discussion/) You have an Azure subscription that contains the alerts shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035000001.jpg) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035100001.png) **
** The provided images show a multiple-choice question about changing the state of Azure alerts. Based on the discussion, there's disagreement on the correct answer. However, the consensus from multiple users who claim to have tested this is that an Azure alert can be changed to *any* state (New, Acknowledged, or Closed) regardless of its current state. Therefore, Alert1 (Acknowledged) can be changed to New or Closed, and Alert2 (Closed) can be changed to New or Acknowledged. **Why other options are incorrect (or potentially incorrect, given the discussion):** The suggested answer in the original post and some responses in the discussion suggest limitations on changing alert states. However, the majority of the discussion and the testing conducted by multiple users indicate that there are no such restrictions. The original suggested answer is therefore likely incorrect. **Note:** There is significant disagreement and conflicting information within the provided discussion regarding the correct answer to this question. While the answer provided here reflects the consensus from users who claim to have tested the functionality, it's important to acknowledge this uncertainty. The provided Microsoft documentation link in the discussion further supports this ambiguity.
208
[View Question](https://www.examtopics.com/discussions/databricks/view/46517-exam-az-500-topic-5-question-28-discussion/) You have an Azure subscription named Sub1 that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0049800001.png) You need to ensure that you can provide VM1 with secure access to a database on SQL1 by using a contained database user. What should you do? A. Enable a managed identity on VM1. B. Create a secret in KV1. C. Configure a service endpoint on SQL1. D. Create a key in KV1.
A. Enable a managed identity on VM1. A managed identity provides a secure way for Azure services (like a VM) to access other Azure resources (like a SQL database) without requiring secrets like passwords or connection strings. A contained database user in SQL Server allows authentication based on the managed identity, thereby providing secure access. Why other options are incorrect: * **B. Create a secret in KV1:** While Key Vault can store secrets, this approach is less secure than using managed identities. It requires managing and rotating secrets, increasing the risk of exposure. The question specifically requests a *secure* access method. * **C. Configure a service endpoint on SQL1:** A service endpoint enhances network security but doesn't directly address the authentication requirement for accessing the database. It's a complementary security measure but not the primary solution. * **D. Create a key in KV1:** Keys in Key Vault are for encryption and decryption, not for authentication. This is irrelevant to the problem of securely accessing a database. Note: The discussion shows some disagreement on the correct answer. While the suggested answer and highly upvoted responses favor option A, some users initially suggested option B. The provided explanation clarifies why option A is the more secure and appropriate approach for this scenario, especially given the requirement for using a contained database user.
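To see why no stored secret is involved: the VM requests an Azure AD access token from its managed identity endpoint and hands that token to the SQL driver, while on the database side the contained database user is created with `CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;`. A stdlib-only sketch of the token-packing step commonly used with the ODBC driver (the attribute value 1256, `SQL_COPT_SS_ACCESS_TOKEN`, and the length-prefixed UTF-16-LE layout follow the widely documented pattern; the token string below is a placeholder, not a real token):

```python
import struct

SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC pre-connect attribute for AAD tokens

def pack_access_token(token):
    """Encode an Azure AD access token the way the SQL ODBC driver expects:
    a 4-byte little-endian length prefix followed by UTF-16-LE token bytes."""
    raw = token.encode("utf-16-le")
    return struct.pack(f"<I{len(raw)}s", len(raw), raw)

# Placeholder token: a real one would come from the VM's managed identity
# (IMDS endpoint) for the https://database.windows.net/ resource.
packed = pack_access_token("eyJ-placeholder-token")
# The packed blob would then be passed as, e.g.:
#   pyodbc.connect(conn_str, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: packed})
```

The point of the sketch is that the only credential in play is a short-lived token issued to the VM's identity; nothing to store, rotate, or leak.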
209
[View Question](https://www.examtopics.com/discussions/databricks/view/46526-exam-az-500-topic-2-question-31-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table. | User Name | Group1 | Group2 | Sign-in Risk Level | MFA Enabled | |---------------|---------|---------|----------------------|-------------| | User1 | Yes | No | High | No | | User2 | Yes | Yes | High | No | | User3 | No | Yes | Medium | No | You create and enforce an Azure AD Identity Protection sign-in risk policy that has the following settings: * Assignments: Include Group1, exclude Group2 * Conditions: Sign-in risk level: Medium and above * Access: Allow access, Require multi-factor authentication You need to identify what occurs when the users sign in to Azure AD. What should you identify for each user?
* **User1:** Prompted for MFA registration. The policy includes Group1, User1 is in Group1, and their sign-in risk is High, so the policy applies: access is allowed but MFA is required. Since User1 does not have MFA enabled, they will be prompted to register for and then complete MFA. * **User2:** Allowed access without an MFA prompt. User2 is in both Group1 and Group2. Because the policy excludes Group2, the exclusion overrides the inclusion, so the policy does not apply to User2 despite the High sign-in risk. * **User3:** Allowed access. User3 is not in Group1, so the policy does not apply to them. **Why other options are incorrect:** The discussion highlights some disagreement on the specific behavior when the policy *requires* MFA but the user has not registered for it. Some argue the user will be blocked; others that they will be prompted to register. The answer above reflects the scenario where the user is *prompted* to register for MFA; a configuration in which the user is blocked instead is possible depending on specific tenant settings. The answer prioritizes the more common behavior based on the question and documentation.
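The include/exclude/risk evaluation described above can be sketched as a simple predicate (an illustration of the "exclusion overrides inclusion" rule, not Azure's actual implementation):

```python
# Hedged sketch of sign-in risk policy scoping: a sign-in is subject to the
# policy only if the user is in an included group, is NOT in any excluded
# group, and the sign-in risk meets the configured threshold.
RISK_ORDER = {"none": 0, "low": 1, "medium": 2, "high": 3}

def policy_applies(user_groups, risk, include, exclude, min_risk="medium"):
    """True if the policy applies (access allowed only with MFA)."""
    if user_groups & exclude:        # exclusions always override inclusions
        return False
    if not (user_groups & include):  # user must be in an included group
        return False
    return RISK_ORDER[risk] >= RISK_ORDER[min_risk]

include, exclude = {"Group1"}, {"Group2"}
print(policy_applies({"Group1"}, "high", include, exclude))            # True: User1 must satisfy MFA
print(policy_applies({"Group1", "Group2"}, "high", include, exclude))  # False: User2 is excluded
print(policy_applies({"Group2"}, "medium", include, exclude))          # False: User3 not in Group1
```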
210
[View Question](https://www.examtopics.com/discussions/databricks/view/46531-exam-az-500-topic-12-question-3-discussion/) HOTSPOT - You need to deploy Microsoft Antimalware to meet the platform protection requirements. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0020600002.jpg) *(Image shows two boxes, Box 1 and Box 2, with empty options)* Show Suggested Answer Hide Answer Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0020700001.jpg) *(Image shows Box 1 selected as "DeployIfNotExists" and Box 2 selected as "The Scope")* Scenario: Microsoft Antimalware must be installed on the virtual machines in RG1. RG1 is a resource group that contains Vnet1, VM0, and VM1. Box 1: DeployIfNotExists - DeployIfNotExists executes a template deployment when the condition is met. Azure policy definition Antimalware Incorrect Answers: Append: Append is used to add additional fields to the requested resource during creation or update. A common example is adding tags on resources such as costCenter or specifying allowed IPs for a storage resource. Deny: Deny is used to prevent a resource request that doesn't match defined standards through a policy definition and fails the request. Box 2: The Create a Managed Identity setting When Azure Policy runs the template in the deployIfNotExists policy definition, it does so using a managed identity. Azure Policy creates a managed identity for each assignment, but must have details about what roles to grant the managed identity. Reference: https://docs.microsoft.com/en-us/azure/governance/policy/concepts/effects
The correct answer is to select **DeployIfNotExists** for Box 1 and **Scope (Resource Group)** for Box 2. * **Box 1: DeployIfNotExists:** This effect is appropriate because it will deploy the Antimalware template only if the condition (lack of Antimalware) is met on the target VMs. It avoids unnecessary deployments if Antimalware is already present. * **Box 2: Scope (Resource Group):** The scope needs to be set to the Resource Group (RG1) to ensure that the policy is applied to all virtual machines within that resource group. While the discussion shows some disagreement on the exact terminology ("Create a Managed Identity setting" vs. "Scope"), the consensus points to the need for specifying the scope at the Resource Group level to target VM0 and VM1 correctly. A managed identity is *required* for DeployIfNotExists to function, but it's implicitly handled by Azure Policy; the user doesn't explicitly create it. The core issue is defining *where* the policy applies, which is the scope. **Why other options are incorrect:** * **Append:** Append is used to add information to existing resources, not to deploy new ones like Antimalware. * **Deny:** Deny prevents actions, it doesn't deploy software. * The discussion shows some confusion regarding the exact wording for Box 2, with some suggesting "Create a Managed Identity setting" and others correctly identifying the correct answer as "Scope." However, the overall consensus is that the scope needs to be defined to the RG1 level.
211
[View Question](https://www.examtopics.com/discussions/databricks/view/46538-exam-az-500-topic-2-question-33-discussion/) You work at a company named Contoso, Ltd. that has the offices shown in the following table. **Office Location | IP Address Range** ---|---| London | 192.168.1.0/24 Boston | 10.0.0.0/8 Seattle | 172.16.0.0/16 Contoso has an Azure Active Directory (Azure AD) tenant named contoso.com. All contoso.com users have Azure Multi-Factor Authentication (MFA) enabled. The tenant contains the users shown in the following table. **User Name | Location** ---|---| User1 | London User2 | Boston User3 | Seattle The multi-factor authentication settings for contoso.com are configured as shown in the following image (image shows a checkbox labeled "Skip multi-factor authentication for trusted locations" which is unchecked, and a text box for entering trusted IP ranges, which is empty). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. User1 will be prompted for MFA. 2. User2 will be prompted for MFA. 3. User3 will be prompted for MFA.
Yes, Yes, Yes. Explanation: * **User1 (London):** The "Skip multi-factor authentication for trusted locations" checkbox is unchecked and the trusted IP text box is empty, so no location is trusted and MFA is required for all users, including User1. * **User2 (Boston):** The same logic applies: with the checkbox unchecked and no IP ranges specified, User2 will be prompted for MFA. * **User3 (Seattle):** Likewise, User3 will be prompted for MFA. **Why other options are incorrect:** The discussion shows disagreement on the correct answer, with some favoring No-No-Yes based on a misunderstanding of the checkbox and text box. With the trusted-locations feature effectively disabled (checkbox unchecked and no ranges entered), MFA is enforced for every user regardless of source IP address, so all three users are prompted.
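The trusted-locations logic can be illustrated with the stdlib `ipaddress` module. This is a sketch of the evaluation, not Azure's implementation; the final call shows what would happen had the feature been enabled with the London range trusted:

```python
import ipaddress

def mfa_required(client_ip, skip_for_trusted, trusted_ranges):
    """True if the sign-in should be challenged for MFA (illustrative)."""
    if not skip_for_trusted:
        return True  # checkbox unchecked: every user is prompted
    ip = ipaddress.ip_address(client_ip)
    return not any(ip in ipaddress.ip_network(r) for r in trusted_ranges)

# As configured in the question: checkbox unchecked, text box empty.
for ip in ("192.168.1.10", "10.1.2.3", "172.16.5.5"):
    print(ip, mfa_required(ip, False, []))  # True for all three users
# Had the feature been enabled with the London range trusted:
print(mfa_required("192.168.1.10", True, ["192.168.1.0/24"]))  # False
```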
212
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46542-exam-az-500-topic-3-question-59-discussion/) You have the Azure virtual machines shown in the following table. | VM Name | Status | |---|---| | VM1 | Running | | VM2 | Running | | VM3 | Stopped | | VM4 | Running | For which virtual machines can you enable Update Management? A. VM2 and VM3 only B. VM2, VM3, and VM4 only C. VM1, VM2, and VM4 only D. VM1, VM2, VM3, and VM4 E. VM1, VM2, and VM3 only **
** C. VM1, VM2, and VM4 only Update Management requires the virtual machine to be running so the agent can be enabled and communicate with Azure Automation. VM3 is stopped, which prevents the initial onboarding process, so Update Management cannot be enabled on it. VM1, VM2, and VM4 are running and can have Update Management enabled. **Why other options are incorrect:** * **A. VM2 and VM3 only:** Incorrect because VM3 is stopped. * **B. VM2, VM3, and VM4 only:** Incorrect because VM3 is stopped. * **D. VM1, VM2, VM3, and VM4:** Incorrect because VM3 is stopped. * **E. VM1, VM2, and VM3 only:** Incorrect because VM3 is stopped. **Note:** There is disagreement in the discussion over whether Update Management can operate on a stopped VM *after* initial onboarding. Some sources claim that while onboarding requires a running VM, updates can be assessed and applied even when the VM is later stopped. The question, however, asks about *enabling* Update Management, which, per the majority view and the most compelling arguments in the discussion, requires a running VM.
213
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46543-exam-az-500-topic-13-question-2-discussion/) You are evaluating the security of the network communication between the virtual machines in Sub2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0021700001.jpg) NSG1, NSG2, NSG3, and NSG4 have the outbound security rules shown in the following table: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0021800001.png) Note: Sub2 contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0021900001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0021900002.png) **
** Yes, Yes, No **Explanation:** * **Box 1 (Can you ping the public IP address of VM2 from VM1?): Yes.** The outbound rules for NSG1 (attached to VM1) allow all outbound traffic. Therefore, pinging the public IP of VM2 (which is outbound traffic from VM1's perspective) will succeed. * **Box 2 (Can VM3 ping VM4?): Yes.** VM3 is in Subnet12, which has no NSG attached. Traffic within the same VNet, and without an intervening NSG, is permitted by default. * **Box 3 (Can VM4 ping VM5?): No.** VM5 resides in a separate VNet. Without VNet peering or other explicit connectivity mechanisms, communication between VMs in different VNets is not possible. **Why other options are incorrect:** The discussion shows some disagreement, particularly regarding Box 1. Some argue that since the destination is a public IP, the source would appear to be the internet, and therefore be blocked. However, the provided NSG rules clearly show *outbound* access is permitted for all traffic from VM1. The key is understanding the direction of the traffic – it's outbound from VM1, not inbound. The suggested answer correctly reflects the functioning of the depicted NSG configuration.
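The first-match-by-priority evaluation underlying these answers can be sketched as follows. The default rule names and priorities (AllowVnetOutBound 65000, AllowInternetOutBound 65001, DenyAllOutBound 65500) match Azure's documented outbound defaults; the evaluation itself is a simplified illustration, and a subnet with no NSG at all simply has no custom rules to block anything:

```python
# Hedged sketch of NSG outbound evaluation: rules are checked in ascending
# priority order and the first matching rule decides; Azure's built-in
# default rules apply when no custom rule matches.
DEFAULT_OUTBOUND = [
    (65000, "VirtualNetwork", "Allow"),  # AllowVnetOutBound
    (65001, "Internet", "Allow"),        # AllowInternetOutBound
    (65500, "Any", "Deny"),              # DenyAllOutBound
]

def evaluate_outbound(custom_rules, dest_tag):
    """custom_rules: list of (priority, dest_tag_or_'Any', 'Allow'|'Deny')."""
    for _priority, tag, action in sorted(custom_rules + DEFAULT_OUTBOUND):
        if tag in ("Any", dest_tag):
            return action
    return "Deny"

# Subnet12 has no NSG: intra-VNet traffic is allowed by default.
print(evaluate_outbound([], "VirtualNetwork"))                 # Allow
# NSG1 permits all outbound, so VM1 can reach VM2's public IP.
print(evaluate_outbound([(100, "Any", "Allow")], "Internet"))  # Allow
```

Note that no NSG rule can help VM4 reach VM5: with the VMs in separate, unpeered VNets, the traffic never has a route, regardless of how the NSGs evaluate.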
214
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46674-exam-az-500-topic-2-question-32-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table. (See image 1 below) In Azure AD Privileged Identity Management (PIM), the Role settings for the Contributor role are configured as shown in the exhibit. (See image 2 below) You assign users the Contributor role on May 1, 2019 as shown in the following table. (See image 3 below) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (See image 4 below) **Image 1:** (This image contains a table of users, their email addresses, and whether MFA is enabled or disabled. The exact data is not provided in the text.) **Image 2:** (This image shows the Azure AD PIM role settings for the Contributor role. The exact settings are not provided in the text. It's implied that the settings relate to eligibility and activation durations for the PIM role.) **Image 3:** (This image shows a table assigning users the Contributor role on May 1, 2019. The exact data is not provided in the text.) **Image 4:** (This image is a multiple choice question that is not described in the text, beyond the fact it asks to select "Yes" or "No" for each statement. The statements themselves are not included in the provided text.) **
** The provided text does not give the statements from Image 4, making it impossible to provide a definitive answer. The discussion highlights a key point of contention: the impact of MFA settings (enabled/disabled) on PIM role assignments. `hang10z` argues that MFA settings are irrelevant because PIM usage necessitates licenses (EMS E5 or P2 AD) that inherently require MFA setup. `rgullini` counters that this depends on whether "Security defaults" are enabled in Azure AD. If disabled, then the MFA configuration applies. Therefore, a correct answer depends entirely on the unspecified statements in Image 4 and the underlying configuration of "Security Defaults" within the Azure AD tenant. Without this information, a precise answer cannot be given. **WHY OTHER OPTIONS ARE INCORRECT (N/A):** Cannot assess the correctness of other options due to insufficient information provided in the prompt.
215
[View Question](https://www.examtopics.com/discussions/databricks/view/46700-exam-az-500-topic-4-question-75-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0042800002.png) You create the Azure Storage accounts shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0042900001.png) You need to configure auditing for SQL1. Which storage accounts and Log Analytics workspaces can you use as the audit log destination? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0042900002.jpg)
The correct answer is Storage Account 2 and Log Analytics workspaces 1, 2, and 3. **Explanation:** * **Storage Account:** SQL1 is located in the East US region. The Azure documentation states that, for deployments from the Azure portal, the audit storage account must be in the same region as the database. Storage Account 1 and Storage Account 2 are both in East US, so both are regionally valid. However, Storage Account 1 uses the "Cool" access tier, which several commenters argue is a poor fit for audit logs that are written and read frequently, making Storage Account 2 (presumably on a "Hot" or "Premium" tier) the better choice. The suitability of Storage Account 1 remains debated in the discussion. * **Log Analytics Workspace:** Log Analytics workspaces can be in any region, independent of the database location, so workspaces 1, 2, and 3 are all valid options. **Why other options are incorrect (or debated):** Storage Account 1 is geographically valid but argued to be less suitable because of its "Cool" tier; the answer reflects the consensus that Storage Account 2 is preferred for performance reasons, while any storage account outside the database's region would not be selectable for portal-based configuration.
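The regional constraint can be expressed as a simple filter (a sketch; the account and workspace names and regions below are illustrative placeholders, not values taken from the question's images):

```python
# Hedged sketch of the destination constraint discussed above: for
# portal-based configuration the storage account must be in the same region
# as the SQL server, while a Log Analytics workspace may live in any region.
def valid_audit_destinations(server_region, storage_accounts, workspaces):
    """storage_accounts / workspaces: dicts mapping name -> region."""
    storages = sorted(n for n, r in storage_accounts.items()
                      if r == server_region)
    return storages, sorted(workspaces)

storages, las = valid_audit_destinations(
    "eastus",
    {"storage1": "eastus", "storage2": "eastus", "storage3": "westus"},
    {"LAW1": "eastus", "LAW2": "westus", "LAW3": "centralus"},
)
print(storages)  # ['storage1', 'storage2']
print(las)       # ['LAW1', 'LAW2', 'LAW3']
```

The sketch deliberately ignores the access-tier debate; the tier argument is about suitability, not validity.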
216
[View Question](https://www.examtopics.com/discussions/databricks/view/46702-exam-az-500-topic-4-question-13-discussion/) You have 10 virtual machines on a single subnet that has a single network security group (NSG). You need to log the network traffic to an Azure Storage account. What should you do? A. Install the Network Performance Monitor solution. B. Create an Azure Log Analytics workspace. C. Enable diagnostic logging for the NSG. D. Enable NSG flow logs.
The correct answer is **D. Enable NSG flow logs.** NSG flow logs capture information about network traffic allowed or denied by the NSG. This data is then sent to a storage account for later analysis. This directly addresses the requirement of logging network traffic. The discussion reveals some disagreement regarding the question's original phrasing. Some users claim the question originally asked for *two* actions, and suggested enabling Azure Network Watcher in addition to NSG flow logs. However, the question presented here only asks for a single action. Therefore, only enabling NSG flow logs is the correct answer based on the provided text. Why other options are incorrect: * **A. Install the Network Performance Monitor solution:** While Network Performance Monitor can provide network performance insights, it doesn't directly log the network traffic allowed/denied by an NSG to a storage account. * **B. Create an Azure Log Analytics workspace:** A Log Analytics workspace is a data repository, but it doesn't capture network traffic data on its own. It requires a data source, such as NSG flow logs, to collect this information. * **C. Enable diagnostic logging for the NSG:** This might log some NSG-related events, but it is not specifically designed for logging network traffic flow.
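Once flow logging is enabled, the records written to the storage account contain comma-separated "flow tuples". A sketch of parsing the version-1 tuple layout (the sample value is illustrative; version 2 of the schema appends flow state and byte/packet counters):

```python
from collections import namedtuple

# Hedged sketch: each v1 NSG flow log tuple has eight comma-separated
# fields: timestamp, source/destination IP, source/destination port,
# protocol (T/U), direction (I/O), and decision (A/D).
Flow = namedtuple(
    "Flow",
    "timestamp src_ip dst_ip src_port dst_port protocol direction decision",
)

def parse_flow_tuple(raw):
    """Parse one v1 flow tuple string into a Flow record."""
    ts, src, dst, sport, dport, proto, direction, decision = raw.split(",")
    return Flow(int(ts), src, dst, int(sport), int(dport),
                proto, direction, decision)

flow = parse_flow_tuple("1542110377,10.0.0.4,13.67.143.118,44931,443,T,O,A")
print(flow.dst_port, flow.decision)  # 443 A: an allowed outbound TCP flow
```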
217
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46749-exam-az-500-topic-4-question-24-discussion/) You have an Azure subscription that contains the resources shown in the following table. (Image showing a table of Azure resources, including VM5 with a private IP of 10.1.0.4 and no public IP.) VM5 has just in time (JIT) VM access configured. (Image showing JIT VM access configuration for VM5.) You enable JIT VM access for VM5. NSG1 has the inbound rules shown in the following exhibit. (Image showing NSG1 inbound rules, including rule 100 allowing RDP from a specific source.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. If rule 100 is manually deleted, access will not work. 2. RDP is blocked because rule 100 is not in place. 3. Azure Bastion host enables RDP from the internet to VM5. **
** 1. **Yes:** If rule 100 is deleted, RDP access to VM5 will be blocked because it is the only rule allowing RDP traffic from the specified source. JIT access still depends on the underlying network security rules permitting the traffic. 2. **No:** RDP is *not* blocked. Rule 100 explicitly allows RDP traffic; the statement is worded to suggest that rule 100 is not in place, which is false. 3. **No:** Azure Bastion enables secure RDP access to VMs without public IPs, but *not* directly from the internet; it acts as a secure jump host in front of the virtual machine. **Explanation of why other options are incorrect:** The discussion shows disagreement on statement 2: JBAnalyst disagrees, suggesting the answer might not be correct. Based solely on the provided images and information, however, the interpretation above remains the most accurate: the images show rule 100 in place and explicitly allowing RDP, so RDP is unlikely to be blocked unless other rules or configurations not shown apply. The disagreement does not invalidate the answer given the provided evidence.
218
[View Question](https://www.examtopics.com/discussions/databricks/view/46780-exam-az-500-topic-4-question-18-discussion/) HOTSPOT - You have the hierarchy of Azure resources shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035400001.png) You create the Azure Blueprints definitions shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035400002.png) To which objects can you assign Blueprint1 and Blueprint2? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035500001.png) Show Suggested Answer Hide Answer Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0035600001.png)
Blueprint1 can be assigned to the Tenant Root Group, ManagementGroup1, and Subscription1. Blueprint2 can only be assigned to Subscription1. Explanation: While the suggested answer states that blueprints can only be assigned to subscriptions, the discussion highlights that this is not entirely accurate. According to the provided Microsoft documentation and user comments, published blueprint versions can be assigned to management groups *via the REST API*. However, the Azure portal does not provide this functionality. The question does not specify the method of assignment (portal or REST API). Therefore, considering both portal and REST API methods, the more complete answer reflects the broader assignment possibilities. Why other options are incorrect: The original suggested answer is too restrictive, as it ignores the REST API assignment method for management groups. A more complete and accurate answer needs to take into account both assignment methods to be fully correct. Note: There is a disagreement in the discussion regarding the assignment of blueprints to management groups. While the portal does not support this, the REST API does. The answer provided here reflects the most comprehensive understanding based on the available information.
219
[View Question](https://www.examtopics.com/discussions/databricks/view/46798-exam-az-500-topic-16-question-1-discussion/) You need to ensure that you can meet the security operations requirements. What should you do first? A. Turn on Auto Provisioning in Security Center. B. Integrate Security Center and Microsoft Cloud App Security. C. Upgrade the pricing tier of Security Center to Standard. D. Modify the Security Center workspace configuration.
C. Upgrade the pricing tier of Security Center to Standard. The suggested answer is C, upgrading to the Standard tier of Azure Security Center (ASC). The reasoning provided in the discussion highlights that while some basic security features are available in the free tier, the Standard tier offers more extensive capabilities for customizing operating system security configurations and meeting more comprehensive security operational requirements. Therefore, upgrading to the Standard tier is presented as the most effective first step to ensure those requirements are met. WHY OTHER OPTIONS ARE INCORRECT: * **A. Turn on Auto Provisioning in Security Center:** Auto-provisioning is a feature *within* Security Center, not a prerequisite for meeting basic security operational requirements. It's likely helpful *after* a sufficient security baseline is established. * **B. Integrate Security Center and Microsoft Cloud App Security:** While integration with other security tools is important for a comprehensive security posture, it's not the *first* step. A foundational level of security needs to be established within Security Center itself before integration with other services. * **D. Modify the Security Center workspace configuration:** Workspace configuration is also a step that would likely follow the initial establishment of a proper security baseline, rather than preceding it. NOTE: The discussion reveals some disagreement regarding the original question's context. One user suggests the question might be outdated, as the free/standard pricing tiers have changed. Another user clarifies that even within the free tier, some basic OS security enhancements are possible. However, the consensus leans towards upgrading to the standard tier for more comprehensive security operation capabilities.
220
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46799-exam-az-500-topic-2-question-22-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD) that has Single Sign-On (SSO) enabled. You have an Azure SQL Database instance that is configured to support Azure AD authentication. Database developers must connect to the database instance from their domain-joined device and authenticate by using their on-premises Active Directory account. You need to ensure that developers can connect to the instance by using Microsoft SQL Server Management Studio. The solution must minimize authentication prompts. Which authentication method should you recommend? A. Active Directory - Password B. Active Directory - Universal with MFA support C. SQL Server Authentication D. Active Directory - Integrated **
** D. Active Directory - Integrated **Explanation:** The question specifies a hybrid Azure AD configuration with SSO enabled and requires minimizing authentication prompts for database developers connecting from domain-joined devices using their on-premises Active Directory accounts. Active Directory - Integrated authentication leverages the user's already established Windows domain credentials. Since SSO is enabled, the user will be authenticated implicitly when they use the Integrated method, minimizing prompts. The discussion highlights this, with user cannibalcorpse correctly pointing out that "Active Directory - Integrated" is used with Azure AD credentials from a federated domain (which is implied by the SSO setup). user teehex reinforces this, mentioning Microsoft's recommendation to use Integrated Authentication in an SSO scenario. **Why other options are incorrect:** * **A. Active Directory - Password:** This requires the user to explicitly enter their credentials, contradicting the requirement to minimize authentication prompts. * **B. Active Directory - Universal with MFA support:** While this method uses Azure AD, it likely still involves additional authentication steps (MFA) beyond what's necessary, again violating the prompt minimization requirement. It's also not optimal in a hybrid setup with SSO because it doesn't leverage the already authenticated user. * **C. SQL Server Authentication:** This uses SQL Server's own authentication system, bypassing Azure AD entirely and thus is not applicable given the requirement to use the on-premises Active Directory account. **Note:** The provided discussion shows a consensus on the correct answer, but there is no explicit disagreement or conflicting opinions present.
221
[View Question](https://www.examtopics.com/discussions/databricks/view/46800-exam-az-500-topic-15-question-2-discussion/) HOTSPOT - Which virtual networks in Sub1 can User9 modify and delete in their current state? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0032400001.jpg) Scenario: Sub1 contains six resource groups named RG1, RG2, RG3, RG4, RG5, and RG6. User9 creates the virtual networks shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0032600001.png) Sub1 contains the locks shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0032600002.png) Note: As an administrator, you may need to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly. In the portal, the locks are called Delete and Read-only respectively. ✑ CanNotDelete means authorized users can still read and modify a resource, but they can't delete the resource. ✑ ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.
**Box 1: VNET4 only** **Box 2: VNET4 only** The correct answer is VNET4 only for both boxes, because RG4 is the only resource group with no locks applied, and VNET4 is located in RG4. A "Delete" lock (CanNotDelete) prevents deletion but allows modification, while a "ReadOnly" lock prevents both modification and deletion. Since User9 needs to be able to both *modify* and *delete*, only VNET4 in the unlocked RG4 meets these criteria. **Why other options are incorrect:** The discussion shows disagreement about the interpretation of the provided data, particularly regarding RG3. Some argue it should be included because it only has a Delete lock, so modification would still be possible; however, the requirement is to modify *and* delete, and any lock level prevents deletion. The consensus among several respondents is that the suggested answer is correct. It is also worth noting that ITFranz points out that a resource group cannot hold both a Delete and a Read-only lock simultaneously in Azure, which suggests a possible typo in the original question's data.
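The lock semantics above can be sketched as a small model (a hypothetical illustration in Python, not the Azure SDK; the per-group lock assignments below are assumptions for the example, since the original tables are in images):

```python
# Hypothetical model of Azure management-lock semantics (not the Azure SDK).
# ReadOnly blocks modification and deletion; CanNotDelete blocks deletion only.
def can_modify(locks):
    return "ReadOnly" not in locks

def can_delete(locks):
    return len(locks) == 0  # either lock level prevents deletion

# Example lock assignments (assumed for illustration): only RG4 is unlocked.
rg_locks = {"RG1": ["ReadOnly"], "RG3": ["CanNotDelete"], "RG4": []}
modifiable_and_deletable = [rg for rg, locks in rg_locks.items()
                            if can_modify(locks) and can_delete(locks)]
```

Only the resource group with no locks passes both checks, which matches the VNET4-only answer.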
222
[View Question](https://www.examtopics.com/discussions/databricks/view/46802-exam-az-500-topic-13-question-5-discussion/) HOTSPOT - You are evaluating the security of VM1, VM2, and VM3 in Sub2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022200001.png)
VM1: Yes, VM2: No, VM3: Yes. The image shows a network diagram with three VMs (VM1, VM2, VM3) in Sub2. NSG1 and NSG2 are applied to different VMs, and the question assesses whether inbound traffic on port 80 is allowed to each VM based on the applied NSGs. * **VM1:** Only NSG2 is applied, and the image shows NSG2 allows inbound traffic on port 80. Therefore, the statement is true. * **VM2:** Both NSG1 and NSG2 are applied to VM2. NSG2 allows inbound traffic on port 80, but NSG1 blocks it. Within a single NSG, rules are evaluated in priority order and the first matching rule wins; when multiple NSGs apply to a VM (for example, at both the subnet and the network interface), inbound traffic must be allowed by every NSG, so a deny in either one blocks it. The overall effect is that inbound traffic on port 80 is blocked, and the statement is false. * **VM3:** No NSGs are applied to VM3, so no NSG-level filtering takes place and inbound traffic on port 80 is not blocked. Therefore, the statement is true. The suggested answer and discussion confirm this analysis; there is consensus among the users.
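The evaluation described above can be sketched as a toy model (hypothetical Python, illustrative only; the rule tuples are assumptions standing in for the actual NSG contents shown in the image):

```python
# Hypothetical sketch of NSG evaluation (illustrative, not the Azure
# implementation). Within one NSG, rules are evaluated in ascending priority
# order and the first match wins; when several NSGs apply to a VM, inbound
# traffic must be allowed by every one of them.
def nsg_allows(rules, port):
    for _priority, rule_port, action in sorted(rules):
        if rule_port in (port, "*"):
            return action == "Allow"
    return False  # implicit DenyAllInbound if nothing matches

def inbound_allowed(nsgs, port):
    return all(nsg_allows(rules, port) for rules in nsgs)

nsg1 = [(100, 80, "Deny")]   # assumed rule: deny inbound 80
nsg2 = [(100, 80, "Allow")]  # assumed rule: allow inbound 80
```

With both NSGs applied (as on VM2), NSG1's deny blocks the traffic; with no NSGs at all (as on VM3), nothing filters it.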
223
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46804-exam-az-500-topic-12-question-2-discussion/) You need to ensure that users can access VM0. The solution must meet the platform protection requirements. What should you do? A. Move VM0 to Subnet1. B. On Firewall, configure a network traffic filtering rule. C. Assign RT1 to AzureFirewallSubnet. D. On Firewall, configure a DNAT rule. **
** A. Move VM0 to Subnet1. **Explanation:** The suggested answer is A, but the discussion highlights significant disagreement and outdated information. The original question likely stemmed from a scenario where network connectivity to VM0 was restricted due to security policies. Moving VM0 to Subnet1 implies that Subnet1 has the necessary network access permissions and security rules configured to allow access to VM0. However, the discussion points out that the Just-in-Time (JIT) access feature in Azure Firewall now automates much of the manual configuration (DNAT rule creation), rendering the original question and answer potentially obsolete. The comments suggest that simply moving the VM (option A) might not address the underlying issue of asymmetric traffic (as indicated by user kakakayayaya), and configuring a network traffic filtering rule (option B) would likely be the more accurate solution. Option C is irrelevant, and D is now mostly automated by Azure's JIT functionality. **Why other options are incorrect:** * **B. On Firewall, configure a network traffic filtering rule:** While this is a valid security practice, it is not the most direct solution if Subnet1 already has the correct access permissions. The discussion indicates this might be more accurate than moving the VM but it is not the original suggested answer. * **C. Assign RT1 to AzureFirewallSubnet:** This option is not directly relevant to ensuring access to VM0. It might relate to routing tables but does not address the core access problem. * **D. On Firewall, configure a DNAT rule:** This is outdated in light of the Just-in-Time (JIT) access feature within Azure Firewall, which may now handle DNAT rule creation automatically. It would thus be an unnecessary manual step. **Note:** The question and suggested answer seem outdated due to the evolution of Azure Firewall features. 
The discussion clearly indicates disagreement on the best approach and highlights that the originally suggested answer (A) may not be the best or even a correct solution given modern Azure Firewall capabilities.
224
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46805-exam-az-500-topic-12-question-1-discussion/) DRAG DROP - You need to deploy AKS1 to meet the platform protection requirements. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0020300001.png) *(Image shows a drag-and-drop interface with actions: Create a server application, Create a client application, Deploy an AKS cluster, Create an RBAC binding.)* **
** The correct sequence of actions, based on the provided text, is: 1. Create a server application; 2. Create a client application; 3. Deploy an AKS cluster; 4. Create an RBAC binding. **Explanation:** The provided text outlines a process for Azure AD integration with AKS clusters which involves creating server and client applications before deploying the AKS cluster and establishing RBAC bindings. This order ensures that the necessary authentication components are in place before deploying the cluster and configuring access control. **Why other options are incorrect:** The question states that multiple orders might be correct, but the sequence outlined above follows the logical steps as explained in the provided text. Any deviation from this order would likely result in deployment failures. For example, deploying the cluster before creating the authentication applications would prevent Azure AD authentication from functioning correctly. Similarly, deploying the cluster before the RBAC bindings are in place would leave the cluster insecure. **Note:** The discussion highlights that this question might be outdated, as newer AKS versions offer simplified Azure AD integration without requiring manual creation of server and client applications. The provided answer is correct *based on the provided text and the legacy method described*; modern AKS implementations would use a significantly streamlined process. The suggested answer is valid only until December 1st, 2023, after which the legacy approach will be fully deprecated according to TheProfessor's comment.
225
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46852-exam-az-500-topic-2-question-14-discussion/) You have an Azure subscription. You create an Azure web app named Contoso1812 that uses an S1 App Service plan. You plan to create a CNAME DNS record for www.contoso.com that points to Contoso1812. You need to ensure that users can access Contoso1812 by using the https://www.contoso.com URL. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Turn on the system-assigned managed identity for Contoso1812. B. Add a hostname to Contoso1812. C. Scale out the App Service plan of Contoso1812. D. Add a deployment slot to Contoso1812. E. Scale up the App Service plan of Contoso1812. F. Upload a PFX file to Contoso1812. **
** B and F * **B. Add a hostname to Contoso1812:** This allows you to map the custom domain `www.contoso.com` to your Azure web app, Contoso1812. This step is crucial for users to access the web app using the custom domain. Without adding the hostname, the CNAME record will not work correctly. * **F. Upload a PFX file to Contoso1812:** This uploads the SSL/TLS certificate required to enable HTTPS access. The `https://` in the URL `https://www.contoso.com` signifies that secure HTTPS connection is required. Without the certificate, users will not be able to connect securely via HTTPS. **Why other options are incorrect:** * **A. Turn on the system-assigned managed identity for Contoso1812:** Managed identities are used for authentication to other Azure services, not for configuring custom domains or HTTPS. * **C. Scale out the App Service plan of Contoso1812:** Scaling out adds more instances of the web app. This is relevant for handling increased traffic but not directly needed for configuring the custom domain or HTTPS. * **D. Add a deployment slot to Contoso1812:** Deployment slots provide a way to deploy and test updates to your web app without affecting the production environment. This is unrelated to custom domains or HTTPS. * **E. Scale up the App Service plan of Contoso1812:** Scaling up increases the resources (CPU, memory, etc.) of each web app instance. This is about performance, not domain mapping or HTTPS. There is a consensus among the discussion participants that options B and F are the correct answers.
226
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46860-exam-az-500-topic-5-question-30-discussion/) You have an Azure subscription that contains the resources shown in the following table. ``` Resource | Type | Status --------------------------------- Vault1 | Key Vault | Enabled ``` User1 is a member of Group1. Group1 and User2 are assigned the Key Vault Contributor role for Vault1. On January 1, 2019, you create a secret in Vault1. The secret is configured as shown below: ``` Secret Name: password1 Enabled: Yes Created: 2019-01-01 Expires: 2019-06-30 ``` User2 is assigned an access policy to Vault1. The policy has the following configurations: * Key Management Operations: Get, List, and Restore * Cryptographic Operations: Decrypt and Unwrap Key * Secret Management Operations: Get, List, and Restore Group1 is assigned an access policy to Vault1. The policy has the following configurations: * Key Management Operations: Get and Recover * Secret Management Operations: List, Backup, and Recover For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. On Jan 1 2019 User 1 can view the password1. 2. On June 1 2019 User2 can view the password1. 3. On June 1 2019 User1 can view the Password1. **
** 1. **No.** User1 is a member of Group1. Group1's access policy only allows listing and recovering secrets, not getting individual secrets. Therefore, User1 cannot view "password1" on January 1, 2019. 2. **Yes.** User2 has "Get" permission for Secret Management Operations in their access policy. This allows them to retrieve the secret. Since the secret is enabled and hasn't expired yet (June 1, 2019 is before the expiration date of June 30, 2019), User2 can view "password1". 3. **No.** As explained above, Group1 (and therefore User1) lacks the "Get" permission for secrets. Even though the secret is still valid on June 1, 2019, User1 cannot access it. **Why other options are incorrect:** The analysis above explains why each statement is true or false based on the defined access policies and the secret's properties. The incorrect answers stem from a misunderstanding of the permissions granted to User1 and User2. User1 only has list and recover permissions (as part of Group1), whereas User2 has explicit "Get" permission. **Note:** The provided discussion only shows the suggested answers and user testing results confirming the answers above. There is no indication of disagreement within the discussion.
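A minimal model of the access check (hypothetical Python, not the Key Vault API) shows why only User2 can read the secret value while it is still valid:

```python
from datetime import date

# Hypothetical model (not the Key Vault SDK): reading a secret's value needs
# the "Get" secret permission, and the secret must be enabled and unexpired.
def can_view_secret(secret_permissions, enabled, expires, on):
    return "Get" in secret_permissions and enabled and on <= expires

user2_perms = {"Get", "List", "Restore"}       # User2's access policy
group1_perms = {"List", "Backup", "Recover"}   # Group1's policy (covers User1)
expires = date(2019, 6, 30)                    # password1 expiry
```

Group1's policy lacks "Get", so User1 fails the check on both dates; User2 passes on June 1, 2019, which is before the expiry.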
227
[View Question](https://www.examtopics.com/discussions/databricks/view/46874-exam-az-500-topic-2-question-37-discussion/) Your network contains an on-premises Active Directory domain that syncs to an Azure Active Directory (Azure AD) tenant. The tenant contains the users shown in the following table (Image 1). The tenant contains the groups shown in the following table (Image 2). You configure a multi-factor authentication (MFA) registration policy that has the following settings: Assignments: Include: Group1; Exclude: Group2; Controls: Require Azure MFA registration; Enforce Policy: On. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image 3 shows the statements to evaluate as Yes/No) **Image 1 (User Table):** (This image is not provided in the prompt but is referenced as existing on the original website) **Image 2 (Group Table):** (This image is not provided in the prompt but is referenced as existing on the original website) **Image 3 (Statements):** (This image is not provided in the prompt but is referenced as existing on the original website and is crucial for answering the question. It would show a table of statements requiring a Yes/No answer.)
Yes-No-Yes. User1 is a member of Group1, which is included in the policy, thus requiring MFA registration. User2 is a member of Group2, which is explicitly excluded from the policy, meaning they are not required to register for MFA. User3 is a member of Group1, and therefore subject to the MFA registration requirement. The suggested answer (Yes-No-Yes) aligns with the policy configuration and group memberships. The discussion supports this conclusion, with users indicating that User2 is the exception due to the exclusion from the policy. There is agreement in the discussion on the correct answer.
228
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46885-exam-az-500-topic-2-question-45-discussion/) You have the Azure virtual machines shown in the following table. | VM Name | Virtual Network | Subnet | |---|---|---| | VM1 | VNet1 | Subnet1 | | VM2 | VNet1 | Subnet2 | | VM3 | VNet1 | Subnet3 | | VM4 | VNet2 | Subnet4 | | VM5 | VNet3 | Subnet5 | Each virtual machine has a single network interface. You add the network interface of VM1 to an application security group named ASG1. You need to identify the network interfaces of which virtual machines you can add to ASG1. What should you identify? A. VM2 only B. VM2 and VM3 only C. VM2, VM3, VM4, and VM5 D. VM2, VM3, and VM5 only **
** B. VM2 and VM3 only The correct answer is B because only VM2 and VM3 reside in the same virtual network (VNet1) as VM1. Application Security Groups (ASGs) operate within a single virtual network. Adding a network interface to an ASG requires that the interface exists within the same virtual network as the first interface added to that ASG. Since VM1's network interface was the first added to ASG1 and it's in VNet1, only VM2 and VM3 (also in VNet1) can be added. **Why other options are incorrect:** * **A. VM2 only:** This is incorrect because it excludes VM3, which is also in the same virtual network as VM1 and therefore eligible to be added to ASG1. * **C. VM2, VM3, VM4, and VM5:** This is incorrect because VM4 and VM5 are in different virtual networks (VNet2 and VNet3 respectively) than VM1. * **D. VM2, VM3, and VM5 only:** This is incorrect because VM5 is in VNet3, a different virtual network than VM1. There is a consensus among the discussion participants that the correct answer is B.
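The constraint can be sketched as follows (hypothetical Python, not the Azure SDK; single-NIC VMs are assumed, as in the question):

```python
# Hypothetical sketch of the ASG membership constraint (not the Azure SDK):
# all NICs in one application security group must belong to the same virtual
# network, fixed by the first NIC added to the group.
vm_vnet = {"VM1": "VNet1", "VM2": "VNet1", "VM3": "VNet1",
           "VM4": "VNet2", "VM5": "VNet3"}

asg_vnet = vm_vnet["VM1"]  # VM1's NIC was added first, pinning the VNet
eligible = [vm for vm, vnet in vm_vnet.items()
            if vm != "VM1" and vnet == asg_vnet]
```

Only the VMs sharing VNet1 with VM1 remain eligible, matching answer B.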
229
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46888-exam-az-500-topic-2-question-7-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains the users shown in the following table. | User | Group1 | Group2 | Sign-in Risk Level | | ------- | ------- | ------- | ------------------- | | User1 | Yes | No | Medium | | User2 | Yes | No | Medium | | User3 | No | Yes | Low | You create and enforce an Azure AD Identity Protection user risk policy that has the following settings: * Assignment: Include Group1, Exclude Group2 * Conditions: Sign-in risk of Medium and above * Access: Allow access, Require password change For each of the following statements, select Yes if the statement is true. Otherwise, select No. **Statement 1:** User1 will be prompted to change their password after signing in. **Statement 2:** User2 will be prompted to change their password after signing in. **Statement 3:** User3 will be prompted to change their password after signing in.
* **Statement 1: Yes** - User1 is a member of Group1, and their sign-in risk is Medium, meeting the policy conditions. The policy requires a password change. * **Statement 2: Yes** - User2 is a member of Group1, and their sign-in risk is Medium, meeting the policy conditions. The policy requires a password change. * **Statement 3: No** - User3 is a member of Group2, which is excluded by the policy. Therefore, the policy does not apply to User3, regardless of their sign-in risk level. The provided suggested answer in the original post is consistent with this analysis. However, a comment in the discussion indicates the suggested answer is considered incorrect by at least one user. There is no further detail given to clarify the discrepancy or suggest an alternative answer.
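A toy evaluation of the include/exclude logic above (hypothetical Python, illustrative only):

```python
# Hypothetical model of Identity Protection policy evaluation (illustrative
# only): exclusions take precedence over inclusions, then the risk condition
# ("Medium and above") is checked.
RISK_ORDER = {"Low": 0, "Medium": 1, "High": 2}

def password_change_required(user_groups, sign_in_risk,
                             include=frozenset({"Group1"}),
                             exclude=frozenset({"Group2"}),
                             min_risk="Medium"):
    if exclude & user_groups:
        return False                      # excluded groups win
    if not (include & user_groups):
        return False
    return RISK_ORDER[sign_in_risk] >= RISK_ORDER[min_risk]
```

User1 and User2 (Group1, Medium risk) trigger the control; User3 (Group2) is excluded regardless of risk.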
230
[View Question](https://www.examtopics.com/discussions/databricks/view/46906-exam-az-500-topic-2-question-50-discussion/) You have an Azure subscription that contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0014000001.png) Which users can enable Azure AD Privileged Identity Management (PIM)? A. User2 and User3 only B. User1 and User2 only C. User2 only D. User1 only
The correct answer is **A. User2 and User3 only**. Explanation: To enable Azure AD Privileged Identity Management (PIM), a user needs Global Administrator rights and the ability to complete Multi-Factor Authentication (MFA). The image (not provided here but referenced in the original question) presumably shows User2 and User3 as having Global Administrator roles. While the initial answer suggestion was incorrect, the discussion clarifies that even if MFA isn't initially enabled, the process of enabling PIM prompts the user to set it up. Therefore, both User2 and User3, possessing the necessary administrative privileges, could successfully enable PIM after completing the MFA setup. Why other options are incorrect: * **B. User1 and User2 only:** User1 lacks Global Administrator rights, preventing PIM enablement. * **C. User2 only:** This omits User3, who also possesses the necessary Global Administrator role. * **D. User1 only:** User1 lacks the required Global Administrator role. Note: The discussion highlights a disagreement with the initial suggested answer, indicating the complexities in determining who can enable PIM based on incomplete information about user roles and MFA statuses. The explanation provided above takes the discussion into account to clarify the correct response.
231
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46912-exam-az-500-topic-2-question-53-discussion/) You have the hierarchy of Azure resources shown in the following exhibit. (Image shows RG1, RG2, and RG3 resource groups. RG2 contains VM2.) You assign role-based access control (RBAC) roles to the users shown in the following table. (Image shows User1 with "Owner" role on RG1, User2 with "Virtual Machine Contributor" role on RG2, User3 with "Virtual Machine Administrator Login" role on VM2.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. (Image shows three statements): 1. User1 can delete RG2. 2. User2 can restart VM2. 3. User3 can delete VM2. **
** 1. **Yes:** User1 has the "Owner" role on RG1, which encompasses RG2. Owners have full access and can delete resources within their scope. 2. **Yes:** User2 has the "Virtual Machine Contributor" role on RG2, which contains VM2. The "Virtual Machine Contributor" role allows for restarting virtual machines. 3. **No:** User3 has the "Virtual Machine Administrator Login" role *on VM2 itself*, not on the resource group. This role provides administrative access *to the VM*, allowing login and management of the VM's operating system, but it does *not* grant permissions to delete the VM from Azure. **Why other options are incorrect:** The explanation above details why each answer is correct or incorrect based on the defined Azure RBAC roles and their permissions. The discussion suggests consensus on the provided answers. There is no evidence of conflicting opinions in the provided discussion.
232
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46939-exam-az-500-topic-2-question-20-discussion/) You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You regenerate the Azure storage account access keys. Does this meet the goal? A. Yes B. No **
** A. Yes Regenerating the Azure storage account access keys invalidates every SAS token signed with those keys. Account SAS tokens and service SAS tokens (including those based on stored access policies) are both signed with an account key, so regenerating the keys removes access for all users and applications that relied on them. Therefore, this solution meets the goal of revoking access to `sa1`. **Why other options are incorrect:** B. No is incorrect because regenerating the storage account keys invalidates the key-signed SAS tokens in use, making key regeneration a valid way to revoke the access described in the scenario. **Note:** The discussion raises the question of whether regeneration revokes *all* SAS types. User delegation SAS tokens are signed with Azure AD credentials rather than an account key, so key regeneration does not invalidate them; they must be revoked separately (for example, by revoking the user delegation key). The effectiveness of this solution therefore depends on the types of SAS tokens used.
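The distinction between SAS types can be sketched as follows (hypothetical Python, illustrative only; real SAS validation happens server-side in Azure Storage):

```python
# Hypothetical sketch (illustrative only): account SAS and service SAS tokens
# are signed with a storage account key, so rotating the keys invalidates
# them. A user delegation SAS is signed with Azure AD credentials and is
# unaffected by key rotation; it must be revoked separately.
def sas_valid_after_key_rotation(sas_type):
    return sas_type == "user-delegation"
```

Only the Azure AD-signed token survives key rotation, which is why key regeneration revokes most, but not necessarily all, SAS-based access.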
233
[View Question](https://www.examtopics.com/discussions/databricks/view/46942-exam-az-500-topic-2-question-26-discussion/) You have an Azure subscription named Sub 1 that is associated with an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0009400002.png) Each user is assigned an Azure AD Premium P2 license. You plan to onboard and configure Azure AD Identity Protection. Which users can onboard Azure AD Identity Protection, remediate users, and configure policies? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0009500001.png)
Only the Global Administrator can onboard Azure AD Identity Protection. Both the Global Administrator and the Security Administrator can remediate users and configure policies. The provided images show a multiple choice question and a suggested answer. The suggested answer highlights the Global Administrator and Security Administrator as having the necessary permissions. This aligns with the discussion which states that the Global Administrator has full access, including onboarding, while the Security Administrator has full access to remediate and configure policies, but not to onboard the service. A Security Reader only has view-only access. The discussion notes that the Security Administrator cannot reset user passwords, a detail not directly relevant to the question but included for completeness. There is no disagreement expressed in the discussion regarding the core answer.
234
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46949-exam-az-500-topic-4-question-47-discussion/) HOTSPOT - You have 20 Azure subscriptions and a security group named Group1. The subscriptions are children of the root management group. Each subscription contains a resource group named RG1. You need to ensure that for each subscription RG1 meets the following requirements: ✑ The members of Group1 are assigned the Owner role. ✑ The modification of permissions to RG1 is prevented. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The provided text only gives the question and the suggested answer image (which is missing here and therefore cannot be described). The discussion mentions "Blueprints" as a possible solution. Without the content of the image depicting the suggested answer options, a complete answer cannot be given. To answer the question correctly, one would need to select the options from the image that correctly describe the use of Azure Blueprints to assign the Owner role to Group1 for each RG1 across the subscriptions and prevent further permission modifications. This would involve creating a blueprint that defines the role assignments and potentially utilizes a lock to prevent permission changes. **Why other options are incorrect:** This cannot be determined without the image showing the multiple-choice options. The discussion provides limited insight, only suggesting blueprints as a possible solution, without explaining why other potential options would be incorrect.
235
[View Question](https://www.examtopics.com/discussions/databricks/view/46968-exam-az-500-topic-4-question-12-discussion/) You need to collect all the audit failure data from the security log of a virtual machine named VM1 to an Azure Storage account. To complete this task, sign in to the Azure portal. This task might take several minutes to complete. You can perform other tasks while the task completes. What is the correct procedure?
The correct procedure is to configure diagnostic settings on VM1 to send audit failure logs to the storage account: navigate to the VM's Diagnostic settings in the Azure portal, enable guest-level monitoring if it is not already enabled, and on the Logs tab select the "Audit failure" category to be sent to the storage account. Why other options are incorrect: The suggested answer in the original post, which routed the logs through a Log Analytics workspace, is incorrect; multiple users in the discussion agree that configuring the VM's diagnostic settings to write directly to the storage account is the appropriate method for this scenario.
236
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46978-exam-az-500-topic-3-question-9-discussion/) You create resources in an Azure subscription as shown in the following table. VNET1 contains two subnets named Subnet1 and Subnet2. Subnet1 has a network ID of 10.0.0.0/24. Subnet2 has a network ID of 10.1.1.0/24. Contoso1901 is configured as shown in the exhibit. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statement 1: Access from Subnet1 is allowed. Statement 2: Access from Subnet2 is allowed. Statement 3: Access from IP address 193.77.10.2 is allowed. **
** Statement 1: Yes Statement 2: No Statement 3: Yes **Explanation:** The configuration shows that Contoso1901 allows access from Subnet1 via a virtual network rule, and its IP rules allow the 193.77.0.0/16 range, which includes 193.77.10.2. Subnet2 is not explicitly allowed, and the default action is Deny. Therefore, only Subnet1 and the specified IP range are permitted access. **Why other options are incorrect:** The discussion shows some disagreement on statement 3, with some users initially believing that the default Deny action would override the IP rule match. However, the consensus and the suggested answer confirm that a matching rule takes precedence over the default Deny action, so access from 193.77.10.2 *is* allowed.
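The evaluation order described above (explicit virtual network and IP rules first, then the default Deny) can be modeled in a few lines. This is a sketch with a hypothetical rule set mirroring the question; Python's standard `ipaddress` module performs the CIDR membership check:

```python
import ipaddress

def storage_access_allowed(source_subnet, source_ip, allowed_subnets, allowed_ip_ranges):
    """Model of storage-firewall evaluation: any matching rule grants
    access; otherwise the default action (Deny) applies."""
    if source_subnet in allowed_subnets:       # virtual network rule match
        return True
    ip = ipaddress.ip_address(source_ip)
    for cidr in allowed_ip_ranges:             # IP firewall rule match
        if ip in ipaddress.ip_network(cidr):
            return True
    return False                               # default action: Deny

# Rules from the question: Subnet1 is allowed, plus the 193.77.0.0/16 range.
allowed_subnets = {"Subnet1"}
allowed_ranges = ["193.77.0.0/16"]

print(storage_access_allowed("Subnet1", "10.0.0.4", allowed_subnets, allowed_ranges))  # True
print(storage_access_allowed("Subnet2", "10.1.1.4", allowed_subnets, allowed_ranges))  # False
print(storage_access_allowed(None, "193.77.10.2", allowed_subnets, allowed_ranges))    # True
```

The three results correspond to the Yes/No/Yes answers: a matching rule short-circuits to allow, and only traffic matching no rule falls through to the default Deny.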
237
**** [View Question](https://www.examtopics.com/discussions/databricks/view/46983-exam-az-500-topic-3-question-10-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. | VM Name | VNET | |---|---| | VM1 | VNET1 | | VM2 | VNET2 | | VM3 | VNET3 | | VM4 | VNET4 | All the virtual networks are peered. You deploy Azure Bastion to VNET2. Which virtual machines can be protected by the bastion host? A. VM1, VM2, VM3, and VM4 B. VM1, VM2, and VM3 only C. VM2 and VM4 only D. VM2 only **
** A. VM1, VM2, VM3, and VM4 **Explanation:** Because all the virtual networks (VNETs) are peered, the Azure Bastion host deployed in VNET2 can access and protect all virtual machines in the peered VNETs (VNET1, VNET3, and VNET4) in addition to the VMs in its own VNET (VNET2). VNet peering allows resources in one VNET to communicate with resources in another as if they were in the same network. Therefore, the Bastion host, acting as a jump server, can establish connections to all the VMs regardless of their underlying VNET. **Why other options are incorrect:** * **B. VM1, VM2, and VM3 only:** This is incorrect because it omits VM4, which is also accessible through VNet peering. * **C. VM2 and VM4 only:** This is incorrect because it excludes VM1 and VM3. Bastion's access isn't limited by the deployment location within a set of peered networks. * **D. VM2 only:** This is incorrect as it only accounts for the VMs within the VNET where the Bastion host is deployed, ignoring the benefits of VNet peering. **Note:** The discussion section shows overwhelming agreement on the correct answer.
238
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47003-exam-az-500-topic-3-question-14-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0024100001.png) The Azure virtual machines on SpokeVNetSubnet0 can communicate with the computers on the on-premises network. You plan to deploy an Azure firewall to HubVNet. You create the following two routing tables: * RT1: Includes a user-defined route that points to the private IP address of the Azure firewall as a next hop address * RT2: Disables BGP route propagation and defines the private IP address of the Azure firewall as the default gateway You need to ensure that traffic between SpokeVNetSubnet0 and the on-premises network flows through the Azure firewall. To which subnet should you associate each route table? To answer, drag the appropriate subnets to the correct route tables. Each subnet may be used once, more than once, or not at all. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0024200001.jpg) **
** RT1 should be associated with GatewaySubnet. RT2 should be associated with SpokeVNetSubnet0. **Explanation:** The goal is to force traffic between SpokeVNetSubnet0 and the on-premises network through the Azure firewall, and a subnet can be associated with at most one route table: * **RT2 (SpokeVNetSubnet0):** This route table disables BGP route propagation; BGP would otherwise send traffic through the VPN gateway, bypassing the firewall. By setting the Azure firewall's private IP address as the default gateway (0.0.0.0/0), all traffic leaving SpokeVNetSubnet0 is directed to the firewall. * **RT1 (GatewaySubnet):** This route table handles the return path. Traffic arriving from the on-premises network through the VPN gateway is steered by the user-defined route toward SpokeVNetSubnet0 with the Azure firewall's private IP address as the next hop, so return traffic also flows through the firewall. **Why other options are incorrect:** Any configuration that leaves BGP propagation enabled on SpokeVNetSubnet0, or that fails to associate RT1 with GatewaySubnet, lets traffic bypass the firewall in one direction. Associating RT2 with GatewaySubnet is not valid either, because a 0.0.0.0/0 default route cannot be applied to a gateway subnet. **Note:** The discussion shows some disagreement and confusion regarding the solution. However, the explanation above aligns with the standard hub-and-spoke Azure Firewall routing configuration.
239
[View Question](https://www.examtopics.com/discussions/databricks/view/47005-exam-az-500-topic-3-question-17-discussion/) You have an Azure subscription that contains the Azure virtual machines shown in the following table. | VM Name | OS | Version | |---------|--------------|---------------| | VM1 | Windows 10 | 20H2 | | VM2 | Windows Server 2019 | 1809 | | VM3 | Windows 10 | 21H2 | | VM4 | Linux | | You create an MDM Security Baseline profile named Profile1. You need to identify to which virtual machines Profile1 can be applied. Which virtual machines should you identify? A. VM1 only B. VM1, VM2, and VM3 only C. VM1 and VM3 only D. VM1, VM2, VM3, and VM4
A. VM1 only Explanation: MDM (Mobile Device Management) Security Baseline profiles apply only to devices running Windows 10, version 1809 or later. VM2 runs Windows Server and VM4 runs Linux, so neither is compatible with MDM Security Baselines. VM1 (Windows 10 20H2) clearly meets the requirement. VM3 also runs Windows 10 (21H2), which on version grounds would appear to qualify; the suggested answer nevertheless identifies VM1 only, and the discussion reflects some uncertainty on this point. Why other options are incorrect: * **B. VM1, VM2, and VM3 only:** Incorrect because VM2 (Windows Server) is not compatible with MDM Security Baselines. * **C. VM1 and VM3 only:** Marked incorrect by the suggested answer, even though VM3's Windows 10 build appears to meet the stated version requirement. * **D. VM1, VM2, VM3, and VM4:** Incorrect because it includes VM2 and VM4, which are incompatible with MDM Security Baselines. Note: There is some disagreement in the discussion regarding Intune's role and its current relevance to the exam. However, the consensus treats MDM security baselines as designed for Windows 10 devices.
240
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47017-exam-az-500-topic-3-question-24-discussion/) You have an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table. | User | Role | |------|--------------------| | U1 | Owner | | U2 | Contributor | | U3 | Reader | You create a resource group named RG1. Which users can modify the permissions for RG1 and which users can create virtual networks in RG1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** * **Modify permissions for RG1:** Only U1 (Owner). * **Create virtual networks in RG1:** U1 (Owner) and U2 (Contributor). **Explanation:** In Azure Role-Based Access Control (RBAC), the Owner role has full access to manage all aspects of a resource group, including permissions. The Contributor role can manage and modify resources within a resource group but *cannot* modify permissions. Therefore, only the user with the Owner role (U1) can modify permissions for RG1. Both the Owner and Contributor roles have the permissions necessary to create virtual networks, so both U1 and U2 can perform this action. **Why other options are incorrect:** The discussion shows some disagreement, with some users suggesting that a Service Administrator might have equivalent permissions to an Owner. However, the question is explicitly based on the provided table of users and their roles within the Azure AD tenant, so external roles or administrative privileges are not considered in the context of this specific question. Therefore, only the roles explicitly assigned to U1 and U2 are relevant to answering the question. The suggested answer, which is supported by the highly upvoted responses, aligns with the standard Azure RBAC permissions.
241
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47019-exam-az-500-topic-3-question-29-discussion/) You have a file named File1.yaml that contains the following contents:

```yaml
apiVersion: 2019-12-01
location: eastus
name: container1
properties:
  imageRegistryCredential:
    server: myregistry.azurecr.io
    username: myuser
  containers:
  - name: mycontainer
    image: myregistry.azurecr.io/myimage:v1
    resources:
      requests:
        memory: 1Gi
        cpu: 1
    environmentVariables:
    - name: Variable1
      value: TestValue
    - name: Variable2
      secureValue: SecureValue
```

You create an Azure container instance named container1 by using File1.yaml. You need to identify where you can access the values of Variable1 and Variable2. What should you identify? NOTE: Each correct selection is worth one point. **
** Variable1 can be accessed from inside container1 and from the Azure portal (or Azure CLI). Variable2 can be accessed only from inside container1. **Explanation:** The `value` parameter in the environment variable definition for Variable1 makes it a non-secure value. Non-secure environment variables are accessible both from within the container and through the Azure portal or Azure CLI. The `secureValue` parameter used for Variable2 designates it as a secure value. Secure environment variables are accessible only from within the container itself; their values are not exposed in the Azure portal or Azure CLI. This is confirmed by the provided documentation link and numerous user comments in the discussion section. **Why other options are incorrect:** The question presents no explicit list of options, only selections to make. However, any answer suggesting that Variable2 is accessible outside the container would be incorrect, given the definition of `secureValue`. The discussion shows a consensus on this point.
242
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47025-exam-az-500-topic-3-question-31-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. | VM Name | Subnet | Public IP Address | |---|---|---| | VM1 | Subnet1 | 13.93.103.17 | | VM2 | Subnet2 | 104.215.163.143 | | VM3 | Subnet2 | None | Subnet1 and Subnet2 have a Microsoft.Storage service endpoint configured. You have an Azure Storage account named `storageacc1` that is configured as shown in the following exhibit. (Note: The image showing the storage account configuration is not provided in the text, only a description that the allowed virtual network list is empty.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. * Can VM1 access storageacc1? * Can VM2 access storageacc1? * Can VM3 access storageacc1? **
** * **VM1:** Yes. VM1's public IP address (13.93.103.17) is allowed through the storage account's firewall, so it can connect. * **VM2:** No. VM2's public IP address is *not* in the storage account's firewall allow list, and the allowed virtual network list for `storageacc1` is empty, so neither the public IP path nor the virtual network path permits access. * **VM3:** No. VM3 has no public IP address, and because `storageacc1` allows no virtual networks, the service endpoint on Subnet2 cannot grant access either. **Explanation:** The question tests understanding of Azure Storage account network security and service endpoints. Service endpoints let VMs in a subnet reach storage over the Azure backbone using their private VNet identity, but only if the storage account's firewall lists that virtual network, which it does not here. Therefore, only VM1 can connect to `storageacc1`, because only its public IP address is permitted through the firewall settings. **Why other options are incorrect:** The discussion shows disagreement among users over whether the service endpoint configuration should affect the outcome. Some users incorrectly assume the service endpoint automatically grants access, ignoring the storage account's firewall rules. The correct answer prioritizes the explicitly configured firewall rules of the storage account.
243
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47046-exam-az-500-topic-3-question-53-discussion/) You have an Azure subscription that contains a storage account named storage1 and several virtual machines. The storage account and virtual machines are in the same Azure region. The network configurations of the virtual machines are shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029400001.png) The virtual network subnets have service endpoints defined as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029400002.png) You configure the following Firewall and virtual networks settings for storage1: Allow access from: Selected networks; Virtual networks: VNET3\Subnet3; Firewall address range: 52.233.129.0/24 ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029400005.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Can VM1 connect to storage1? Can VM2 connect to storage1? Can VM3 connect to storage1? **
** No, Yes, No * **VM1:** No. While VNet1 has a service endpoint for Azure Storage, storage1's firewall allows access only from VNET3\Subnet3 and the IP address range 52.233.129.0/24. VM1's public IP address is not within this range, and VNet1 is not an allowed network, so VM1 cannot connect. * **VM2:** Yes. VNet2 does not have a service endpoint configured for Azure Storage, so VM2 reaches storage1 over its public IP address (52.233.129.10), which falls within the allowed range 52.233.129.0/24. Therefore VM2 can connect. * **VM3:** No. Although VNET3\Subnet3 is explicitly allowed in storage1's firewall settings, the service endpoint table shows only a Microsoft.KeyVault endpoint for Subnet3, not a Microsoft.Storage endpoint. According to some users in the discussion, a storage service endpoint on the subnet is required in addition to the firewall rule, so based on the given configuration, VM3 cannot connect. **Explanation of Disagreement:** The discussion shows disagreement over whether a service endpoint is required for VM3 to access storage1 when the subnet is explicitly allowed in the storage account's network settings. Some users report that access succeeded only after enabling a storage service endpoint for Subnet3, while others claim it should not be necessary given the specified firewall settings. The answer above reflects the more conservative interpretation, prioritizing the information explicitly shown in the screenshots.
244
[View Question](https://www.examtopics.com/discussions/databricks/view/47049-exam-az-500-topic-3-question-54-discussion/) You plan to create an Azure Kubernetes Service (AKS) cluster in an Azure subscription. The manifest of the registered server application is shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029700001.png) You need to ensure that the AKS cluster and Azure Active Directory (Azure AD) are integrated. Which property should you modify in the manifest? A. accessTokenAcceptedVersion B. keyCredentials C. groupMembershipClaims D. acceptMappedClaims
C. `groupMembershipClaims` To integrate Azure AD with an AKS cluster, you need to configure the Azure AD application used for authentication. The `groupMembershipClaims` property within the application's manifest controls how group memberships are represented in tokens issued by Azure AD. Setting this property to "All" ensures that the user's group memberships are included in the access tokens, allowing AKS to leverage this information for authorization and access control. The provided discussion and linked documentation strongly support this. Why other options are incorrect: * **A. `accessTokenAcceptedVersion`**: This property relates to the version of access tokens accepted by the application, not directly to Azure AD integration for AKS. * **B. `keyCredentials`**: This pertains to the application's authentication keys, not the integration with Azure AD for user authorization and group membership within AKS. * **D. `acceptMappedClaims`**: While relevant to claims processing in Azure AD, it's not the primary property for enabling Azure AD group integration with AKS. `groupMembershipClaims` directly controls the inclusion of group memberships in the access token, which is crucial for AKS integration. Note: While the consensus in the discussion points to C as the correct answer, one user mentions the question being outdated. Therefore, while the answer provided is based on the current best practice and information available, it's important to consider that evolving technology may change the most effective approach.
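For reference, the relevant fragment of an app registration manifest with group claims enabled might look like the following. This is a minimal sketch, not a full manifest; documented values for `groupMembershipClaims` include `None`, `SecurityGroup`, and `All`:

```json
{
  "groupMembershipClaims": "All",
  "accessTokenAcceptedVersion": null,
  "keyCredentials": [],
  "acceptMappedClaims": null
}
```

With `"All"` set, tokens issued for the server application carry the user's group memberships, which is what AKS consumes for Azure AD-based authorization.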
245
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47056-exam-az-500-topic-3-question-55-discussion/) You have the Azure virtual networks shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029800001.png) You have the Azure virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029800002.png) The firewalls on all the virtual machines allow ping traffic. NSG1 is configured as shown in the following exhibit. Inbound security rules - ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029900001.png) Outbound security rules - ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0029900002.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0030000001.png) **
** Box 1: Yes Box 2: No Box 3: Yes **Explanation:** * **Box 1:** VM1 and VM3 reside on peered VNets (VNet1 and VNet2). The NSG rules (specifically, rules allowing 'Any' traffic from ASG1 and ASG2) permit ICMP traffic (ping) between them. Therefore, VM1 can ping VM3. * **Box 2:** VM2 and VM4 are on separate, *unpeered* VNets. For them to communicate, they would need public IP addresses. While VM4 has a public IP and its firewall allows pings, VM2 does not have a public IP. In Azure, VMs need a public IP to ping external machines; they don't route pings through the default gateway like in traditional networks. Therefore, VM2 cannot ping VM4. * **Box 3:** VM3 possesses a public IP address, and the NSG rules allow inbound traffic on port 3389 (RDP). Hence, external connections to VM3 via RDP are possible. **Why other options are incorrect:** The discussion shows some disagreement, particularly regarding Box 2. Some argue that because VM4 only needs to reply to a ping, a public IP on VM2 isn't strictly necessary. However, the prevailing and more accurate interpretation, supported by the suggested answer and most discussion participants, is that both VMs require public IPs for communication across unpeered VNets. The key is understanding Azure networking behavior differs from traditional networks regarding ping traffic across VNets.
246
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47058-exam-az-500-topic-4-question-71-discussion/) You plan to connect several Windows servers to the WS12345678 Azure Log Analytics workspace. You need to ensure that the events in the System event logs are collected automatically to the workspace after you connect the Windows servers. To complete this task, sign in to the Azure portal and modify the Azure resources. What are the steps to configure this? Image included in original question: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0042200001.jpg) **
** The provided discussion shows conflicting information on the exact steps to configure Log Analytics to collect Windows System event logs. There is no single definitive answer based solely on this material. However, several approaches are suggested, reflecting changes in the Azure portal interface over time. The most commonly repeated steps, incorporating elements from multiple users' suggestions, could be summarized as follows: 1. **Access Log Analytics Workspace:** Navigate to your Log Analytics workspace (WS12345678) in the Azure portal. 2. **Connect Windows Servers:** Connect the Windows servers to the workspace, using either the "Agents configuration" section (as suggested by dadageer and OhBee), "Settings -> Legacy agents management" (Amit3), or the "Workspace Data Sources -> Windows Servers" (mrt007). This step might involve creating a Data Collection Rule (DCR) as suggested by schpeter_091. 3. **Configure Windows Event Logs:** Once the servers are connected, locate the settings for Windows Event Logs. This location varies depending on the Azure portal version; it may be under "Advanced settings -> Data -> Windows Event Logs" (as shown in the image) or "Legacy Agent Management" (Macke53). 4. **Add System Event Log:** Add the "System" event log and select the desired severities (Error, Warning, and Information are recommended to ensure all logs are captured). 5. **Save Changes:** Save the configuration changes. **Why other options are incorrect (or partially correct):** The discussion highlights the evolution of the Azure portal interface. Therefore, the exact path and settings may vary depending on when the exam is taken or the Azure environment's specific configuration. Each suggested method might be correct for a certain time period or portal version, but none is universally and definitively correct according to this discussion alone.
247
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47064-exam-az-500-topic-4-question-15-discussion/) HOTSPOT - You plan to use Azure Log Analytics to collect logs from 200 servers that run Windows Server 2016. You need to automate the deployment of the Microsoft Monitoring Agent to all the servers by using an Azure Resource Manager template. How should you complete the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0034800001.jpg) *(Image shows two boxes labeled "Box 1" and "Box 2" requiring values to be filled)* Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0034900001.jpg) *(Image shows "Box 1: workspaceId" and "Box 2: workspaceKey")* **
** Box 1: `workspaceId`; Box 2: `workspaceKey`. The suggested answer and the majority of the discussion point to `workspaceId` and `workspaceKey` as the correct values for the Azure Resource Manager template: the `workspaceId` identifies the Log Analytics workspace, and the `workspaceKey` provides the authentication. **Why other options are incorrect:** No other options are explicitly presented in the question. The discussion highlights one point of contention: OrangeSG notes that the Microsoft Monitoring Agent (MMA) has largely been superseded by the Azure Monitor Agent (AMA), which authenticates with Azure AD device tokens instead of a `workspaceKey`, so under current best practice only `workspaceId` might be strictly necessary. Within the context of this question and its suggested answer, however, using both remains the most accurate choice.
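As a sketch of how those two values slot into the template, the agent extension resource typically passes `workspaceId` in `settings` and `workspaceKey` in `protectedSettings` (fragment only; the parameter names and API version here are illustrative):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/MicrosoftMonitoringAgent')]",
  "apiVersion": "2021-03-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "[parameters('workspaceId')]"
    },
    "protectedSettings": {
      "workspaceKey": "[parameters('workspaceKey')]"
    }
  }
}
```

Putting the key in `protectedSettings` keeps it encrypted and out of deployment history, which is why the two values occupy different boxes in the template.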
248
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47108-exam-az-500-topic-2-question-40-discussion/) You have an Azure Active Directory (Azure AD) tenant. You have the deleted objects shown in the following table. | Object Type | Object Name | Deletion Date | |--------------|--------------|-----------------| | User | User1 | April 24, 2020 | | User | User2 | May 2, 2020 | | Group | Group1 | April 28, 2020 | | Group | Group2 | May 1, 2020 | On May 4, 2020, you attempt to restore the deleted objects by using the Azure Active Directory admin center. Which two objects can you restore? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Group1 B. Group2 C. User2 D. User1 **
** C and D (User2 and User1) **Explanation:** Azure AD soft-deletes user accounts and retains them for 30 days, during which they can be restored. The restoration attempt on May 4, 2020 is within 30 days of both user deletions: User1 was deleted 10 days earlier (April 24) and User2 two days earlier (May 2), so both users can be restored. Deleted security groups, by contrast, are permanently deleted immediately and cannot be restored; only deleted Microsoft 365 groups get the 30-day soft-delete window. **Why other options are incorrect:** * **A (Group1)** and **B (Group2):** Security groups have no restore window, so neither group can be recovered regardless of how recently it was deleted. Note that all four deletion dates fall inside 30 days of May 4, so the deciding factor here is object type, not elapsed time.
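The 30-day window itself is simple date arithmetic. A quick sketch using the dates from the table (restore attempt: May 4, 2020):

```python
from datetime import date

restore_attempt = date(2020, 5, 4)
deletions = {
    "User1": date(2020, 4, 24),
    "User2": date(2020, 5, 2),
    "Group1": date(2020, 4, 28),
    "Group2": date(2020, 5, 1),
}

for name, deleted_on in deletions.items():
    days = (restore_attempt - deleted_on).days
    status = "inside" if days <= 30 else "outside"
    print(f"{name}: deleted {days} days ago, {status} the 30-day window")
```

All four deletions are within 30 days of May 4, so elapsed time alone cannot distinguish the restorable objects; restorability depends on the object type.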
249
**[View Question](https://www.examtopics.com/discussions/databricks/view/47109-exam-az-500-topic-2-question-44-discussion/)** **** You have an Azure subscription that contains an Azure Active Directory (Azure AD) tenant and a user named User1. The App registrations settings for the tenant are configured as shown in the following exhibit. You plan to deploy an app named App1. You need to ensure that User1 can register App1 in Azure AD. The solution must use the principle of least privilege. Which role should you assign to User1? **
** D. Application developer in Azure AD The Application developer role in Azure AD grants the permissions needed to register applications, even when the "Users can register applications" setting in the tenant is disabled (as shown in the image). This aligns with the principle of least privilege, granting only the minimum required access; options A, B, and C all grant broader access than is needed to register an application. **Why other options are incorrect:** * **A. App Configuration Data Owner for the subscription:** This role manages application configuration data, not application registration, and provides excessive privileges. * **B. Managed Application Contributor for the subscription:** This role manages managed applications, which is not relevant to registering a new application in Azure AD, and grants excessive privileges. * **C. Cloud application administrator in Azure AD:** This role offers extensive control over applications in Azure AD, far exceeding what is required to register a single application, so it violates the principle of least privilege. The discussion shows unanimous agreement that option D is the correct answer.
250
[View Question](https://www.examtopics.com/discussions/databricks/view/47110-exam-az-500-topic-3-question-30-discussion/) You have an Azure subscription that contains a virtual network. The virtual network contains the subnets shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0026200002.png) The subscription contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0026300001.png) You enable just in time (JIT) VM access for all the virtual machines. You need to identify which virtual machines are protected by JIT. Which virtual machines should you identify? A. VM4 only B. VM1 and VM3 only C. VM1, VM3 and VM4 only D. VM1, VM2, VM3, and VM4
C. VM1, VM3, and VM4 only. JIT VM access requires a Network Security Group (NSG) associated with either the VM's NIC or the subnet it's attached to. Examining the provided images, we see: * **VM1:** Resides in Subnet 1 which has an NSG. Therefore, it's protected by JIT. * **VM2:** Resides in Subnet 2 which does *not* have an NSG. Therefore, it's *not* protected by JIT. * **VM3:** Resides in Subnet 3 which has an NSG. Therefore, it's protected by JIT. * **VM4:** Resides in Subnet 1 which has an NSG. Therefore, it's protected by JIT. Therefore, only VM1, VM3, and VM4 meet the prerequisite for JIT VM access protection. Why other options are incorrect: * **A:** Incorrect because it excludes VM1 and VM3, which are protected by JIT. * **B:** Incorrect because it excludes VM4, which is also protected by JIT. * **D:** Incorrect because it includes VM2, which lacks the required NSG for JIT protection. Note: The discussion mentions that an Azure Firewall could also fulfill the requirement instead of an NSG. This information wasn't explicitly part of the original question or images, but is a valid consideration based on real-world Azure configurations.
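The prerequisite check above can be expressed as a one-line filter over the two tables (a sketch; the subnet-to-NSG mapping is taken from the question's exhibits):

```python
# Which subnets have an NSG associated (from the question's exhibits).
subnet_has_nsg = {"Subnet1": True, "Subnet2": False, "Subnet3": True}

# Which subnet each VM is attached to.
vm_subnet = {"VM1": "Subnet1", "VM2": "Subnet2", "VM3": "Subnet3", "VM4": "Subnet1"}

# JIT can protect a VM only if an NSG guards its NIC or its subnet.
jit_protected = [vm for vm, subnet in vm_subnet.items() if subnet_has_nsg[subnet]]
print(jit_protected)  # ['VM1', 'VM3', 'VM4']
```

VM2 is the only machine filtered out, matching answer C.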
251
[View Question](https://www.examtopics.com/discussions/databricks/view/47118-exam-az-500-topic-2-question-55-discussion/) You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant. From the Azure portal, you register an enterprise application. Which additional resource will be created in Azure AD? A. a service principal B. an X.509 certificate C. a managed identity D. a user account
A. a service principal When you register an enterprise application in Azure AD, a service principal is automatically created. The service principal acts as a security identity for the application, defining its access and permissions within the Azure AD tenant. Options B, C, and D are not automatically created upon registering an enterprise application. An X.509 certificate might be *associated* with an application later, but isn't created during registration. Managed identities are separate and require additional configuration. A user account is for individuals, not applications. There is a consensus among the discussion participants that the correct answer is A.
252
[View Question](https://www.examtopics.com/discussions/databricks/view/47119-exam-az-500-topic-3-question-60-discussion/) DRAG DROP - You have an Azure subscription named Sub1. You have an Azure Active Directory (Azure AD) group named Group1 that contains all the members of your IT team. You need to ensure that the members of Group1 can stop, start, and restart the Azure virtual machines in Sub1. The solution must use the principle of least privilege. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0030400001.png)
The correct sequence of actions is: 1. **Create a JSON file that contains the role definition:** This file will specify the permissions (stop, start, restart VMs) without granting unnecessary access. The JSON would define a custom role with only the necessary VM management actions. 2. **Run the New-AzRoleDefinition cmdlet:** This PowerShell cmdlet uses the JSON file created in step 1 to create a new custom Azure role definition in Azure RBAC, scoped to the subscription. 3. **Run the New-AzRoleAssignment cmdlet:** This cmdlet assigns the newly created custom role to Group1 in the Sub1 subscription, granting the IT team members the required permissions. This approach adheres to the principle of least privilege by only granting the specific permissions needed to manage VMs, avoiding broader, potentially risky permissions. **Why other options are incorrect:** The question is a drag-and-drop, so there isn't a list of explicitly wrong options. However, performing the steps out of order or skipping any of these steps would result in failure to grant the correct permissions in a secure manner. For example, attempting to assign a role before defining it would fail. Note: The discussion shows a high degree of consensus on the answer, with multiple users confirming its correctness and providing similar solutions.
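A minimal sketch of the role-definition JSON referenced in step 1 (the role name and subscription ID are hypothetical placeholders):

```json
{
  "Name": "Virtual Machine Operator (Custom)",
  "Description": "Can start, stop, and restart virtual machines only.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
```

The role would then be created with `New-AzRoleDefinition -InputFile .\role.json` and assigned to Group1 with `New-AzRoleAssignment`, keeping the grant limited to exactly these VM actions.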
253
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47120-exam-az-500-topic-2-question-43-discussion/) You have an Azure subscription that is associated with an Azure Active Directory (Azure AD) tenant. When a developer attempts to register an app named App1 in the tenant, the developer receives the error message shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0013000001.png) You need to ensure that the developer can register App1 in the tenant. What should you do for the tenant? A. Modify the Directory properties. B. Set Enable Security defaults to Yes. C. Configure the Consent and permissions settings for enterprise applications. D. Modify the User settings. **
** D. Modify the User settings. To resolve the error and allow the developer to register App1, you need to modify the Azure AD tenant's user settings. Specifically, you must ensure that the setting "Users can register applications" is set to "Yes". The error message in the image indicates a permission issue related to application registration, directly addressed by this setting. **Why other options are incorrect:** * **A. Modify the Directory properties:** This option is incorrect because directory properties don't directly control app registration capabilities. They manage broader tenant-level settings. * **B. Set Enable Security defaults to Yes:** Enabling security defaults enhances security features like MFA but doesn't directly impact the ability to register applications. * **C. Configure the Consent and permissions settings for enterprise applications:** This option focuses on permissions *after* an application is registered, not on the ability to register it in the first place. **Note:** There is some disagreement in the discussion regarding the exact steps, with some users suggesting that the "Users can register applications" setting might need to be explicitly changed to "Yes" (and others suggesting "No"). However, the core concept remains consistent: the solution lies within the user settings of the Azure AD tenant. The likely correct answer is to set the setting to "Yes" to enable application registration for users.
254
[View Question](https://www.examtopics.com/discussions/databricks/view/47126-exam-az-500-topic-4-question-23-discussion/) HOTSPOT - You have an Azure Sentinel workspace that contains an Azure Active Directory (Azure AD) connector, an Azure Log Analytics query named Query1, and a playbook named Playbook1. Query1 returns a subset of security events generated by Azure AD. You plan to create an Azure Sentinel analytic rule based on Query1 that will trigger Playbook1. You need to ensure that you can add Playbook1 to the new rule. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0036300001.png) Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0036400001.png)
The correct answer is to select "Scheduled" as the trigger for the Azure Sentinel analytic rule. The provided images show this as the only selected option in the suggested answer. This is because to create an analytic rule based on a query (like Query1), you need to use a scheduled trigger. While other trigger types exist (like "Microsoft incident creation rule"), they are not suitable for triggering based on the results of a Log Analytics query. The playbook (Playbook1) will then execute when the scheduled query finds matching events. Several commenters on the original post confirm that this is the correct answer and that the question appeared on the AZ-500 exam on multiple occasions. Note: One commenter (Tombarc) mentions that "Microsoft incident creation rule" can be used with a custom analytic rule alongside "Schedule". However, the question specifically asks about creating a rule *based on* Query1, implying a scheduled query execution is required. There is some disagreement on this nuance within the discussion thread.
255
[View Question](https://www.examtopics.com/discussions/databricks/view/47131-exam-az-500-topic-4-question-27-discussion/) DRAG DROP - You have five Azure subscriptions linked to a single Azure Active Directory (Azure AD) tenant. You create an Azure Policy initiative named SecurityPolicyInitiative1. You identify which standard role assignments must be configured on all new resource groups. You need to enforce SecurityPolicyInitiative1 and the role assignments when a new resource group is created. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0037000001.png)
The correct sequence of actions is: 1. **Create an Azure Blueprint definition:** This involves defining the resources, policies (including SecurityPolicyInitiative1), and role assignments that should be deployed consistently. 2. **Publish an Azure Blueprint version:** This makes the blueprint definition available for assignment. Publishing creates a version, allowing for tracking and management of changes over time. 3. **Assign an Azure Blueprint:** This step applies the blueprint to a scope (e.g., a management group or subscription), automatically deploying the defined resources and policies to new resource groups within that scope. The discussion shows some disagreement on whether Azure Policies are enforced *directly* or through Blueprints. While policies themselves are enforced automatically once assigned, Blueprints provide a mechanism for *deploying* and managing consistent sets of resources and policies, making their deployment to new resource groups repeatable and streamlined. Therefore, using Blueprints to deploy the policy and role assignments is the best and most efficient approach in this scenario. The discussion confirms this is the approach most likely to be correct in the context of the exam question, although there are some dissenting voices. Why other options are incorrect: The provided image doesn't list alternative options; only the correct actions within the context of the question. Applying the policy directly without a blueprint would require manual intervention for each new resource group, making it inefficient and inconsistent.
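The three steps above can be sketched with the Az.Blueprint PowerShell module (the blueprint name, folder path, subscription ID, and assignment name are hypothetical; this assumes an authenticated session):

```powershell
# Sketch only: assumes the Az.Blueprint module is installed and you are signed in.
Import-Module Az.Blueprint

$subId = '00000000-0000-0000-0000-000000000000'   # hypothetical subscription ID

# 1. Create (import) a blueprint definition with its policy and role-assignment artifacts
Import-AzBlueprintWithArtifact -Name 'SecurityBaseline' -SubscriptionId $subId `
    -InputPath '.\SecurityBaseline'

# 2. Publish a version of the definition so it can be assigned
$bp = Get-AzBlueprint -SubscriptionId $subId -Name 'SecurityBaseline'
Publish-AzBlueprint -Blueprint $bp -Version '1.0'

# 3. Assign the published blueprint to the subscription
New-AzBlueprintAssignment -Name 'assign-security-baseline' -Blueprint $bp `
    -SubscriptionId $subId -Location 'eastus'
```

The same assignment would be repeated (or scoped to a management group) to cover all five subscriptions.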
256
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47134-exam-az-500-topic-4-question-40-discussion/) You have an Azure subscription that contains an Azure Sentinel workspace. Azure Sentinel is configured to ingest logs from several Azure workloads. A third-party service management platform is used to manage incidents. You need to identify which Azure Sentinel components to configure to meet the following requirements: ✑ When Azure Sentinel identifies a threat, an incident must be created. ✑ A ticket must be logged in the service management platform when an incident is created in Azure Sentinel. Which component should you identify for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038400001.png) *(Image shows a table with two rows labeled "Requirement" and "Component" and two columns, one for each requirement listed above)* Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038500001.png) *(Image shows the completed table with "Analytics rule" selected for the first requirement and "Playbook" selected for the second requirement)* **
** Requirement 1: **Analytics rule** is used to group alerts into incidents. When Azure Sentinel detects a threat (alerts), the analytics rule aggregates these alerts into a single incident. Requirement 2: **Playbook** automates responses to incidents. A playbook can be configured to create a ticket in a third-party service management platform when an incident is triggered in Azure Sentinel. This automates the process of escalating incidents to the external system. Several users in the discussion confirm this answer. **Why other options are incorrect:** The question specifically asks for the Azure Sentinel components to fulfill the requirements. While workbooks, notebooks, etc., are part of the Azure ecosystem, they are not the correct components within Azure Sentinel for incident creation and automated incident response. The provided images in the question and suggested answer explicitly show the correct selections to be Analytics rule and Playbook.
257
[View Question](https://www.examtopics.com/discussions/databricks/view/47161-exam-az-500-topic-2-question-46-discussion/) You need to create a new Azure Active Directory (Azure AD) directory named `12345678.onmicrosoft.com`. The new directory must contain a user named `user12345678` who is configured to sign in by using Azure Multi-Factor Authentication (MFA). Describe the steps to accomplish this task.
The provided solution outlines the steps to create a new Azure AD directory and a user within it, enabling MFA for that user. The process involves: **1. Creating the Azure AD Directory:** * Access the Azure portal and sign in with an account possessing an Azure subscription. * Search for and select "Azure Active Directory." * Choose "Create." * Provide "12345678" as both the Organization name and Initial domain name. This creates the `12345678.onmicrosoft.com` directory. **2. Creating the User:** * Navigate to the Azure Active Directory section in the Azure portal. * Under "Manage," select "Users." * Click "+ New user." * Enter "user12345678" as the User name (and provide a Name). * Select "Create." **3. Enabling MFA for the User:** * The original answer suggests navigating to the user's properties and enabling MFA there. However, the discussion highlights that this method is outdated. The current approach is to create a Conditional Access policy which applies MFA to the user or a group containing the user. This newer method would require configuration of a Conditional Access policy in the Azure portal, specifying the user or group and requiring MFA for sign-in. **Why other options are incorrect (or outdated):** The discussion reveals that the original answer's method for enabling MFA (directly through the user's properties) is no longer the current best practice. While the earlier method might have worked at one point, it's now superseded by using Conditional Access policies for more granular and manageable MFA control. The answer acknowledges this discrepancy.
258
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47180-exam-az-500-topic-4-question-17-discussion/) You have an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. You are assigned the Global administrator role for the tenant. You are responsible for managing Azure Security Center settings. You need to create a custom sensitivity label. What should you do? A. Create a custom sensitive information type. B. Elevate access for global administrators in Azure AD. C. Upgrade the pricing tier of the Security Center to Standard. D. Enable integration with Microsoft Cloud App Security. **
** A. Create a custom sensitive information type. To create a custom sensitivity label, you first need to define a custom sensitive information type. Sensitivity labels are built upon these information types; you can't create a label without first defining what kind of information it applies to. **Why other options are incorrect:** * **B. Elevate access for global administrators in Azure AD:** The question states that the user is already a Global Administrator. Elevating access is unnecessary. * **C. Upgrade the pricing tier of the Security Center to Standard:** Upgrading the Security Center pricing tier is unrelated to creating custom sensitivity labels. * **D. Enable integration with Microsoft Cloud App Security:** While Microsoft Cloud App Security is related to security, enabling its integration does not directly allow for creating custom sensitivity labels. **Note:** There is disagreement in the discussion regarding the correct answer. Some users argue that creating a custom sensitive information type is correct (option A), while others suggest that elevating access (option B) is the solution. The provided explanation supports option A, based on the understanding that sensitivity labels rely on pre-defined information types. The validity of option A may also depend on the specific version of the exam and available services at the time of the exam.
259
[View Question](https://www.examtopics.com/discussions/databricks/view/47261-exam-az-500-topic-2-question-54-discussion/) HOTSPOT - You plan to implement an Azure function named Function1 that will create new storage accounts for containerized application instances. You need to grant Function1 the minimum required privileges to create the storage accounts. The solution must minimize administrative effort. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The solution requires two steps: 1. **System-assigned Managed Identity:** Use a system-assigned managed identity for Function1. This automates the Azure management and lifecycle, minimizing administrative effort compared to manually managing credentials. The identity is tied directly to the Function's lifecycle, simplifying management. 2. **Custom RBAC Role Assignment:** Create a custom Role-Based Access Control (RBAC) role. This allows granting only the *minimum* necessary permissions to create storage accounts, adhering to the principle of least privilege. Pre-built roles often grant excessive permissions, increasing security risks. **Why other options are incorrect:** The question specifically requires minimizing administrative effort and using the minimum required privileges. Using pre-built roles or assigning permissions manually would violate one or both of these requirements. **Note:** The discussion shows some variation in the phrasing of the solution, but the core elements remain consistent: using a system-assigned managed identity and a custom RBAC role for fine-grained permission control.
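A minimal sketch of what such a least-privilege custom role might look like (the role name, subscription ID, and exact action list are illustrative assumptions, not taken from the question):

```json
{
  "Name": "Storage Account Creator (Custom)",
  "Description": "Can create and read storage accounts only.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/write",
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Resources/subscriptions/resourceGroups/read"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
```

This role would then be assigned to Function1's system-assigned managed identity, so the function can create storage accounts but nothing else.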
260
[View Question](https://www.examtopics.com/discussions/databricks/view/47422-exam-az-500-topic-3-question-22-discussion/) You have Azure Resource Manager templates that you use to deploy Azure virtual machines. You need to disable unused Windows features automatically as instances of the virtual machines are provisioned. What should you use? A. device configuration policies in Microsoft Intune B. an Azure Desired State Configuration (DSC) virtual machine extension C. security policies in Azure Security Center D. Azure Logic Apps
B. an Azure Desired State Configuration (DSC) virtual machine extension Azure Desired State Configuration (DSC) is a management platform that allows you to manage the configuration of your servers. It can be used to automatically configure your servers to a desired state, including disabling unused Windows features. Using a DSC extension during VM provisioning ensures the configuration happens as part of the deployment process, automatically applying the desired state. Why other options are incorrect: * **A. device configuration policies in Microsoft Intune:** Intune is primarily for managing mobile devices and applications. While it can manage some aspects of Windows devices, it's not the ideal solution for automating the disabling of Windows features during VM provisioning, as it typically operates on already-deployed systems and requires device enrollment. * **C. security policies in Azure Security Center:** Azure Security Center focuses on security, not on managing the specific configuration state of Windows features. * **D. Azure Logic Apps:** Azure Logic Apps are for building automated workflows. While you *could* potentially create a workflow that manages Windows features via other services, it's an unnecessarily complex approach compared to directly using DSC extensions which are specifically designed for this purpose. Note: The discussion overwhelmingly supports answer B as correct. There is no significant disagreement within the provided discussion.
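A minimal sketch of a DSC configuration that disables an unused Windows feature (the feature name `FS-SMB1` is just an example; substitute whichever features you want removed):

```powershell
# Sketch only: compiled MOF is applied by the DSC VM extension at provisioning time.
Configuration DisableUnusedFeatures {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure = 'Absent' removes the feature if it is present
        WindowsFeature SMB1 {
            Name   = 'FS-SMB1'
            Ensure = 'Absent'
        }
    }
}

# Compile the configuration to a MOF file for the DSC extension to consume
DisableUnusedFeatures -OutputPath '.\DisableUnusedFeatures'
```

Referencing this configuration from the DSC VM extension in the ARM template ensures each new VM instance converges to the desired state during deployment.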
261
[View Question](https://www.examtopics.com/discussions/databricks/view/47431-exam-az-500-topic-3-question-27-discussion/) SIMULATION - You need to configure a virtual network named VNET2 to meet the following requirements: ✑ Administrators must be prevented from deleting VNET2 accidentally. ✑ Administrators must be able to add subnets to VNET2 regularly. To complete this task, sign in to the Azure portal and modify the Azure resources.
To prevent accidental deletion of VNET2, apply a "Delete" lock to it through the Azure portal. Navigate to VNET2's settings, select "Locks," click "Add," choose "Delete" as the lock type, and click "OK". No additional configuration is needed to allow administrators to add subnets; they typically possess this permission by default. There is some disagreement in the discussion regarding subnet addition permissions. While the suggested answer and several highly upvoted comments state that administrators typically have the necessary permissions to add subnets, other comments suggest assigning the Network Contributor role. The provided solution prioritizes the default permissions, as this is sufficient to meet the requirements. However, the exam might expect a different answer depending on specific scenario details or other exam context not provided here.
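The same lock can be created from PowerShell (the resource group name is a hypothetical placeholder; `CanNotDelete` is the API name for the portal's "Delete" lock level):

```powershell
# Sketch only: assumes an authenticated Az session.
New-AzResourceLock -LockName 'DoNotDeleteVNET2' `
    -LockLevel CanNotDelete `
    -ResourceGroupName 'RG1' `
    -ResourceName 'VNET2' `
    -ResourceType 'Microsoft.Network/virtualNetworks'
```

A `CanNotDelete` lock blocks deletion but still allows modifications such as adding subnets, which is exactly what the scenario requires.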
262
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47478-exam-az-500-topic-5-question-10-discussion/) You have the Azure key vaults shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0047000001.png) KV1 stores a secret named Secret1 and a key for a managed storage account named Key1. You back up Secret1 and Key1. To which key vaults can you restore each backup? Select the appropriate options. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0047100001.png) **
** Secret1 and Key1 can only be restored to key vaults within the same Azure subscription and geography as KV1. This includes key vaults in paired regions within that geography. Therefore, you can restore to KV1, KV2, and KV3 only if they are in the same subscription and geography as KV1. **Explanation:** The Microsoft documentation and the discussion strongly support the constraint that Key Vault backups can only be restored within the same Azure subscription and geography. The geography constraint allows for restoration to paired regions within that geography. **Why other options are incorrect:** Restoring to key vaults outside of the original subscription or geography is explicitly stated as impossible by the provided information. While some discussion comments suggest restoring to all key vaults, this contradicts the established Microsoft documentation and consensus. There is some disagreement in the discussion comments regarding which key vaults are viable options, but this answer reflects the most widely accepted interpretation based on the provided information and official documentation.
263
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47516-exam-az-500-topic-5-question-31-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso1812.onmicrosoft.com that contains the users shown in the following table. (Image of user table showing User1-User4, their roles, and whether they are guest users) You create an Azure Information Protection label named Label1. The Protection settings for Label1 are configured as shown in the exhibit. (Image of Label1 protection settings showing "Permissions level: Viewer") Label1 is applied to a file named File1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image showing three statements): 1. User1 can view File1. 2. User3 can print File1. 3. User4 can view File1. **
** Yes, No, Yes * **User1 can view File1 (Yes):** The Azure Information Protection label "Label1" is set to "Viewer" permission level. User1 is a member of the contoso1812.onmicrosoft.com tenant and therefore has the ability to view the file. * **User3 can print File1 (No):** While User3 has Viewer permissions, the Viewer permission level in Azure Information Protection does *not* include the right to print. Multiple users in the discussion confirm this based on Microsoft documentation. * **User4 can view File1 (Yes):** User4 is a guest user, but the question doesn't state any restrictions based on guest user status in relation to the applied AIP label. Therefore, as long as User4 can authenticate (which is implied, although debated in the discussion), they can view the file given the "Viewer" permission level. **Why other options are incorrect:** The provided solution directly contradicts the statement that a user with Viewer permissions can print. The discussion shows a clear consensus on this point, referencing Microsoft's official documentation. There is some initial confusion in the discussion but the ultimately correct answer is supported by multiple users and linked documentation.
264
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47591-exam-az-500-topic-5-question-24-discussion/) You have an Azure subscription that contains an Azure key vault named ContosoKey1. You create users and assign them roles as shown in the following table: | User | Role | |------|------------------------------------| | User 1 | Key Vault Administrator | | User 2 | Key Vault Reader | | User 3 | Key Vault Crypto Officer | | User 4 | Key Vault Contributor | You need to identify which users can perform the following actions: * Delegate permissions for ContosoKey1. * Configure network access to ContosoKey1. Which users should you identify? **
** * **Delegate permissions for ContosoKey1:** User 1 and User 3. User 1 (Key Vault Administrator) has full control over the Key Vault, allowing permission delegation. User 3 (Key Vault Crypto Officer) has permissions to manage cryptographic operations, which implicitly includes managing permissions. * **Configure network access to ContosoKey1:** User 1 and User 4. User 1 (Key Vault Administrator) has full control. User 4 (Key Vault Contributor) has the `Microsoft.KeyVault/*` permission, granting them full access to manage all aspects of the Key Vault including network settings. **Explanation:** The Key Vault Administrator role provides complete access. The Key Vault Contributor role, while not explicitly stating network access management, implicitly allows it due to its comprehensive permissions (`Microsoft.KeyVault/*`). The Crypto Officer role allows for managing cryptographic operations which includes managing access. The Key Vault Reader role only allows read access; it cannot modify permissions or network settings. **Why other options are incorrect:** User 2 (Key Vault Reader) lacks the necessary permissions to modify the key vault's configuration or permissions. **Note:** The discussion reveals some disagreement on the interpretation of the Key Vault Contributor role's capabilities. While the suggested answer and the highly upvoted response are consistent in their assertion that the Key Vault Contributor role allows network access configuration, one commenter expresses doubt regarding the interpretation of the `Microsoft.Resources/subscriptions/resourceGroups/read` permission. This highlights that understanding the specific permissions granted by each role is crucial.
265
[View Question](https://www.examtopics.com/discussions/databricks/view/47630-exam-az-500-topic-4-question-76-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable the diagnostic logs for the storage account. What should you use to retrieve the diagnostics logs? A. Azure Storage Explorer B. SQL query editor in Azure C. File Explorer in Windows D. Azure Security Center
A. Azure Storage Explorer Azure Storage Explorer is a GUI tool specifically designed to interact with Azure Storage accounts. Diagnostic logs are typically stored as blobs within the storage account, and Azure Storage Explorer provides a user-friendly interface to browse and download these logs. Why other options are incorrect: * **B. SQL query editor in Azure:** Diagnostic logs are not stored in a SQL database, so a SQL query editor would be inappropriate. * **C. File Explorer in Windows:** File Explorer cannot directly access Azure Storage resources. An intermediary tool is needed. * **D. Azure Security Center:** While Azure Security Center might *display* some security-related information from the storage account logs, it doesn't directly provide access to all the raw diagnostic logs. Note: The discussion highlights that several other methods can retrieve the logs (AzCopy, Azure CLI, directly from the storage account, and Azure Monitor with KQL). While Azure Storage Explorer is a valid and arguably the most user-friendly option given the question, the other methods are also functional. The question does not specify which tool is *best*, only which tool *can* retrieve the logs.
266
[View Question](https://www.examtopics.com/discussions/databricks/view/47724-exam-az-500-topic-4-question-41-discussion/) HOTSPOT - You have an Azure subscription. You need to create and deploy an Azure policy that meets the following requirements: ✑ When a new virtual machine is deployed, automatically install a custom security extension. ✑ Trigger an autogenerated remediation task for non-compliant virtual machines to install the extension. What should you include in the policy? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038600001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038700001.png)
The correct answer includes selecting "DeployIfNotExists" and assigning a Managed Identity with Contributor access. * **DeployIfNotExists:** This setting ensures that the custom security extension is installed only if it doesn't already exist on the virtual machine. This prevents unnecessary re-installations and potential conflicts. * **Managed Identity with Contributor access:** A managed identity provides an automated way for the policy to interact with Azure resources. Granting it Contributor role allows it the necessary permissions to install the custom extension on the virtual machines. The policy leverages this identity to automatically trigger remediation for non-compliant VMs. Why other options are incorrect: The question is a multiple choice type, and based on the provided information and the overwhelmingly agreed upon answer in the discussion, there are no other valid options provided. The discussion strongly supports "DeployIfNotExists" and a Managed Identity with Contributor access as the correct choices. Note: While the provided discussion shows a high degree of consensus on the answer, there is no alternative answer or reasoning given within the discussion to discuss and refute.
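A skeletal policy rule illustrating the shape of a `deployIfNotExists` effect (the ARM template body that actually installs the extension is left empty here; the role-definition ID shown is the built-in Contributor role):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "deployIfNotExists",
    "details": {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {}
        }
      }
    }
  }
}
```

The `roleDefinitionIds` entry is what the policy's managed identity is granted during assignment, enabling the autogenerated remediation task to deploy the extension.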
267
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47728-exam-az-500-topic-4-question-46-discussion/) DRAG DROP - You have an Azure Sentinel workspace that has an Azure Active Directory (Azure AD) data connector. You are threat hunting suspicious traffic from a specific IP address. You need to annotate an intermediate event stored in the workspace and be able to reference the IP address when navigating through the investigation graph. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0039000001.png) **
** The correct sequence of actions is: 1. **Run a query to identify the event containing the suspicious IP address.** This step is crucial to pinpoint the specific event you want to annotate. Without identifying the event, you can't annotate it. 2. **Create a bookmark for the event.** Bookmarks are used to annotate events in Azure Sentinel, allowing you to add context and notes related to the event. This is necessary for later referencing. 3. **Add the IP address as an entity to the bookmark.** Linking the IP address as an entity to the bookmark enables you to use the IP address as a navigation point within the investigation graph, allowing easier tracking and analysis. The image provided shows these steps but the text within the image is not readable. However, the discussion clearly indicates that this is the correct sequence. Several users have independently confirmed this answer as correct and accurate based on their exam experience. One user mentions that associating the IP address to the alert is important to be able to reference it from the graph. This is implied in the process; the bookmarking steps inherently link the IP address and the event. **Why other options are incorrect:** The provided image shows a set of actions and their order that could be used to answer the question. There is no way from this context to evaluate how other possible options would not follow the correct sequence of these steps or solve the scenario as a whole. The discussion section overwhelmingly confirms the suggested answer's accuracy.
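Step 1 might look like the following KQL against the Azure AD connector's sign-in table (the IP address is a hypothetical example):

```kusto
// Sketch of a hunting query to surface events from a suspicious IP
SigninLogs
| where IPAddress == "198.51.100.7"
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultType
```

A matching row from this result set is what you would bookmark, with the IP address mapped as an entity on the bookmark.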
268
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47738-exam-az-500-topic-5-question-22-discussion/) DRAG DROP - You have an Azure subscription named Sub1 that contains an Azure Storage account named contosostorage1 and an Azure key vault named Contosokeyvault1. You plan to create an Azure Automation runbook that will rotate the keys of contosostorage1 and store them in Contosokeyvault1. You need to implement prerequisites to ensure that you can implement the runbook. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0048800001.jpg) **
** The correct order of actions is: 1. **Create an Azure Automation account:** This creates the environment where the runbook will reside and also automatically creates the "Run As" account needed for authentication. 2. **Run Set-AzKeyVaultAccessPolicy:** This grants the Azure Automation "Run As" account the necessary permissions to access and manage keys within Contosokeyvault1. This step is crucial *before* importing modules, as the modules require proper authentication to function. 3. **Import PowerShell modules:** This adds the required Azure PowerShell modules (such as Az.KeyVault) to the Automation account, enabling the runbook to interact with Azure Key Vault. **Why other options are incorrect:** The suggested answer in the original post and the discussion show some disagreement on the optimal sequence. The original suggestion places importing modules before granting access, which is incorrect because the modules cannot function without proper authentication and authorization. Granting access (Step 2) must precede the module import (Step 3). **Note:** The discussion highlights differing opinions on the correct sequence of steps. While the provided answer reflects a logical and functional approach, there's acknowledged debate on the exact ordering. The key takeaway is that proper authentication and authorization to the Key Vault must be established *before* attempting to use the Key Vault modules.
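Step 2 above might be sketched as follows (the service principal application ID is a hypothetical placeholder for the Automation Run As account's identity; the vault name comes from the question):

```powershell
# Sketch only: grants the Automation account's service principal the rights
# needed to store rotated storage keys as secrets in the vault.
Set-AzKeyVaultAccessPolicy -VaultName 'Contosokeyvault1' `
    -ServicePrincipalName '00000000-0000-0000-0000-000000000000' `
    -PermissionsToSecrets get, list, set
```

With this policy in place, the runbook can write each newly rotated key into Contosokeyvault1 as a secret.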
269
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47740-exam-az-500-topic-5-question-23-discussion/) HOTSPOT - You have an Azure Storage account that contains a blob container named container1 and a client application named App1. You need to enable App1 access to container1 by using Azure Active Directory (Azure AD) authentication. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0049100001.png) *(Image shows a hotspot question with two boxes, Box 1 and Box 2)* Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0049200001.png) *(Image shows the suggested answer with "Register App1" selected for Box 1 and "Configure Access Control (IAM)" selected for Box 2)* **
** Box 1: **Register App1**
Box 2: **Configure Access Control (IAM)**

To enable App1 to access container1 using Azure AD authentication, you must first register App1 as an application in Azure AD. This registration creates a service principal representing App1. You then configure access control (IAM) on the storage account, assigning a suitable role (for example, Storage Blob Data Contributor) to that service principal to grant it the permissions needed to access container1. The discussion shows a consensus on this answer; several users state the solution is correct and that they received the question on the actual exam.

**Why other options are incorrect:** No other options are presented beyond the two boxes to fill in. The process described is the standard way to achieve Azure AD authentication for Azure Blob storage from a client application; omitting either step would prevent the application from authenticating and accessing the container.
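The same two steps can be scripted with Az PowerShell. The app name comes from the question, while the role and the scope path are illustrative examples of a container-level assignment:

```powershell
# Box 1: register App1, which creates a service principal in Azure AD.
$app = New-AzADApplication -DisplayName "App1"
$sp  = New-AzADServicePrincipal -ApplicationId $app.AppId

# Box 2: Access Control (IAM) - grant a data-plane role scoped to container1.
New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/container1"
```

Scoping the role assignment to the container rather than the whole account follows least privilege; a broader scope would also work.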
270
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47780-exam-az-500-topic-4-question-2-discussion/) You have an Azure subscription named Sub1. Sub1 has an Azure Storage account named storage1 that contains the resources shown in the following table.

| Resource Type | Resource Name |
|---|---|
| Container | Container1 |
| Share | Share1 |

You generate a shared access signature (SAS) to connect to the blob service and the file service. Which tool can you use to access the contents in Container1 and Share1 by using the SAS? Select the appropriate options. NOTE: Each correct selection is worth one point. (Image of selectable options is included in the original question but not reproducible here; the suggested answer shows Azure Storage Explorer selected.) **
** Azure Storage Explorer. Azure Storage Explorer is a free standalone app from Microsoft for managing Azure Storage resources. It can connect to both blob storage (Container1) and file storage (Share1) using SAS tokens, as confirmed by the Microsoft documentation linked in the original discussion, so it is the correct tool.

**Why other options are incorrect:** The question does not list other options, but tools that might otherwise be considered are less suitable: command-line tools require more technical expertise, the Azure portal offers less direct access and can be cumbersome for bulk operations, and third-party tools are not guaranteed to support SAS tokens. The suggested answer indicates only Azure Storage Explorer, and there is no discussion disagreeing with it.
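For context, the SAS tokens themselves might be generated with Az PowerShell along these lines (the account key and permissions are placeholders); the resulting tokens are what you would paste into Azure Storage Explorer's "Connect via SAS" dialog:

```powershell
# Placeholder account key; read/list permissions, one-day expiry.
$ctx    = New-AzStorageContext -StorageAccountName "storage1" -StorageAccountKey "<account-key>"
$expiry = (Get-Date).AddDays(1)

# SAS for the blob container and for the file share.
New-AzStorageContainerSASToken -Name "Container1" -Permission "rl" -ExpiryTime $expiry -Context $ctx
New-AzStorageShareSASToken -ShareName "Share1" -Permission "rl" -ExpiryTime $expiry -Context $ctx
```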
271
[View Question](https://www.examtopics.com/discussions/databricks/view/47781-exam-az-500-topic-3-question-39-discussion/) You are testing an Azure Kubernetes Service (AKS) cluster. The cluster is configured as shown in the exhibit. (Click the Exhibit tab.) You plan to deploy the cluster to production. You disable HTTP application routing. You need to implement application routing that will provide reverse proxy and TLS termination for AKS services by using a single IP address. What should you do? A. Create an AKS Ingress controller. B. Install the container network interface (CNI) plug-in. C. Create an Azure Standard Load Balancer. D. Create an Azure Basic Load Balancer.
A. Create an AKS Ingress controller.

An AKS Ingress controller acts as a reverse proxy, routing traffic to services within the AKS cluster and terminating TLS. This fulfills the requirement of using a single IP address for application routing while providing both reverse proxy and TLS termination. The discussion participants overwhelmingly support this answer.

Why other options are incorrect:

* **B. Install the container network interface (CNI) plug-in:** CNI plug-ins handle network connectivity *within* the Kubernetes cluster; they do not provide external routing or TLS termination.
* **C. Create an Azure Standard Load Balancer:** A load balancer can distribute traffic, but it does not inherently provide the reverse proxy functionality or TLS termination needed; it would require additional configuration beyond the scope of the question.
* **D. Create an Azure Basic Load Balancer:** Like the Standard Load Balancer, a Basic Load Balancer lacks built-in reverse proxy and TLS termination.

Note: The discussion shows unanimous agreement on answer A, with multiple users reporting encountering this question on the AZ-500 exam.
272
[View Question](https://www.examtopics.com/discussions/databricks/view/47802-exam-az-500-topic-3-question-52-discussion/) You are securing access to the resources in an Azure subscription. A new company policy states that all the Azure virtual machines in the subscription must use managed disks. You need to prevent users from creating virtual machines that use unmanaged disks. What should you use? A. Azure Monitor B. Azure Policy C. Azure Security Center D. Azure Service Health
B. Azure Policy

Azure Policy is the correct answer because it allows you to define and enforce rules for resource properties during creation and updates. By creating a policy that mandates managed disks for virtual machines and assigning it to the subscription, any attempt to create a VM with unmanaged disks will be blocked.

Why other options are incorrect:

* **A. Azure Monitor:** Collects and analyzes telemetry from Azure resources; it does not control resource creation or configuration.
* **C. Azure Security Center:** Focuses on security threats and vulnerabilities. It can influence resource configuration indirectly through recommendations, but it is not the tool for enforcing specific resource properties such as disk type.
* **D. Azure Service Health:** Provides information about the health and availability of Azure services; it cannot control resource creation.

There is a consensus among the discussion participants that Azure Policy is the correct answer.
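As an illustrative sketch, assigning the built-in audit policy could look like the following. Note that to actually *block* creation (a Deny effect), a custom policy definition would be needed, and the display-name lookup assumes the built-in definition's current wording:

```powershell
# Find the built-in definition by display name (wording may change over time).
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Audit VMs that do not use managed disks" }

# Assign it at subscription scope (placeholder subscription ID).
New-AzPolicyAssignment -Name "require-managed-disks" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/<sub-id>"
```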
273
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47803-exam-az-500-topic-4-question-70-discussion/) SIMULATION - A user named Debbie has the Azure app installed on her mobile device. You need to ensure that [email protected] is alerted when a resource lock is deleted. To complete this task, sign in to the Azure portal. **
** To alert [email protected] when a resource lock is deleted, configure an alert rule in Azure Monitor: create a new alert rule, set the scope (select the relevant subscription and filter by the resource type "Management locks"), define the condition ("Delete management locks"), and create an action group. Configure the action group to send Azure app push notifications to [email protected]. The suggested answer provides a step-by-step guide to accomplish this.

**Why other options are incorrect:** The discussion suggests several slightly different approaches, but the core steps remain the same: use Azure Monitor to create an alert rule that triggers on the deletion of management locks, and configure an action group to notify the specified user via the Azure app. The differences are minor variations on the same process, not fundamentally different solutions.

**Note:** The discussion includes conflicting opinions on whether the AZ-500 exam actually includes simulation questions. This question, however, is presented as a simulation.
274
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47816-exam-az-500-topic-4-question-11-discussion/) SIMULATION - You need to email an alert to a user named [email protected] if the average CPU usage of a virtual machine named VM1 is greater than 70 percent for a period of 15 minutes. To complete this task, sign in to the Azure portal. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0034200001.jpg) **
** The provided screenshot and initial suggested answer describe an outdated method for creating alerts in the Azure portal. The current method is to create a new alert rule and associate it with an action group that sends email notifications:

1. **Navigate to the VM:** In the Azure portal, locate and select the virtual machine named VM1.
2. **Create a new alert rule:** Select "Alerts" (the exact wording may vary slightly) and click "New alert rule." Do not use the deprecated "Alerts (Classic)" option.
3. **Configure the condition:** "CPU Percentage" greater than 70, with a period of 15 minutes.
4. **Create or select an action group:** Specify the notification method and add [email protected] as a recipient.
5. **Save the alert rule:** Complete the creation process.

**Why other options are incorrect:** The original suggested answer using "Alerts (Classic)" is deprecated and will likely not work in a modern Azure environment; multiple users in the discussion point this out. The comments also note that creating the alert rule alone is not sufficient: an action group is required to deliver the email notification, so any solution that omits it is incomplete. The fact that the simulation does not require a login is irrelevant to how the alert is set up.
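The same rule can be sketched in Az PowerShell. Resource group, rule names, and the recipient address are placeholders, and cmdlet parameters may vary between Az.Monitor versions:

```powershell
# Action group with an email receiver (placeholder address).
$receiver = New-AzActionGroupReceiver -Name "email-admin1" -EmailReceiver -EmailAddress "<admin1-address>"
$group    = Set-AzActionGroup -ResourceGroupName "RG1" -Name "AG1" -ShortName "AG1" -Receiver $receiver

# Metric alert: average Percentage CPU > 70 over a 15-minute window, evaluated every 5 minutes.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 70
Add-AzMetricAlertRuleV2 -Name "VM1-HighCPU" -ResourceGroupName "RG1" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1" `
    -WindowSize 0:15 -Frequency 0:5 -Condition $criteria -Severity 3 `
    -ActionGroupId $group.Id
```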
275
[View Question](https://www.examtopics.com/discussions/databricks/view/47843-exam-az-500-topic-2-question-18-discussion/) DRAG DROP - You create an Azure subscription with Azure AD Premium P2. You need to ensure that you can use Azure Active Directory (Azure AD) Privileged Identity Management (PIM) to secure Azure AD roles. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0008100001.png)
The correct sequence of actions, according to the suggested answer image ([Image](https://www.examtopics.com/assets/media/exam-media/04258/0008100002.png)), is:

1. **Enable PIM for Azure AD roles:** The fundamental step that activates PIM's functionality for managing Azure AD roles.
2. **Verify your identity using multi-factor authentication (MFA):** A security requirement before you can change PIM settings and assign roles.
3. **Sign up PIM for Azure AD roles:** Completes the activation, allowing you to manage the roles through PIM.

The discussion notes that the question may be outdated: several comments indicate that PIM is now enabled automatically in many tenants, which can make these steps unnecessary in current Azure environments. The answer above reflects the suggested solution, but its accuracy depends on the specific environment configuration.
276
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47844-exam-az-500-topic-4-question-42-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0038700002.png) You need to identify which initiatives and policies you can add to Subscription1 by using Azure Security Center. What should you identify? A. Policy1 and Policy2 only B. Initiative1 only C. Initiative1 and Initiative2 only D. Initiative1, Initiative2, Policy1, and Policy2 **
** C. Initiative1 and Initiative2 only

**Explanation:** The discussion shows disagreement on the correct answer. The majority view supports option C, arguing that Azure Security Center allows you to add *initiatives* (collections of policies) but not individual policies directly. Some users claim custom policies can be added, but the prevailing and more strongly supported argument is that you add initiatives, which contain the policies. Given the options presented, only Initiative1 and Initiative2 can be added directly to Subscription1 via Azure Security Center.

**Why other options are incorrect:**

* **A. Policy1 and Policy2 only:** Azure Security Center primarily works with initiatives, which group policies together; you do not add individual policies directly.
* **B. Initiative1 only:** The question implies multiple initiatives are available, and nothing in the context rules out adding more than one.
* **D. Initiative1, Initiative2, Policy1, and Policy2:** You can create custom policies (which ultimately belong to initiatives), but you do not add them independently to a subscription through Azure Security Center.

**Note:** There is clear disagreement in the discussion regarding the ability to add custom policies independently. The answer reflects the majority opinion and the most likely interpretation of Azure Security Center's typical functionality.
277
**** [View Question](https://www.examtopics.com/discussions/databricks/view/47847-exam-az-500-topic-4-question-51-discussion/) You have an Azure subscription that contains the resources shown in the following table.

| Resource Group | Resource | Resource Type | Status |
|---|---|---|---|
| RG1 | VM1 | Virtual Machines | Stopped |
| RG1 | VM2 | Virtual Machines | Stopped |

VM1 and VM2 are stopped. You create an alert rule that has the following settings:

- Resource: RG1
- Condition: All Administrative operations
- Actions: Action groups configured for this alert rule: ActionGroup1
- Alert rule name: Alert1

You create an action rule that has the following settings:

- Scope: VM1
- Filter criteria: Resource Type = "Virtual Machines"
- Define on this scope: Suppression
- Suppression config: From now (always)
- Name: ActionRule1

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

1. The scope for the action rule is set to VM1 and is set to suppress alerts indefinitely.
2. The scope for the action rule is not set to VM2.
3. Adding a tag is not an administrative operation.
Yes, Yes, No.

1. **Yes:** The action rule explicitly targets VM1 and is configured to suppress alerts indefinitely ("Suppression config: From now (always)").
2. **Yes:** The action rule's scope is limited to VM1, so it does not apply to VM2.
3. **No:** The discussion shows conflicting opinions, but a strong majority, supported by activity log evidence, indicates that adding a tag *is* an administrative operation in Azure. The activity log records `"category": {"value": "Administrative", "localizedValue": "Administrative"}` when a tag is added.

**Why other options are incorrect (based on the provided information and discussion):** The discussion reveals a debate over whether adding a tag is an administrative operation. While some initially disagreed, the activity log evidence and user testing strongly support the consensus that it is, so the answer reflects the preponderance of evidence.

Note: The images are part of the original question, but all text needed to answer is included above.
278
**** [View Question](https://www.examtopics.com/discussions/databricks/view/48169-exam-az-500-topic-5-question-18-discussion/) HOTSPOT - You need to create an Azure key vault. The solution must ensure that any object deleted from the key vault be retained for 90 days. How should you complete the command? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0048300001.png) *(Image shows two blank boxes labeled "Box 1:" and "Box 2:" to be filled in)* **
** Box 1: `-EnablePurgeProtection`
Box 2: `-EnableSoftDelete`

To ensure objects deleted from the key vault are retained for 90 days, both switches are required. `-EnableSoftDelete` turns on soft delete, allowing recovery of deleted objects within a retention period (90 days by default, though configurable). `-EnablePurgeProtection` prevents permanent deletion (purging) of the vault and its contents before the retention period expires.

**Why other options are incorrect (or partially incorrect):** The discussion debates whether `-EnableSoftDelete` is necessary, since soft delete is enabled by default in newer versions of Azure Key Vault. The question does not state the Azure version, so explicitly including `-EnableSoftDelete` guarantees the 90-day retention even where it is not the default, in line with the documentation referenced in the original question. Omitting `-EnablePurgeProtection` would leave the vault open to immediate, permanent deletion, failing the 90-day requirement. Both options are therefore needed.
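Filled in, the command might read as follows (vault name, resource group, and location are placeholders). Note that current Az.KeyVault versions always enable soft delete and retired `-EnableSoftDelete` in favor of `-SoftDeleteRetentionInDays`:

```powershell
# Older Az versions (as the question assumes):
New-AzKeyVault -Name "Contosokv1" -ResourceGroupName "RG1" -Location "eastus" `
    -EnablePurgeProtection `
    -EnableSoftDelete

# Current Az versions: soft delete is implicit; set the retention window explicitly.
New-AzKeyVault -Name "Contosokv1" -ResourceGroupName "RG1" -Location "eastus" `
    -EnablePurgeProtection -SoftDeleteRetentionInDays 90
```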
279
[View Question](https://www.examtopics.com/discussions/databricks/view/48170-exam-az-500-topic-4-question-9-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable the diagnostic logs for the storage account. What should you use to retrieve the diagnostics logs? A. the Security & Compliance admin center B. Azure Security Center C. Azure Cosmos DB explorer D. AzCopy
D. AzCopy

AzCopy is a command-line utility for copying blobs or files to or from Azure Storage. Diagnostic logs for a storage account are stored in blob storage (the $logs container), so AzCopy is the appropriate tool to retrieve them.

Why other options are incorrect:

* **A. the Security & Compliance admin center:** Manages security and compliance settings; it does not retrieve storage account diagnostic logs.
* **B. Azure Security Center:** Focuses on security monitoring and threat protection. It might surface alerts related to storage account security, but it is not the tool for retrieving diagnostic logs.
* **C. Azure Cosmos DB explorer:** Manages Azure Cosmos DB databases, which is unrelated to Azure Storage diagnostic logs.

Note: The discussion shows unanimous agreement on the correct answer, D. AzCopy.
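A typical retrieval command looks like this; the account name and SAS token are placeholders. The single quotes matter when running from PowerShell, since `$logs` would otherwise be expanded as a variable:

```powershell
# Download the classic storage analytics logs from the hidden $logs container.
azcopy copy 'https://<account>.blob.core.windows.net/$logs?<sas-token>' 'C:\StorageLogs' --recursive
```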
280
**** [View Question](https://www.examtopics.com/discussions/databricks/view/48183-exam-az-500-topic-5-question-26-discussion/) You have an Azure subscription named Sub1. Sub1 contains an Azure virtual machine named VM1 that runs Windows Server 2016. You need to encrypt VM1 disks by using Azure Disk Encryption. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0049600001.png) *(Image contains a drag-and-drop interface with options: Create an Azure Key Vault; Configure access policies for the Azure Key Vault; Install the Azure Disk Encryption extension; Run Set-AzVMDiskEncryptionExtension)* **
** The correct sequence of actions is:

1. **Create an Azure Key Vault:** Azure Disk Encryption relies on Key Vault to store and manage the encryption keys, so the vault must exist before anything else.
2. **Configure access policies for the Azure Key Vault:** Access policies grant Azure Disk Encryption the permissions it needs to use the keys in the vault for encrypting and decrypting the VM disks.
3. **Run Set-AzVMDiskEncryptionExtension:** This PowerShell cmdlet performs the actual disk encryption, leveraging the Key Vault and its configured policies.

**Why other orderings are incorrect:**

* **Installing the Azure Disk Encryption extension before creating the Key Vault:** The extension needs a Key Vault in which to store and manage the encryption keys.
* **Configuring access policies after running Set-AzVMDiskEncryptionExtension:** The encryption process depends on the access policies already being in place; without them, the command fails.

The process is sequential and each step depends on the previous one.

**Note:** The discussion section shows unanimous agreement on the provided solution.
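The three steps can be sketched in Az PowerShell as follows. Names and location are placeholders, and `-EnabledForDiskEncryption` at vault creation covers the access-policy step (shown separately as a comment for an existing vault):

```powershell
# 1. Create a key vault enabled for disk encryption (placeholder names).
$kv = New-AzKeyVault -Name "KV-ADE-Demo" -ResourceGroupName "RG1" `
    -Location "eastus" -EnabledForDiskEncryption

# 2. If the vault already exists, set the access policy separately:
# Set-AzKeyVaultAccessPolicy -VaultName $kv.VaultName -EnabledForDiskEncryption

# 3. Encrypt VM1's disks using the vault.
Set-AzVMDiskEncryptionExtension -ResourceGroupName "RG1" -VMName "VM1" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId
```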
281
[View Question](https://www.examtopics.com/discussions/databricks/view/48254-exam-az-500-topic-2-question-38-discussion/) You need to register the app named App12345678 to Azure Active Directory (Azure AD) using the sign-on URL `https://app.contoso.com`. What steps are required to complete this task in the Azure portal? The image below shows part of the process. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0012400001.jpg)
To register App12345678 in Azure AD with the sign-on URL `https://app.contoso.com`, follow these steps:

1. **Sign in to the Azure portal** using your credentials.
2. **Navigate to Azure Active Directory:** Select Azure Active Directory from the services list.
3. **Access App registrations:** Select "App registrations".
4. **Create a new registration:** Click "New registration".
5. **Provide application details:**
   * Name the application "App12345678".
   * Choose a supported account type (this will depend on your specific requirements).
   * Under "Redirect URI", select "Web" and enter `https://app.contoso.com` as the URI.
6. **Register the application:** Click "Register".

This registers the application in Azure AD and configures the specified sign-on URL. The discussion shows some disagreement about whether simulations like this currently appear in the exam: some users report simulations returned as of March 24, 2023, while others say they were absent for a period after COVID. The suggestion in the discussion to use "Branding & properties > Home page URL" is incorrect because the question asks to configure the *sign-on* URL, which is handled during application registration, not in branding settings.
282
[View Question](https://www.examtopics.com/discussions/databricks/view/48270-exam-az-500-topic-3-question-11-discussion/) You have Azure Resource Manager templates that you use to deploy Azure virtual machines. You need to disable unused Windows features automatically as instances of the virtual machines are provisioned. What should you use? A. device configuration policies in Microsoft Intune B. Azure Automation State Configuration C. security policies in Azure Security Center D. device compliance policies in Microsoft Intune
B. Azure Automation State Configuration

Azure Automation State Configuration is the correct answer because it manages the configuration of Azure VMs through PowerShell Desired State Configuration (DSC). It can automatically apply and enforce a desired state, including disabling unused Windows features, as VM instances are provisioned, which matches the requirement exactly.

Why other options are incorrect:

* **A. device configuration policies in Microsoft Intune:** Intune primarily manages endpoints (laptops, phones, and so on), not Azure VMs during deployment through ARM templates. Intune *can* manage VMs, but it is not the optimal or most direct solution for this scenario.
* **C. security policies in Azure Security Center:** Security Center focuses on security-related configuration. Disabling unused features can improve security, but Security Center is not the tool for managing low-level operating system configuration during VM deployment.
* **D. device compliance policies in Microsoft Intune:** Like A, this is endpoint management rather than VM configuration during provisioning via ARM templates.

The discussion overwhelmingly supports option B; there is consensus that Azure Automation State Configuration is the best approach for this scenario.
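To make the mechanism concrete, a minimal DSC configuration of the kind Azure Automation State Configuration compiles and assigns to nodes might look like this; the feature name is only an example of an unused feature to remove:

```powershell
Configuration DisableUnusedFeatures {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost" {
        # Example: ensure the legacy SMB1 feature is not installed.
        WindowsFeature SMB1 {
            Name   = "FS-SMB1"
            Ensure = "Absent"
        }
    }
}
```

Once uploaded and compiled in the Automation account, the node configuration is assigned to VMs so the desired state is enforced as instances come online.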
283
[View Question](https://www.examtopics.com/discussions/databricks/view/48539-exam-az-500-topic-5-question-33-discussion/) Your network contains an on-premises Active Directory domain named contoso.com. The domain contains a user named User1. You have an Azure subscription that is linked to an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains an Azure Storage account named storage1. Storage1 contains an Azure file share named share1. Currently, the domain and the tenant are not integrated. You need to ensure that User1 can access share1 by using his domain credentials. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0050800001.png)
The correct sequence of actions is:

1. **Implement Azure AD Connect:** Synchronizes the on-premises Active Directory with the Azure AD tenant, allowing users in contoso.com to authenticate with their domain credentials.
2. **Enable an AD source for Azure file shares:** Configures Azure Files to trust the identities synchronized in the previous step; without this, authentication cannot work.
3. **Assign share-level permissions for share1 to User1:** Grants User1 the permissions needed to access share1. Even with working authentication, User1 has no access until this step.

The suggested answer and the highly upvoted comments in the discussion support this sequence.

**Why other options are incorrect (implied):** This is a drag-and-drop question, so any other order or any missing step would prevent User1 from accessing share1 with his domain credentials. The directory synchronization and Azure Files configuration must be in place before permissions are assigned.
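Steps 2 and 3 can be sketched with the AzFilesHybrid module and a role assignment. The OU, the user's UPN, and the subscription ID are illustrative assumptions, not values from the question:

```powershell
# Step 2: enable AD DS authentication on the storage account (AzFilesHybrid module).
Import-Module AzFilesHybrid
Join-AzStorageAccountForAuth -ResourceGroupName "RG1" -StorageAccountName "storage1" `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=Storage,DC=contoso,DC=com"

# Step 3: share-level permission for User1 on share1.
New-AzRoleAssignment -SignInName "user1@contoso.com" `
    -RoleDefinitionName "Storage File Data SMB Share Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/storage1/fileServices/default/fileshares/share1"
```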
284
[View Question](https://www.examtopics.com/discussions/databricks/view/48626-exam-az-500-topic-4-question-10-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. [Image depicting a table of VMs - details not provided as image is not visible] From Azure Security Center, you turn on Auto Provisioning. You deploy the virtual machines shown in the following table. [Image depicting a table of VMs - details not provided as image is not visible] On which virtual machines is the Microsoft Monitoring Agent installed? A. VM3 only B. VM1 and VM3 only C. VM3 and VM4 only D. VM1, VM2, VM3, and VM4
D. VM1, VM2, VM3, and VM4

The correct answer is D because enabling Auto Provisioning in Azure Security Center installs the required agents, including the Microsoft Monitoring Agent (succeeded by the Azure Monitor Agent), on all existing and newly deployed virtual machines. The question states that Auto Provisioning was turned on *before* VM3 and VM4 were deployed, so the agent is installed on all four VMs. The naming change to Azure Monitor Agent (AMA), mentioned in the discussion, does not alter the outcome: auto provisioning installs the necessary monitoring agents on every VM, existing or new. The consensus among the users in the discussion supports answer D.
285
**** [View Question](https://www.examtopics.com/discussions/databricks/view/49542-exam-az-500-topic-3-question-5-discussion/) You need to perform a full malware scan every Sunday at 02:00 on a virtual machine named VM1 by using Microsoft Antimalware for Virtual Machines. To complete this task, sign in to the Azure portal. Images below show part of the process. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022700001.jpg) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0022900001.png) **
** To configure a full malware scan on VM1 every Sunday at 2:00 AM using the Microsoft Antimalware extension in the Azure portal, follow these steps:

1. **Access the VM:** In the Azure portal, navigate to the VM1 blade.
2. **Add the extension:** Go to the Extensions section and click "Add".
3. **Select the extension:** Choose the "Microsoft Antimalware" extension and click "Create".
4. **Configure the schedule:** In the installation settings, enable the scheduled scan. Set "Scan type" to "Full", "Scan day" to "Sunday", and "Scan time" to 120. The time is expressed in minutes after midnight in 60-minute increments: 0 = 12:00 AM, 60 = 1:00 AM, 120 = 2:00 AM, and so on.

**Why other options are incorrect:** The question requires a *full* scan at a *specific time* on a *specific day*; any configuration that omits one of these three criteria is incomplete. Some comments note potential confusion around the time value (120 representing 2:00 AM), which the provided images and the explanation above clarify.

**Note:** Some comments question the age and potential obsolescence of this question and solution. The answer above reflects the information presented in the original content.
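Outside the portal, the same schedule can be pushed with `Set-AzVMExtension`. The JSON follows the IaaSAntimalware settings schema (`day` 1 = Sunday, `time` in minutes after midnight), though the handler version and exact schema should be verified against the extension's current documentation; resource names and location are placeholders:

```powershell
# Scheduled full scan: Sunday (day 1) at 02:00 (time 120 minutes after midnight).
$settings = '{ "AntimalwareEnabled": true,
               "ScheduledScanSettings": { "isEnabled": true, "day": 1, "time": 120, "scanType": "Full" } }'

Set-AzVMExtension -ResourceGroupName "RG1" -VMName "VM1" -Location "eastus" `
    -Name "IaaSAntimalware" -Publisher "Microsoft.Azure.Security" `
    -ExtensionType "IaaSAntimalware" -TypeHandlerVersion "1.3" `
    -SettingString $settings
```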
286
[View Question](https://www.examtopics.com/discussions/databricks/view/50592-exam-az-500-topic-5-question-40-discussion/) DRAG DROP - You have an Azure Storage account named storage1 and an Azure virtual machine named VM1. VM1 has a premium SSD managed disk. You need to enable Azure Disk Encryption for VM1. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange then in the correct order. Select and Place: (Image contains the following options: Create an Azure Key Vault; Set the Key Vault access policy to Enable access to Azure Disk Encryption for volume encryption; Run the Set-AzVMDiskEncryptionExtension cmdlet; Install the Azure Disk Encryption extension on VM1; Configure the Azure Disk Encryption extension on VM1)
The correct sequence of actions to enable Azure Disk Encryption for VM1 is: 1. **Create an Azure Key Vault:** This is the first step because you need a Key Vault to store the encryption keys. 2. **Set the Key Vault access policy to Enable access to Azure Disk Encryption for volume encryption:** This grants Azure Disk Encryption the necessary permissions to use the keys stored in the Key Vault. 3. **Run the Set-AzVMDiskEncryptionExtension cmdlet:** This cmdlet applies the encryption using the keys stored in the Key Vault. The other options relating to installing and configuring the extension are implied within this step. **Why other options are incorrect (or not the first step):** While installing and configuring the Azure Disk Encryption extension are necessary steps in the overall process, they aren't the first steps. The Key Vault must exist and be properly configured before the extension can be utilized. Therefore, directly installing and configuring the extension before creating and configuring the Key Vault is incorrect. Note: The provided solution is based on the user insights in the discussion. There is no inherent disagreement in the provided text, all users agree on the final solution.
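The three steps can be sketched in Azure PowerShell (the vault, resource group, and region names are illustrative assumptions):

```powershell
# Hedged sketch of the three-step sequence; names and region are placeholders.
# 1. Create the Key Vault to hold the encryption keys
New-AzKeyVault -Name "KV1" -ResourceGroupName "RG1" -Location "eastus"

# 2. Allow Azure Disk Encryption to use the vault for volume encryption
Set-AzKeyVaultAccessPolicy -VaultName "KV1" -EnabledForDiskEncryption

# 3. Apply the encryption to VM1 using keys stored in the vault
$kv = Get-AzKeyVault -VaultName "KV1" -ResourceGroupName "RG1"
Set-AzVMDiskEncryptionExtension -ResourceGroupName "RG1" -VMName "VM1" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri `
    -DiskEncryptionKeyVaultId $kv.ResourceId
```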
287
[View Question](https://www.examtopics.com/discussions/databricks/view/51273-exam-az-500-topic-4-question-14-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. (Image 1 shows a table of VMs, but the image itself is not included in this text). From Azure Security Center, you turn on Auto Provisioning. You deploy the virtual machines shown in the following table. (Image 2 shows a table of VMs, but the image itself is not included in this text). On which virtual machines is the Log Analytics Agent installed? A. VM3 only B. VM1 and VM3 only C. VM3 and VM4 only D. VM1, VM2, VM3, and VM4
D. VM1, VM2, VM3, and VM4 Explanation: When Auto Provisioning is turned on in Azure Security Center, the Log Analytics agent is automatically installed on all virtual machines within the subscription. Therefore, the agent will be installed on VM1, VM2, VM3, and VM4. Note that some discussion comments mention that the Log Analytics Agent is deprecated, and has been replaced by the Azure Monitor Agent. However, based solely on the provided question and context, the answer remains D. Why other options are incorrect: * **A. VM3 only:** This is incorrect because Auto Provisioning installs the agent on all VMs. * **B. VM1 and VM3 only:** This is incorrect because Auto Provisioning installs the agent on all VMs. * **C. VM3 and VM4 only:** This is incorrect because Auto Provisioning installs the agent on all VMs. Note: The discussion highlights that the question may be a duplicate or a repeat, and that the Log Analytics Agent is deprecated. This information is noted for context, but the answer provided is based solely on the information presented in the question itself.
288
[View Question](https://www.examtopics.com/discussions/databricks/view/51288-exam-az-500-topic-4-question-57-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable the diagnostic logs for the storage account. What should you use to retrieve the diagnostics logs? A. Azure Security Center B. Azure Monitor C. the Security admin center D. Azure Storage Explorer
D. Azure Storage Explorer Azure Storage Explorer is a tool specifically designed for managing Azure Storage accounts. It provides a user-friendly interface to view diagnostic logs (which are a type of storage analytics log). Why other options are incorrect: * **A. Azure Security Center:** While Azure Security Center deals with security, it's not the primary tool for retrieving storage account diagnostic logs. * **B. Azure Monitor:** Azure Monitor is a centralized monitoring service. Diagnostic logs *can* be routed to Azure Monitor (specifically a Log Analytics workspace or a storage account), but retrieving them directly from Monitor might require more complex queries, unlike the direct access provided by Storage Explorer. The discussion highlights this ambiguity. * **C. the Security admin center:** Similar to Azure Security Center, it focuses on broader security management and not specifically storage account diagnostics. Note: There is some disagreement in the discussion regarding the correct answer. Some users suggest Azure Monitor or even AzCopy as alternatives, depending on where the diagnostic logs are configured to be sent. The explanation here takes the most direct, user-friendly approach: Storage Explorer gives immediate access to the stored diagnostic logs without additional queries or configuration.
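For context, classic storage diagnostic logs are written to a hidden blob container named `$logs`, which Storage Explorer surfaces directly; a hedged PowerShell sketch of listing that same container (account and resource group names assumed):

```powershell
# Hedged sketch: classic storage diagnostic logs land in the hidden
# "$logs" blob container. Single quotes keep PowerShell from treating
# $logs as a variable. "RG1" is a placeholder.
$ctx = (Get-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1").Context
Get-AzStorageBlob -Container '$logs' -Context $ctx |
    Select-Object Name, LastModified
```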
289
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51289-exam-az-500-topic-4-question-58-discussion/) You have an Azure subscription that contains the resources shown in the following table. | Resource | Resource Type | |-----------------|----------------| | VM1 | Virtual Machine | | VNET1 | Virtual Network | | storage1 | Storage Account| | Vault1 | Key Vault | You plan to enable Azure Defender for the subscription. Which resources can be protected by using Azure Defender? A. VM1, VNET1, storage1, and Vault1 B. VM1, VNET1, and storage1 only C. VM1, storage1, and Vault1 only D. VM1 and VNET1 only E. VM1 and storage1 only **
** C. VM1, storage1, and Vault1 only Azure Defender can protect virtual machines (VM1), storage accounts (storage1), and Key Vaults (Vault1). However, there is disagreement regarding whether Azure Defender directly protects Virtual Networks (VNET1). While some sources suggest that Azure Defender offers network security *recommendations* and monitoring for VNETs, it does not offer the same level of direct protection as it does for VMs, storage accounts, and Key Vaults. The consensus from the discussion leans towards option C because the direct protection of the VNET itself is not consistently confirmed. **Why other options are incorrect:** * **A:** Incorrect because VNET protection by Azure Defender is debated and generally understood to be indirect through recommendations and monitoring, not direct protection. * **B:** Incorrect for the same reason as A; VNET protection is not consistently confirmed as direct protection by Azure Defender. * **D:** Incorrect because it omits storage1 and Vault1, both of which are protectable by Azure Defender. * **E:** Incorrect because it omits Vault1, which is protectable by Azure Defender. **Note:** The discussion reveals conflicting opinions on whether Azure Defender directly protects Virtual Networks. The answer reflects the majority opinion and the understanding that while network security *recommendations* are provided for VNETs, it's not a direct protection in the same manner as for other listed resources.
290
[View Question](https://www.examtopics.com/discussions/databricks/view/52624-exam-az-500-topic-3-question-13-discussion/) You have Azure Resource Manager templates that you use to deploy Azure virtual machines. You need to disable unused Windows features automatically as instances of the virtual machines are provisioned. What should you use? A. device configuration policies in Microsoft Intune B. an Azure Desired State Configuration (DSC) virtual machine extension C. application security groups D. device compliance policies in Microsoft Intune
The correct answer is B. an Azure Desired State Configuration (DSC) virtual machine extension. DSC extensions allow for configuration management during VM provisioning. This directly addresses the requirement to disable unused Windows features automatically *as* the VMs are created. Intune (options A and D) manages devices after they are deployed and enrolled, which is not the scenario described in the question. The question specifies the need for automation *during* provisioning. Application security groups (option C) are network security constructs, unrelated to managing Windows features. There is some disagreement in the discussion regarding the correct answer. Some users incorrectly suggest Intune as a solution. However, the consensus among the majority of users, and based on the functionality of each option, points to Azure DSC as the correct solution for automating the disabling of unused Windows features during VM provisioning.
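A minimal sketch of the kind of DSC configuration the VM extension would apply at provisioning time (the feature name is illustrative, not from the question):

```powershell
# Hedged sketch: a DSC configuration that removes an unused Windows
# feature. The feature name (SMB1) is only an example.
Configuration DisableUnusedFeatures {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost" {
        WindowsFeature SMB1 {
            Name   = "FS-SMB1"
            Ensure = "Absent"   # uninstall the feature if present
        }
    }
}
```

Referencing a configuration like this from the DSC VM extension in the ARM template is what makes the change happen automatically as each instance is provisioned.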
291
[View Question](https://www.examtopics.com/discussions/databricks/view/52973-exam-az-500-topic-5-question-36-discussion/) You have an Azure subscription that contains an Azure SQL database named sql1. You plan to audit sql1. You need to configure the audit log destination. The solution must meet the following requirements: ✑ Support querying events by using the Kusto query language. ✑ Minimize administrative effort. What should you configure? A. an event hub B. a storage account C. a Log Analytics workspace
C. a Log Analytics workspace Log Analytics workspaces use the Kusto Query Language (KQL) for querying events, fulfilling the first requirement. They are also designed for ease of use and management, minimizing administrative effort, satisfying the second requirement. Why other options are incorrect: * **A. an event hub:** While event hubs can store audit logs, they don't natively support KQL querying. Additional tools and configurations would be required to query the data using KQL, increasing administrative effort. * **B. a storage account:** A storage account can store audit logs, but similar to event hubs, it doesn't natively support KQL querying. Extra steps would be needed to enable KQL-based querying, violating the "minimize administrative effort" requirement. Note: The discussion section overwhelmingly supports option C as the correct answer, with multiple users reporting its presence on the AZ-500 exam.
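Once the audit destination is a Log Analytics workspace, the records can be queried with KQL directly; a hedged sketch (table and column names follow the common AzureDiagnostics shape for SQL auditing and may vary by environment):

```kusto
// Hedged sketch: SQL audit records routed to a Log Analytics workspace
// typically appear in AzureDiagnostics and are queryable with KQL.
AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| where TimeGenerated > ago(7d)
| project TimeGenerated, server_principal_name_s, statement_s
```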
292
**** [View Question](https://www.examtopics.com/discussions/databricks/view/53332-exam-az-500-topic-3-question-6-discussion/) You need to prevent administrative users from accidentally deleting a virtual network named VNET1. The administrative users must be allowed to modify the settings of VNET1. To complete this task, sign in to the Azure portal. What is the best approach to achieve this? **
** The best approach is to apply a **Delete lock** to the VNET1 virtual network. **Explanation:** A Delete lock prevents deletion of the resource while still allowing modifications. This directly addresses the requirement of preventing accidental deletion while permitting administrative users to modify VNET1's settings. The discussion strongly supports this solution, with multiple users highlighting that a Delete lock specifically addresses preventing accidental deletion without restricting modification capabilities. **Why other options are incorrect:** * **Read-only lock:** A read-only lock would prevent both deletion *and* modification of VNET1, violating the requirement to allow modifications. * **IAM Role Assignment (e.g., Virtual Network Contributor):** While assigning a specific role like "Virtual Network Contributor" might seem like a solution, it doesn't inherently prevent accidental deletion. A user with this role could still inadvertently delete VNET1. It only controls permissions, not the ability to lock against accidental deletion. **Note:** There is some discussion on the most appropriate solution among the users providing insight. While a delete lock is the most directly applicable and widely supported answer based on the provided context, the discussion highlights that different approaches could be used depending on organizational policies and user requirements. The solution presented here best addresses the specific problem scenario.
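Equivalent to the portal steps, a hedged Azure PowerShell sketch (the lock and resource group names are assumptions):

```powershell
# Hedged sketch: a Delete (CanNotDelete) lock on VNET1 blocks deletion
# while still allowing modifications. "RG1" and the lock name are placeholders.
New-AzResourceLock -LockLevel CanNotDelete -LockName "LockVNET1" `
    -ResourceName "VNET1" -ResourceType "Microsoft.Network/virtualNetworks" `
    -ResourceGroupName "RG1"
```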
293
**** [View Question](https://www.examtopics.com/discussions/databricks/view/53511-exam-az-500-topic-3-question-19-discussion/) You need to prevent administrators from performing accidental changes to the Homepage app service plan. To complete this task, sign in to the Azure portal. What steps should you take? **
** To prevent accidental changes to the Homepage app service plan, you should configure a read-only lock on the app service plan. This prevents any modifications to the plan without first removing the lock. The steps are: 1. In the Azure portal, locate the Homepage app service plan (either by searching for "App Service Plans" and selecting it, or via the left navigation pane). 2. Navigate to the app service plan's properties and find the "Locks" section. 3. Click "Add" to add a new lock. 4. Enter a name for the lock (the name is inconsequential). 5. Select "Read-only" as the Lock type. 6. Click "OK" to save the changes. **Why other options are incorrect:** While a "Delete" lock exists, it only prevents the deletion of the resource, not modifications. A read-only lock is the correct choice as it completely blocks all modifications, achieving the desired outcome of preventing accidental changes. There is some disagreement in the discussion regarding the use of a "Delete" lock; however, the consensus supports the use of a "Read-only" lock as the more effective solution in this scenario.
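The same lock can be created with Azure PowerShell; a hedged sketch (the lock and resource group names are assumptions, and `Microsoft.Web/serverfarms` is the resource type for App Service plans):

```powershell
# Hedged sketch: a ReadOnly lock on the Homepage App Service plan blocks
# all modifications until the lock is removed. Names are placeholders.
New-AzResourceLock -LockLevel ReadOnly -LockName "LockHomepagePlan" `
    -ResourceName "Homepage" -ResourceType "Microsoft.Web/serverfarms" `
    -ResourceGroupName "RG1"
```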
294
[View Question](https://www.examtopics.com/discussions/databricks/view/56298-exam-az-500-topic-2-question-47-discussion/) You have an Azure subscription named Subscription1 that contains an Azure Active Directory (Azure AD) tenant named contoso.com and a resource group named RG1. You create a custom role named Role1 for contoso.com. Where can you use Role1 for permission delegation? A. contoso.com only B. contoso.com and RG1 only C. contoso.com and Subscription1 only D. contoso.com, RG1, and Subscription1
A. contoso.com only Explanation: The question describes creating a custom role (Role1) within Azure Active Directory (Azure AD) for the contoso.com tenant. Azure AD roles are distinct from Azure Resource Manager (ARM) roles (RBAC) used for managing subscriptions, resource groups, and resources within those subscriptions. A custom Azure AD role applies only to the Azure AD tenant it's created for, in this case, contoso.com. It does not grant permissions at the subscription or resource group level. The discussion clearly points out this distinction, with multiple users confirming that Azure AD roles and Azure RBAC roles are separate and do not interoperate in the manner suggested by options B, C, and D. Why other options are incorrect: * **B. contoso.com and RG1 only:** Incorrect because Azure AD roles don't grant permissions to Azure resources (like resource groups). * **C. contoso.com and Subscription1 only:** Incorrect for the same reason as B. * **D. contoso.com, RG1, and Subscription1:** Incorrect because Azure AD roles are limited to the Azure AD tenant; they don't extend to subscriptions or resource groups. Note: While there's a user (shnz03) in the discussion who claims both Azure AD and Azure resources use an RBAC model, the overwhelming consensus and linked documentation support the conclusion that custom Azure AD roles are separate from and do not affect permissions within Azure subscriptions or resource groups.
295
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56303-exam-az-500-topic-1-question-10-discussion/) You have been tasked with creating a different subscription for each of your company's divisions. However, the subscriptions will be linked to a single Azure Active Directory (Azure AD) tenant. You want to make sure that each subscription has identical role assignments. You make use of Azure AD Privileged Identity Management (PIM). Select `No adjustment required` if the underlined segment is accurate. If the underlined segment is inaccurate, select the accurate option. A. No adjustment required B. Azure Blueprints C. Conditional access policies D. Azure DevOps **
** B. Azure Blueprints Azure Blueprints is the correct answer because it allows you to define and deploy consistent role assignments across multiple subscriptions. This directly addresses the requirement of having identical role assignments in each subscription. PIM (Azure AD Privileged Identity Management) is used for managing just-in-time access and role activation, not for consistently replicating role assignments across multiple subscriptions. **Why other options are incorrect:** * **A. No adjustment required:** This is incorrect because PIM alone is insufficient to ensure identical role assignments across multiple subscriptions. Additional tooling is needed for this task. * **C. Conditional access policies:** Conditional access policies control access based on conditions like location and device, not the consistent application of role assignments across subscriptions. * **D. Azure DevOps:** Azure DevOps is a platform for software development and doesn't directly manage Azure resource access or role assignments across subscriptions. **Note:** The discussion shows unanimous agreement on the correct answer.
296
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56305-exam-az-500-topic-1-question-9-discussion/) Your company recently created an Azure subscription. You have, subsequently, been tasked with making sure that you are able to secure Azure AD roles by making use of Azure Active Directory (Azure AD) Privileged Identity Management (PIM). Which of the following actions should you take FIRST? A. You should sign up Azure Active Directory (Azure AD) Privileged Identity Management (PIM) for Azure AD roles. B. You should consent to Azure Active Directory (Azure AD) Privileged Identity Management (PIM). C. You should discover privileged roles. D. You should discover resources. **
** C. You should discover privileged roles. **Explanation:** The discussion highlights that the previous requirement to "consent to PIM" is outdated. Therefore, options A and B are incorrect. Before assigning or managing privileged roles using PIM, you must first identify which roles are considered privileged within your Azure AD environment. Discovering these roles is the logical first step in securing them with PIM. Option D, discovering resources, is important for managing Azure roles, but identifying the privileged roles themselves takes precedence. **Why other options are incorrect:** * **A. You should sign up Azure Active Directory (Azure AD) Privileged Identity Management (PIM) for Azure AD roles:** This is not the first step. The signup process happens automatically once a privileged user accesses PIM (given the appropriate license). * **B. You should consent to Azure Active Directory (Azure AD) Privileged Identity Management (PIM):** This is outdated and no longer a necessary step. * **D. You should discover resources:** While important for securing Azure resources, discovering the *privileged roles* themselves is the critical initial step for PIM implementation. **Note:** The discussion indicates that the original question is outdated and the suggested answer might be inaccurate based on current Azure AD PIM functionality. The correct approach now involves automatic PIM activation for privileged users with Premium P2 licenses. The best first step would then be to identify the users with privileged access. However, based solely on the provided context, option C remains the *most plausible* answer given the original question's context.
297
[View Question](https://www.examtopics.com/discussions/databricks/view/56307-exam-az-500-topic-1-question-12-discussion/) Your company has an Azure Container Registry. You have been tasked with assigning a user a role that allows for the downloading of images from the Azure Container Registry. The role assigned should not require more privileges than necessary. Which of the following is the role you should assign? A. Reader B. Contributor C. AcrDelete D. AcrPull
The correct answer is **D. AcrPull**. The AcrPull role provides only the necessary permission to download (pull) images from an Azure Container Registry. The question explicitly states that the assigned role should have *only* the necessary privileges; AcrPull fulfills this requirement. Other options are incorrect because they grant excessive permissions: * **A. Reader:** This role grants broader access than just pulling images. While it might *include* the ability to pull images, it provides other unnecessary permissions as well, violating the principle of least privilege. * **B. Contributor:** This role grants even more extensive permissions than the Reader role, including the ability to modify and delete resources within the registry, far exceeding the requirement. * **C. AcrDelete:** This role is specifically for deleting images, not pulling them. There is some discussion among users regarding the Reader role, with some believing it also includes the ability to pull images. However, the consensus and the most secure approach prioritize the AcrPull role because it explicitly grants only the necessary permission to download images. This minimizes the risk of unintended actions by the user.
298
[View Question](https://www.examtopics.com/discussions/databricks/view/56312-exam-az-500-topic-1-question-16-discussion/) You make use of Azure Resource Manager templates to deploy Azure virtual machines. You have been tasked with making sure that Windows features that are not in use, are automatically inactivated when instances of the virtual machines are provisioned. Which of the following actions should you take? A. You should make use of Azure DevOps. B. You should make use of Azure Automation State Configuration. C. You should make use of network security groups (NSG). D. You should make use of Azure Blueprints.
The correct answer is **B. You should make use of Azure Automation State Configuration.** Azure Automation State Configuration (DSC) allows you to define and manage the desired state of your VMs, including enabling or disabling Windows features. This ensures that unwanted features are automatically deactivated during and after provisioning. Why other options are incorrect: * **A. Azure DevOps:** Azure DevOps is a platform for developing and deploying software, not for managing the configuration state of VMs. * **C. Network Security Groups (NSG):** NSGs control network traffic to and from VMs, not the configuration of Windows features. * **D. Azure Blueprints:** Azure Blueprints provide a governance solution for defining and managing the deployment of Azure resources, but not the specific configuration of individual features within those resources. Note: While the consensus points to B as the correct answer, it's important to note that several comments highlight the upcoming retirement of Azure Automation State Configuration on September 30, 2027. The suggested replacement is Azure Machine Configuration. Therefore, while B is currently the technically correct answer based on the provided context, future exams may reflect the change.
299
[View Question](https://www.examtopics.com/discussions/databricks/view/56315-exam-az-500-topic-1-question-17-discussion/) Your company's Azure subscription includes Windows Server 2016 Azure virtual machines. You are informed that every virtual machine must have a custom antimalware virtual machine extension installed. You are writing the necessary code for a policy that will help you achieve this. Which of the following is an effect that must be included in your code? A. Disabled B. Modify C. AuditIfNotExists D. DeployIfNotExists
The correct answer is **D. DeployIfNotExists**. The `DeployIfNotExists` effect is the only option that directly addresses the requirement of installing the custom antimalware extension on the VMs. This effect ensures that the extension is deployed if it's not already present. The policy will automatically install the extension on any VMs lacking it, fulfilling the stated requirement. Why other options are incorrect: * **A. Disabled:** This effect would disable the specified resource; it doesn't install anything. It's counter to the problem's goal of *adding* an antimalware extension. * **B. Modify:** This effect modifies an existing resource; it doesn't create a new one if one doesn't exist. This wouldn't install the extension on VMs that lack it. * **C. AuditIfNotExists:** This effect only audits for compliance. It reports whether the extension is missing but doesn't install it, failing to meet the requirement of ensuring the extension's presence on all VMs. There is a consensus among the discussion participants that the correct answer is D. No conflicting opinions are present.
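A skeletal policy rule showing where the effect sits (the extension type name is a hypothetical placeholder, the role GUID is assumed to be Virtual Machine Contributor, and the deployment template body is left empty for brevity):

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "deployIfNotExists",
    "details": {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "existenceCondition": {
        "field": "Microsoft.Compute/virtualMachines/extensions/type",
        "equals": "CustomAntimalware"
      },
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
      ],
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {}
        }
      }
    }
  }
}
```

The `existenceCondition` checks whether the extension is already installed; when it is not, the nested deployment runs, which is why this effect (unlike `AuditIfNotExists`) needs a managed identity with the listed role to act.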
300
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56325-exam-az-500-topic-2-question-41-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table. [Table Image - Text not visible in provided URL] You create an Azure role by using the following JSON file. [JSON Image - Text not visible in provided URL] You assign Role1 to User1 for RG1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. User1 can create a new virtual machine in RG1. 2. User1 can modify the properties of storage1. 3. User1 can attach the network interface of VM1 to VNET1. **
** Yes, No, No **Explanation:** The provided discussion analyzes the permissions granted by the custom role "Role1" assigned to User1 within Resource Group RG1. The JSON defining Role1 (which is not visible in the provided text) grants the `Microsoft.Compute/virtualMachines/write` action. This action allows the creation and modification of virtual machines. Therefore, User1 *can* create a new virtual machine in RG1. The discussion correctly points out that modifying storage account properties (`Microsoft.Storage/storageAccounts/write`) and attaching network interfaces require different permissions not included in Role1. Therefore, User1 cannot perform those actions. **Why other options are incorrect:** The options that suggest User1 *can* modify storage1 or attach the network interface are incorrect because the provided JSON defining Role1 (unavailable in the provided text) lacks the necessary permissions for these actions. The discussion's analysis supports this conclusion by referencing the required permissions, which are not included in Role1. **Note:** The exact content of the JSON file defining Role1 and the table showing the Azure resources are not provided. The answer relies on the discussion's interpretation of the unspecified JSON and its analysis of required permissions for VM creation, storage modification, and network interface attachment. There is a possibility for inaccuracy if the discussion misinterprets the JSON file contents.
301
[View Question](https://www.examtopics.com/discussions/databricks/view/56331-exam-az-500-topic-1-question-27-discussion/) You want to gather logs from a large number of Windows Server 2016 computers using Azure Log Analytics. You are configuring an Azure Resource Manager template to deploy the Microsoft Monitoring Agent to all the servers automatically. Which of the following should be included in the template? (Choose all that apply.) A. WorkspaceID B. AzureADApplicationID C. WorkspaceKey D. StorageAccountKey
A and C (WorkspaceID and WorkspaceKey) are correct. The Microsoft Monitoring Agent requires the Workspace ID to identify the correct Log Analytics workspace and the Workspace Key for authentication to send log data. The AzureADApplicationID and StorageAccountKey are not needed for this specific scenario of deploying the MMA to send logs to Log Analytics. Other Options: * **B. AzureADApplicationID:** This is incorrect. While Azure AD Application ID is used for authentication in some Azure scenarios, it's not directly required for the MMA to send logs to Log Analytics. The Workspace Key handles the authentication in this case. * **D. StorageAccountKey:** This is incorrect. Storage Account Keys are used to access Azure Storage, which is unrelated to sending logs directly to a Log Analytics workspace via the MMA. Note: The provided discussion indicates that this question has appeared in different exams and the suggested answer (AC) is consistent across those instances.
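A hedged sketch of the extension resource inside such an ARM template, showing where the two values go (parameter names and the API version are assumptions): the workspace ID travels in `settings`, while the workspace key, being a secret, belongs in `protectedSettings`.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/MicrosoftMonitoringAgent')]",
  "apiVersion": "2021-03-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "[parameters('workspaceId')]"
    },
    "protectedSettings": {
      "workspaceKey": "[parameters('workspaceKey')]"
    }
  }
}
```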
302
[View Question](https://www.examtopics.com/discussions/databricks/view/56341-exam-az-500-topic-1-question-31-discussion/) You have a sneaking suspicion that there are users trying to sign in to resources which are inaccessible to them. You decide to create an Azure Log Analytics query to confirm your suspicions. The query will detect unsuccessful user sign-in attempts from the last few days. You want to make sure that the results only show users who had failed to sign-in more than five times. Which of the following should be included in your query? A. The EventID and CountIf() parameters. B. The ActivityID and CountIf() parameters. C. The EventID and Count() parameters. D. The ActivityID and Count() parameters.
C. The EventID and Count() parameters. To identify users with more than five failed sign-in attempts, the query needs to filter for failed login events and then count them per user. `EventID` is used to filter for specific events (in this case, failed login attempts, such as EventID 4625 as shown in the provided Kusto query example). `Count()` aggregates the number of events per user. Why other options are incorrect: * **A. The EventID and CountIf() parameters:** While `EventID` is correct for filtering failed logins, `CountIf()` is not suitable here. `CountIf()` counts events based on a condition, but we need a simple count of all failed login attempts per user, which is what `Count()` provides. * **B. The ActivityID and CountIf() parameters:** `ActivityID` is not directly relevant to detecting failed login attempts. The example uses `Account` to identify users. Again, `CountIf()` is inappropriate. * **D. The ActivityID and Count() parameters:** Similar to option B, `ActivityID` is irrelevant for this purpose. Note: There's some minor disagreement in the discussion regarding the necessity of including the `arg_max` function to find the latest failed login. The highly-voted response includes it, while another suggests it's unnecessary given the time filter already in place. The core answer remains the same: `EventID` and `Count()` are essential for fulfilling the question's requirements.
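A hedged KQL sketch along the lines of the example referenced in the discussion (the `SecurityEvent` table is assumed as the event source):

```kusto
// Hedged sketch: count failed Windows logons (EventID 4625) per account
// over the last three days and keep accounts with more than five failures.
SecurityEvent
| where TimeGenerated > ago(3d)
| where EventID == 4625
| summarize FailedCount = count() by Account
| where FailedCount > 5
```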
303
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56347-exam-az-500-topic-2-question-57-discussion/) HOTSPOT - You have a management group named Group1 that contains an Azure subscription named sub1. Sub1 has a subscription ID of 11111111-1234-1234-1234-1111111111. You need to create a custom Azure role-based access control (RBAC) role that will delegate permissions to manage the tags on all the objects in Group1. What should you include in the role definition of Role1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0015100001.png) **
** The correct answer requires selecting `Microsoft.Resources` as the Resource Provider and `/providers/Microsoft.Management/managementGroups/Group1` as the Assignable Scope. * **Resource Provider: Microsoft.Resources:** This is correct because the `Microsoft.Resources` resource provider is responsible for managing tags in Azure. Any role allowing tag management must include actions related to this provider. * **Assignable Scope: `/providers/Microsoft.Management/managementGroups/Group1`:** This specifies that the custom role is available for assignment at the management group level, specifically within `Group1`. While the provided note mentions that assigning a custom RBAC role at the management group level is in preview, the question explicitly asks about managing tags within Group1, necessitating this scope. The discussion shows disagreement on this point, with some suggesting subscription level as the assignable scope. However, given the question's requirement to manage tags across *all* objects within Group1, the management group level scope is the more appropriate and comprehensive solution. The provided suggested answer image shows these selections as correct, though the note in the suggested answer indicates that management group-level assignment is in preview. This doesn't negate the correctness of the answer within the context of the question, as it explicitly asks about Group1. **Why other options are incorrect (implicit):** The question doesn't explicitly list other options, but implicitly, any other resource provider or assignable scope would be incorrect because they wouldn't grant the necessary permissions to manage tags within the specified management group. For example, choosing a resource provider not related to resource management would prevent tag manipulation; similarly, choosing a narrower scope (like a specific resource group within Group1) wouldn't allow tag management across all objects in Group1.
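A hedged sketch of what the role definition JSON could look like under these selections (the action list is illustrative; `Microsoft.Resources/tags/*` covers the tag read/write/delete operations):

```json
{
  "Name": "Role1",
  "IsCustom": true,
  "Description": "Manage tags on all objects in Group1",
  "Actions": [
    "Microsoft.Resources/tags/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementGroups/Group1"
  ]
}
```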
304
[View Question](https://www.examtopics.com/discussions/databricks/view/56354-exam-az-500-topic-1-question-41-discussion/) You are in the process of configuring an Azure policy via the Azure portal. Your policy will include an effect that will need a managed identity for it to be assigned. Which of the following is the effect in question? A. AuditIfNotExist B. Disabled C. DeployIfNotExist D. EnforceOPAConstraint
C. DeployIfNotExist

The `DeployIfNotExist` effect in Azure Policy requires a managed identity to be assigned because it needs to deploy resources. The managed identity provides the necessary permissions to interact with Azure resources during deployment.

Other options are incorrect because:

* **A. AuditIfNotExist:** This effect only audits whether a resource exists; it doesn't require deployment or resource interaction.
* **B. Disabled:** This effect simply disables a policy; no action requiring a managed identity is taken.
* **D. EnforceOPAConstraint:** This effect enforces Open Policy Agent (OPA) constraints, but this doesn't inherently need a managed identity for assignment, though it might depending on the specific OPA policy.

Note: The discussion section overwhelmingly supports answer C. There's no expressed disagreement regarding the correct answer in the provided text.
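To see why a managed identity is needed, consider the shape of a `deployIfNotExists` policy rule. This is an abbreviated, hypothetical fragment (the condition, resource type, and deployment template are placeholders): the `roleDefinitionIds` array lists the roles Azure grants to the policy assignment's managed identity so the remediation deployment can run:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Storage/storageAccounts"
  },
  "then": {
    "effect": "deployIfNotExists",
    "details": {
      "type": "Microsoft.Insights/diagnosticSettings",
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
      ],
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {}
        }
      }
    }
  }
}
```

An `auditIfNotExists` rule has a similar `details` block but no `deployment` section, which is why it needs no identity.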
305
[View Question](https://www.examtopics.com/discussions/databricks/view/56380-exam-az-500-topic-1-question-42-discussion/) You have been tasked with creating an Azure key vault using PowerShell. You have been informed that objects deleted from the key vault must be kept for a set period of 90 days. Which two of the following parameters must be used in conjunction to meet the requirement? (Choose two.) A. EnabledForDeployment B. EnablePurgeProtection C. EnabledForTemplateDeployment D. EnableSoftDelete
B and D: `EnablePurgeProtection` and `EnableSoftDelete`.

To maintain deleted objects in an Azure Key Vault for 90 days, both `EnableSoftDelete` and `EnablePurgeProtection` are necessary. `EnableSoftDelete` enables the soft-delete functionality, allowing recovery of deleted objects within the retention period. `EnablePurgeProtection` prevents the permanent deletion of these soft-deleted objects until the retention period expires.

Why other options are incorrect:

* **A. EnabledForDeployment:** This parameter is unrelated to the retention of deleted objects. It controls whether the key vault can be used for deployments.
* **C. EnabledForTemplateDeployment:** Similar to A, this parameter is unrelated to object retention and is focused on deployment scenarios.

Note: Discussion comments indicate that `EnableSoftDelete` might be enabled by default. This does not negate the necessity of `EnablePurgeProtection` to prevent permanent deletion before the retention period ends. The answer reflects the parameters required to fulfill the 90-day retention explicitly.
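A minimal PowerShell sketch, assuming a hypothetical vault name and resource group. Note that in newer Az.KeyVault module versions soft delete is always on and the `-EnableSoftDelete` switch has been removed, so this reflects the older module the question assumes:

```powershell
# 'ContosoVault1' and 'RG1' are placeholder names. Soft delete keeps deleted
# objects for the retention period (90 days by default), and purge protection
# blocks permanent deletion until that period has elapsed.
New-AzKeyVault -Name 'ContosoVault1' `
  -ResourceGroupName 'RG1' `
  -Location 'EastUS' `
  -EnableSoftDelete `
  -EnablePurgeProtection
```

Without purge protection, a user with purge permission could permanently delete a soft-deleted object before the 90 days are up.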
306
[View Question](https://www.examtopics.com/discussions/databricks/view/56387-exam-az-500-topic-1-question-19-discussion/) You have been tasked with enabling Advanced Threat Protection for an Azure SQL Database server. Advanced Threat Protection must be configured to identify all types of threat detection. Which of the following will happen when a faulty SQL statement is generated in the database by an application? A. A Potential SQL injection alert is triggered. B. A Vulnerability to SQL injection alert is triggered. C. An Access from a potentially harmful application alert is triggered. D. A Brute force SQL credentials alert is triggered.
B. A Vulnerability to SQL injection alert is triggered.

A faulty SQL statement generated by an application is a strong indicator of a potential SQL injection vulnerability. Advanced Threat Protection in Azure SQL Database would flag this as a vulnerability, not just a potential attack. The discussion highlights that a faulty SQL statement might be due to a coding defect or lack of input sanitization, both of which represent vulnerabilities exploitable for SQL injection.

Why other options are incorrect:

* **A. A Potential SQL injection alert is triggered:** While a faulty SQL statement *could* be exploited for an SQL injection attack, the alert wouldn't necessarily classify it as a "potential" attack. The alert would be for the *vulnerability* itself, which is the underlying issue.
* **C. An Access from a potentially harmful application alert is triggered:** This is less specific and doesn't directly address the core issue of the faulty SQL statement representing a vulnerability.
* **D. A Brute force SQL credentials alert is triggered:** This is unrelated to a faulty SQL statement; it involves repeated failed login attempts.

Note: There is some disagreement in the provided discussion regarding the correct answer (A vs. B). The answer above reflects the reasoning presented by the highly upvoted responses, which focus on the underlying vulnerability rather than a potential attack.
307
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56392-exam-az-500-topic-4-question-26-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains the users shown in the following table.

| User Name | Role         |
|-----------|--------------|
| Admin1    | Global Admin |
| Admin2    | User Admin   |
| Admin3    | User Admin   |
| User1     | User         |
| User2     | User         |

Contoso.com contains a group naming policy. The policy has a custom blocked word list rule that includes the word Contoso. Which users can create a group named Contoso Sales in contoso.com? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** Admin1 and Admin3 can create the group named "Contoso Sales".

**Explanation:** The provided documentation states that Global Administrators and User Administrators are exempt from group naming policies. Admin1 is a Global Administrator, and Admin3 is a User Administrator; therefore, they are not subject to the naming restrictions imposed by the policy that blocks the word "Contoso". Admin2 is also a User Admin, but this is not reflected in the suggested answer. The discussion shows some disagreement on whether the policy applies only to O365, but the prevailing and supported answer aligns with the Microsoft documentation cited, which clarifies that the exemption applies across all group workloads and endpoints.

**Why other options are incorrect:**

* **Admin2:** While a User Administrator, the suggested answer only indicates Admin1 and Admin3 as able to create the group. The discussion does not offer a conclusive resolution to this discrepancy.
* **User1 and User2:** Regular users are subject to the group naming policies and cannot create groups with blocked words like "Contoso".

**Note:** There is some disagreement in the discussion regarding the scope of the group naming policy (whether it applies only to O365). However, the accepted answer and supporting documentation indicate that the exemption for Global and User Administrators applies across all group workloads.
308
[View Question](https://www.examtopics.com/discussions/databricks/view/56419-exam-az-500-topic-4-question-74-discussion/) HOTSPOT - You are configuring just in time (JIT) VM access to a Windows Server 2019 Azure virtual machine. You need to grant users PowerShell access to the virtual machine by using JIT VM access. What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0042700001.jpg) *(Image not provided, but assumed to be a multiple choice selection screen)*
The provided text does not show the options presented in the "Hot Area" image, so a precise answer cannot be given. However, based on the discussion, the correct configuration involves selecting the appropriate port (likely port 5986, used for PowerShell remoting over HTTPS) and an access level that grants at least "Read" permissions. The user "jpons" notes that "Port is ok, but access is Read", implying that both the port selection and the permission level are crucial for the correct configuration. The discussion adds that further permissions may be necessary, but "Read" is the stated minimum.

Why other options are incorrect: Without the options from the "Hot Area" image, it's impossible to say definitively. However, any option that omitted the PowerShell remoting port (likely 5986) or that granted less than read access would be incorrect, and options granting excessive permissions may also be incorrect depending on the context of the exam.

Note: This answer is inferred from limited information. The actual question and answer choices are not fully shown, which limits the ability to give a definitive and complete answer. The discussion also indicates some possible disagreement or nuances regarding the exact permissions required.
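For context, a JIT VM access policy (a `jitNetworkAccessPolicies` resource under the `Microsoft.Security` provider) lists which ports may be opened on request and for how long. The following is an abbreviated sketch of that shape as best recalled — field names and values are assumptions for illustration, not the exam's answer options:

```json
{
  "virtualMachines": [
    {
      "id": "/subscriptions/{subscriptionId}/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1",
      "ports": [
        {
          "number": 5986,
          "protocol": "TCP",
          "allowedSourceAddressPrefix": "*",
          "maxRequestAccessDuration": "PT3H"
        }
      ]
    }
  ]
}
```

Port 5986 is the conventional port for PowerShell remoting (WinRM over HTTPS), which is why the discussion centers on it.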
309
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56524-exam-az-500-topic-4-question-61-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table.

| Resource | Resource Group | Location |
|----------|----------------|----------|
| NVA1     | RG1            | East US  |
| Linux VM | RG1            | East US  |

You have an Azure subscription named Subscription2 that contains the following resources: * An Azure Sentinel workspace * An Azure Event Grid instance You need to ingest the CEF messages from the NVA1 to Azure Sentinel. What should you configure for each subscription? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** Subscription1: An Azure Log Analytics agent on the Linux virtual machine.
Subscription2: A new Azure Sentinel Common Event Format (CEF) data connector.

**Explanation:** To ingest CEF messages from NVA1 (a network virtual appliance) into Azure Sentinel, a two-step process is required:

1. **Forwarding:** The CEF messages need to be collected from NVA1. A Linux VM in Subscription1, equipped with a Log Analytics agent, acts as a forwarder that collects the CEF logs from NVA1. The discussion indicates that while using a Linux VM as a forwarder is a legacy approach, it's still a valid method here. The newer recommended approach, using Azure Arc, is not explicitly stated as a requirement in the question.
2. **Ingestion into Sentinel:** The collected CEF logs are ingested into the Azure Sentinel workspace in Subscription2 by configuring a new Azure Sentinel data connector specifically designed for Common Event Format (CEF).

**Why other options are incorrect (based on the provided context):** The question specifically asks about ingesting CEF messages. While other data connectors might exist, only the CEF connector directly addresses the problem. The discussion highlights the use of the Log Analytics agent for forwarding; Azure Arc is mentioned as a better modern approach, but it's not part of the presented problem.

**Note:** The discussion reflects some disagreement on best practices for this scenario, with some favoring the newer Azure Arc approach and others the Linux VM with Log Analytics agent. The answer above reflects a valid solution based on the information and options presented in the original question.
310
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56555-exam-az-500-topic-3-question-34-discussion/) HOTSPOT - You have an Azure key vault. You need to delegate administrative access to the key vault to meet the following requirements: ✑ Provide a user named User1 with the ability to set advanced access policies for the key vault. ✑ Provide a user named User2 with the ability to add and delete certificates in the key vault. ✑ Use the principle of least privilege. What should you use to assign access to each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0027300004.png) *(This image shows a hotspot area for selecting answers for User1 and User2)* **
** User1: **RBAC**
User2: **Key Vault Access Policy**

**Explanation:** The suggested answer reflects the principle of least privilege and the historical context of Azure Key Vault access management. While RBAC *can* now manage both the management and data planes of Key Vault, at the time the original question was created (and potentially still reflected in some exam versions), the recommended approach was to use RBAC for management plane operations (like setting access policies, which User1 needs) and Key Vault access policies for data plane operations (managing certificates, which User2 needs). This granular approach is crucial for minimizing the scope of permissions granted to each user.

The discussion shows significant disagreement and evolution in best practices. Many commenters correctly note that using RBAC for both planes is now a more modern and often preferred approach, offering greater flexibility and potentially improved security management. However, the original question and suggested answer are based on a previous understanding of Azure Key Vault access control, and there's no definitive evidence the question has been updated in all exam versions. Therefore, the answer provided is based on the most likely response expected within the context of the original question and its associated images.

**Why other options are incorrect (based on the original context):** Using only RBAC for both users would grant User2 more permissions than necessary (violating the principle of least privilege). Conversely, using only Key Vault access policies for both would lack the flexibility to manage the high-level access policies required by User1. The original context of the question suggests a transition period where the accepted best practice was a mixed approach; the later arguments for using RBAC entirely are not reflected in the original question and suggested answer.
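A hedged PowerShell sketch of the split, with hypothetical vault, user, and subscription identifiers (the chosen role name is also an illustrative assumption — any management-plane role covering access policies would fit the pattern):

```powershell
# User1: management-plane rights via RBAC (role name illustrative)
New-AzRoleAssignment -SignInName 'user1@contoso.com' `
  -RoleDefinitionName 'Key Vault Contributor' `
  -Scope '/subscriptions/{subscriptionId}/resourceGroups/RG1/providers/Microsoft.KeyVault/vaults/Vault1'

# User2: data-plane rights via an access policy scoped to certificates only,
# per the principle of least privilege
Set-AzKeyVaultAccessPolicy -VaultName 'Vault1' `
  -UserPrincipalName 'user2@contoso.com' `
  -PermissionsToCertificates create,delete
```

The `-PermissionsToCertificates` list grants only add/delete rights on certificates, matching User2's requirement without exposing keys or secrets.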
311
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56556-exam-az-500-topic-3-question-36-discussion/) You have an Azure Container Registry named Registry1. From Azure Security Center, you enable Azure Container Registry vulnerability scanning of the images in Registry1. You perform the following actions: ✑ Push a Windows image named Image1 to Registry1. ✑ Push a Linux image named Image2 to Registry1. ✑ Push a Windows image named Image3 to Registry1. ✑ Modify Image1 and push the new image as Image4 to Registry1. Modify Image2 and push the new image as Image5 to Registry1. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0027500005.png) Which two images will be scanned for vulnerabilities? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Image4 B. Image2 C. Image1 D. Image3 E. Image5 **
** B and E (Image2 and Image5)

**Explanation:** The question asks which images will be scanned *after* the vulnerability scanning is enabled. Image2 (Linux) and Image5 (modified Linux - Image2) will be scanned because vulnerability scanning is enabled *after* they are pushed. Image1, Image3, and Image4 (all Windows) are subject to debate due to conflicting information regarding Windows image support within the discussion. While some users claim the answer is B and E because Windows images were not initially supported, other comments and provided links suggest that support for Windows images has since been added.

**Why other options are incorrect:**

* **A (Image4):** This is a modified version of Image1 (Windows), and the timing of the vulnerability scan relative to the push of Image4 is unclear based on the question and the conflicting information about the scan's Windows image support.
* **C (Image1):** This image was pushed before vulnerability scanning was enabled.
* **D (Image3):** This image was pushed before vulnerability scanning was enabled.

**Note:** There is a significant disagreement within the discussion regarding whether Azure Container Registry vulnerability scanning supports Windows images. The answer provided reflects the initial understanding in the provided context, where the suggested answer is B and E and several comments support this, but acknowledges the conflicting information in subsequent comments suggesting that Windows image support has since been added. A definitive answer requires clarification on the specific version and configuration of Azure Container Registry and Azure Security Center used for the scan.
312
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56557-exam-az-500-topic-3-question-43-discussion/) You have an Azure subscription that contains an Azure Container Registry named Registry1. Microsoft Defender for Cloud is enabled in the subscription. You upload several container images to Registry1. You discover that vulnerability security scans were not performed. You need to ensure that the container images are scanned for vulnerabilities when they are uploaded to Registry1. What should you do? A. From the Azure portal, modify the Pricing tier settings. B. From Azure CLI, lock the container images. C. Upload the container images by using AzCopy. D. Push the container images to Registry1 by using Docker. **
** A. From the Azure portal, modify the Pricing tier settings.

Vulnerability scanning in Azure Container Registry is tied to the pricing tier. The basic tier does not include vulnerability scanning; higher tiers (like Standard) do. To enable vulnerability scanning, you must change the pricing tier to one that supports it.

**Why other options are incorrect:**

* **B. From Azure CLI, lock the container images:** Locking container images does not initiate vulnerability scans. Locking controls access and prevents unauthorized modifications.
* **C. Upload the container images by using AzCopy:** AzCopy is a tool for transferring data; it doesn't inherently perform vulnerability scanning. The method of uploading the image does not change the scanning behavior.
* **D. Push the container images to Registry1 by using Docker:** Using Docker to push images is the standard method, but it doesn't automatically trigger vulnerability scanning if the registry's pricing tier doesn't support it.

**Note:** The discussion suggests that the original question might have been incomplete, implying that the subscription uses the Standard tier. However, based solely on the provided text, the answer remains A. The incompleteness of the original question might affect the accuracy of the assessment.
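The same tier change can be made from Azure CLI. This is a hedged sketch: Defender plan names have changed over time (the per-registry plan has since been folded into Defender for Containers), so treat the `ContainerRegistry` plan name as an assumption reflecting the era of the question:

```shell
# Enable the standard (Defender) pricing tier for container registry scanning;
# the plan name 'ContainerRegistry' reflects the older per-service plan.
az security pricing create --name ContainerRegistry --tier 'standard'

# Verify the currently configured tier
az security pricing show --name ContainerRegistry
```

In the portal, the equivalent is the "Environment settings" / pricing page of Microsoft Defender for Cloud rather than any setting on the registry itself.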
313
[View Question](https://www.examtopics.com/discussions/databricks/view/56746-exam-az-500-topic-4-question-60-discussion/) You have an Azure subscription that contains a resource group named RG1 and a security group named ServerAdmins. RG1 contains 10 virtual machines, a virtual network named VNET1, and a network security group (NSG) named NSG1. ServerAdmins can access the virtual machines by using RDP. You need to ensure that NSG1 only allows RDP connections to the virtual machines for a maximum of 60 minutes when a member of ServerAdmins requests access. What should you configure? A. an Azure policy assigned to RG1 B. a just in time (JIT) VM access policy in Microsoft Defender for Cloud C. an Azure Active Directory (Azure AD) Privileged Identity Management (PIM) role assignment D. an Azure Bastion host on VNET1
B. a just in time (JIT) VM access policy in Microsoft Defender for Cloud

The correct answer is B because Just-in-Time (JIT) VM access is designed to provide temporary, controlled access to virtual machines. It allows you to restrict inbound traffic by default and only open necessary ports (like RDP) for a specified duration (e.g., 60 minutes) when a user requests access. This directly addresses the requirement of limiting RDP access to a 60-minute window upon request by members of the ServerAdmins group.

Why other options are incorrect:

* **A. an Azure policy assigned to RG1:** Azure Policy is used for governance and compliance, managing resource configurations. While it can indirectly influence security, it doesn't directly provide the time-limited access control required by the question.
* **C. an Azure Active Directory (Azure AD) Privileged Identity Management (PIM) role assignment:** Azure AD PIM manages access to roles and permissions within Azure AD itself, not direct network access to VMs. It doesn't control network traffic.
* **D. an Azure Bastion host on VNET1:** Azure Bastion provides secure RDP access through a managed service, but it doesn't inherently enforce time-limited access.

The discussion shows unanimous agreement on answer B.
314
[View Question](https://www.examtopics.com/discussions/databricks/view/56772-exam-az-500-topic-1-question-7-discussion/) You have been tasked with applying conditional access policies for your company's current Azure Active Directory (Azure AD). The process involves assessing the risk events and risk levels. Which of the following is the risk level that should be configured for sign-ins that originate from IP addresses with dubious activity? A. None B. Low C. Medium D. High
C. Medium

The most recent and widely supported answer in the discussion is C, Medium. While some older discussions suggest a risk level of "Low," updates to Azure AD Identity Protection have changed the classification to "Medium" for sign-ins originating from IP addresses with dubious activity. This is supported by multiple users and references to updated Microsoft documentation.

Other options:

* **A. None:** This is incorrect as Azure AD Identity Protection actively assesses risk levels for sign-ins. Ignoring risk from suspicious IP addresses would be a security vulnerability.
* **B. Low:** Although some older sources claim "Low" was the correct answer, the consensus in the discussion is that this has been superseded by updates in Azure AD Identity Protection, making it outdated.
* **D. High:** This is too high a risk level for simply a sign-in from a suspicious IP address. While such sign-ins warrant caution, a "High" risk designation is usually reserved for more serious indicators of compromise.

Note: There is disagreement in the discussion regarding the correct risk level due to updates to Azure AD over time. The answer provided reflects the most current and widely accepted information.
315
[View Question](https://www.examtopics.com/discussions/databricks/view/56773-exam-az-500-topic-1-question-3-discussion/) Your company has an Active Directory forest with a single domain, named weylandindustries.com. They also have an Azure Active Directory (Azure AD) tenant with the same name. You have been tasked with integrating Active Directory and the Azure AD tenant. You intend to deploy Azure AD Connect. Your strategy for the integration must make sure that password policies and user logon limitations affect user accounts that are synced to the Azure AD tenant, and that the amount of necessary servers are reduced. Solution: You recommend the use of federation with Active Directory Federation Services (AD FS). Does the solution meet the goal? A. Yes B. No
B. No

The solution of using federation with Active Directory Federation Services (AD FS) does not meet the goal. While federation does ensure that password policies and user logon limitations from Active Directory are applied to synced Azure AD accounts, it significantly increases the number of servers required: AD FS needs at least two servers for high availability, along with additional web application proxies, contradicting the requirement to reduce the number of necessary servers. The discussion highlights this point as the primary reason why federation is not the optimal solution in this scenario. Several users in the discussion agree that a different approach, such as Pass-through Authentication (PTA), would better fulfill both requirements.

There is some disagreement within the discussion regarding the best solution to meet *both* requirements. While federation does achieve the password policy synchronization, its increased server requirements outweigh that benefit. The consensus leans toward Pass-through Authentication as a better fit for the stated goals.
316
[View Question](https://www.examtopics.com/discussions/databricks/view/56777-exam-az-500-topic-1-question-11-discussion/) Your company has an Azure Container Registry. You have been tasked with assigning a user a role that allows for the uploading of images to the Azure Container Registry. The role assigned should not require more privileges than necessary. Which of the following is the role you should assign? A. Owner B. Contributor C. AcrPush D. AcrPull
C. AcrPush

The AcrPush role grants the minimum necessary permissions to upload (push) images to an Azure Container Registry. This adheres to the principle of least privilege, ensuring the user only has the permissions required for their task and no more.

Why other options are incorrect:

* **A. Owner:** This role provides excessive permissions, granting full control over the container registry, far beyond the simple need to upload images.
* **B. Contributor:** Similar to the Owner role, the Contributor role grants too many privileges, exceeding the minimum necessary for uploading images.
* **D. AcrPull:** This role only allows downloading (pulling) images, not uploading them, making it unsuitable for the task.

Note: The discussion unanimously agrees that AcrPush is the correct answer, citing its specific functionality and alignment with the principle of least privilege.
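A minimal Azure CLI sketch of the assignment, assuming a hypothetical registry named `registry1` and user `user1@contoso.com`:

```shell
# Look up the registry's resource ID to use as the assignment scope
ACR_ID=$(az acr show --name registry1 --query id --output tsv)

# Grant push (upload) rights only, per the principle of least privilege
az role assignment create \
  --assignee user1@contoso.com \
  --role AcrPush \
  --scope "$ACR_ID"
```

Scoping the assignment to the registry's own resource ID (rather than the resource group or subscription) keeps the grant as narrow as possible.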
317
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56779-exam-az-500-topic-1-question-21-discussion/) Your company has an Azure subscription that includes two virtual machines, named VirMac1 and VirMac2, which both have a status of Stopped (Deallocated). The virtual machines belong to different resource groups, named ResGroup1 and ResGroup2. You have also created two Azure policies that are both configured with the virtualMachines resource type. The policy configured for ResGroup1 has a policy definition of Not allowed resource types, while the policy configured for ResGroup2 has a policy definition of Allowed resource types. You then create a Read-only resource lock on VirMac1, as well as a Read-only resource lock on ResGroup2. Which of the following is TRUE with regards to the scenario? (Choose all that apply.) A. You will be able to start VirMac1. B. You will NOT be able to start VirMac1. C. You will be able to create a virtual machine in ResGroup2. D. You will NOT be able to create a virtual machine in ResGroup2. **
** B and D

A read-only lock on ResGroup2 prevents the creation of any new resources within that resource group, including virtual machines. Therefore, you will NOT be able to create a virtual machine in ResGroup2 (option D). Similarly, a read-only lock on VirMac1 prevents any modifications to the virtual machine, including starting it. Therefore, you will NOT be able to start VirMac1 (option B).

**Why other options are incorrect:**

* **A:** Incorrect. A read-only lock on VirMac1 prevents starting the virtual machine.
* **C:** Incorrect. A read-only lock on ResGroup2 prevents creating any new resources within it.

**Note:** There is some disagreement in the discussion regarding the impact of the resource group lock versus the VM lock. Some users focus solely on the read-only lock on the VM, while others correctly identify that the read-only lock on the resource group is the determining factor for creating new VMs. The provided answer reflects the correct understanding that the resource group lock supersedes the VM lock in this scenario.
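A hedged Azure CLI sketch of the two locks in the scenario (lock names are hypothetical). The reason a ReadOnly lock blocks starting a VM is that start/deallocate are POST operations, which ReadOnly locks reject:

```shell
# Read-only lock on VirMac1: start/deallocate are POST actions, so they are blocked
az lock create --name LockVirMac1 --lock-type ReadOnly \
  --resource-group ResGroup1 --resource-name VirMac1 \
  --resource-type Microsoft.Compute/virtualMachines

# Read-only lock on ResGroup2: blocks creating any new resources in the group
az lock create --name LockResGroup2 --lock-type ReadOnly \
  --resource-group ResGroup2
```

A `CanNotDelete` lock, by contrast, would have allowed starting the VM and creating resources while only blocking deletions.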
318
[View Question](https://www.examtopics.com/discussions/databricks/view/56785-exam-az-500-topic-1-question-40-discussion/) You need to consider the underlined segment to establish whether it is accurate. Your Azure Active Directory Azure (Azure AD) tenant has an Azure subscription linked to it. Your developer has created a mobile application that obtains Azure AD access tokens using the OAuth 2 implicit grant type. The mobile application must be registered in Azure AD. You require a redirect URI from the developer for registration purposes. Select `No adjustment required` if the underlined segment is accurate. If the underlined segment is inaccurate, select the accurate option. A. No adjustment required B. a secret C. a login hint D. a client ID
A. No adjustment required

Explanation: The statement "You require a redirect URI from the developer for registration purposes" is accurate. A redirect URI is necessary when registering a mobile application in Azure AD that uses the OAuth 2 implicit grant flow. This URI specifies where Azure AD should redirect the user after successful authentication. While a client ID is also required for registration, the question specifically asks about what is required *from the developer*, and the client ID is generated by Azure AD during the registration process, not provided by the developer.

Why other options are incorrect:

* **B. a secret:** While secrets are used in other OAuth flows, they are not directly required from the developer for registering a mobile application using the implicit grant type.
* **C. a login hint:** A login hint helps pre-fill the username field during sign-in but is not essential for registration itself.
* **D. a client ID:** A client ID is required for the application to function, but it's generated during the registration process by Azure AD, not provided by the developer.

Note: There is some disagreement in the discussion. Some users believe that a redirect URI is only recommended, not required, referencing Microsoft documentation. However, the prevailing opinion and the suggested answer support the accuracy of the underlined statement requiring a redirect URI from the developer.
319
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56801-exam-az-500-topic-8-question-2-discussion/) HOTSPOT - You need to create Role1 to meet the platform protection requirements. How should you complete the role definition of Role1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0045500001.jpg) *(Image not displayed here, but contains fields for Resource Provider, Actions, and Scope)* Scenario: A new custom RBAC role named Role1 must be used to delegate the administration of the managed disks in RG1. Role1 must be available only for RG1. **
** The correct selections for Role1 are:

* **Resource Provider:** `Microsoft.Compute`
* **Actions:** `*/disks/*`
* **Scope:** `/subscriptions/{subscriptionId}/resourceGroups/{Resource Group Id}`

**Explanation:** The scenario requires creating a role to manage *managed* disks within a specific resource group (RG1). Managed disks are a feature of Azure Compute, not Azure Storage. Therefore, the resource provider must be `Microsoft.Compute`. The `*/disks/*` action covers all operations on disks. Finally, the scope limits the role's applicability to the specified resource group, ensuring that Role1 only applies to RG1's managed disks. The use of placeholders `{subscriptionId}` and `{Resource Group Id}` is correct for dynamically assigning the role's scope.

**Why other options are incorrect:**

* **Microsoft.Storage:** This is incorrect because managed disks are managed by Azure Compute, not Azure Storage. Azure Storage is for *unmanaged* disks. The discussion confirms this point.
* Any incomplete or incorrect actions selection would not provide the required access to manage disks.
* A scope that is too broad (e.g., subscription level) would violate the requirement that Role1 only be available for RG1. A missing or incorrect scope would fail to restrict access appropriately.

**Note:** The discussion shows some disagreement regarding the specific details of the answer. However, the consensus and the clarifying statement from heatfan900 strongly support the above solution as being accurate regarding managed disks in Azure.
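Putting the three selections together, the full role definition would take roughly this JSON shape (the fully qualified action string `Microsoft.Compute/disks/*` corresponds to the `*/disks/*` selection under the `Microsoft.Compute` provider; the placeholders are kept as in the question):

```json
{
  "Name": "Role1",
  "IsCustom": true,
  "Description": "Delegate administration of the managed disks in RG1",
  "Actions": [
    "Microsoft.Compute/disks/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}/resourceGroups/{Resource Group Id}"
  ]
}
```

Restricting `AssignableScopes` to the resource group is what enforces the "available only for RG1" requirement.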
320
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56811-exam-az-500-topic-8-question-3-discussion/) DRAG DROP - You need to configure SQLDB1 to meet the data and application requirements. Which three actions should you recommend be performed in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0045700001.png) **
** The correct sequence of actions, based on the provided images and discussion, is debated. However, a strong case can be made for the following order: 1. **From the Azure portal, create an Azure AD administrator for LitwareSQLServer1.** This establishes an initial administrative connection point. 2. **Connect to SQLDB1 by using Microsoft SQL Server Management Studio (SSMS).** This is necessary to perform database-level administration tasks. 3. **In SQLDB1, create contained database users.** This step requires a connection to the database, making it dependent on step 2. **Why other options are incorrect or debated:** The discussion reveals disagreement about the exact sequence, particularly regarding whether connecting to SQLDB1 via SSMS (step 2) needs to happen before or after creating contained database users (step 3). Some argue that creating contained database users requires a prior connection, while others suggest it's possible without a direct SSMS connection. The discussion offers no definitive consensus on the correct order. The answer provided prioritizes the generally accepted need for a connection to manage the database before performing database-specific tasks.
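For context on the final step, contained database users mapped to Azure AD identities are typically created with T-SQL along these lines (run while connected to SQLDB1 as the Azure AD administrator; the user principal name and role grant below are placeholders):

```sql
-- Creates a contained database user backed by an Azure AD identity.
CREATE USER [user1@litware.com] FROM EXTERNAL PROVIDER;
-- Example: grant read access via a built-in database role.
ALTER ROLE db_datareader ADD MEMBER [user1@litware.com];
```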
321
**** [View Question](https://www.examtopics.com/discussions/databricks/view/56859-exam-az-500-topic-5-question-44-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0052100001.png) You enable Azure Defender for Storage. Which storage services of storage5 are monitored by Azure Defender for Storage, and which storage accounts are protected by Azure Defender for Storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The correct answer is that Azure Defender for Storage monitors File service and Data Lake Storage in storage5, and protects storage1, storage2, and storage5. **Explanation:** Based on the provided documentation and discussion, Azure Defender for Storage supports Blob Storage (including Data Lake Storage Gen2) and Azure Files, which means general-purpose v2 accounts are covered as well. * **Storage Services Monitored (storage5):** storage5 is a General Purpose v2 account. While the question asks which *services* are monitored within storage5, the General Purpose v2 account type encompasses both File and Data Lake Storage services. Therefore, both services under storage5 would be monitored. * **Storage Accounts Protected:** Storage accounts 1 (Blob Storage), 2 (Block Blob Storage), and 5 (General Purpose v2) are supported account types for Azure Defender for Storage. Storage accounts 3 and 4 are not supported. **Why other options are incorrect:** The discussion shows some disagreement about the specific storage accounts protected. Some users incorrectly included storage4, which is not a supported account type. Other incorrect options would exclude any of storage1, storage2, and storage5, or incorrectly include storage3 or storage4. The discrepancy reflects differing interpretations of the provided information regarding which account types Azure Defender for Storage supports. The answer provided above aligns with the most widely upvoted and seemingly accurate interpretation.
322
[View Question](https://www.examtopics.com/discussions/databricks/view/56961-exam-az-500-topic-1-question-24-discussion/) You have an Azure virtual machine that runs Windows Server R2. You plan to deploy and configure an Azure Key vault, and enable Azure Disk Encryption for the virtual machine. Which of the following is TRUE with regards to Azure Disk Encryption for a Windows VM? A. It is supported for basic tier VMs. B. It is supported for standard tier VMs. C. It is supported for VMs configured with software-based RAID systems. D. It is supported for VMs configured with Storage Spaces Direct (S2D).
B. It is supported for standard tier VMs. Azure Disk Encryption is supported for standard tier VMs running Windows Server. The discussion explicitly states that it is *not* supported for basic tier VMs, VMs with software-based RAID, or those using Storage Spaces Direct (S2D). Therefore, only option B is correct. WHY OTHER OPTIONS ARE INCORRECT: * **A. It is supported for basic tier VMs:** Incorrect. The discussion explicitly mentions that Azure Disk Encryption does *not* support basic tier VMs. * **C. It is supported for VMs configured with software-based RAID systems:** Incorrect. The discussion explicitly states that Azure Disk Encryption does *not* support VMs configured with software-based RAID systems. * **D. It is supported for VMs configured with Storage Spaces Direct (S2D):** Incorrect. The discussion explicitly states that Azure Disk Encryption does *not* support VMs configured with Storage Spaces Direct (S2D). Note: The discussion shows a consensus that option B is the correct answer.
323
[View Question](https://www.examtopics.com/discussions/databricks/view/57088-exam-az-500-topic-8-question-1-discussion/) You need to configure WebApp1 to meet the data and application requirements. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Upload a public certificate. B. Turn on the HTTPS Only protocol setting. C. Set the Minimum TLS Version protocol setting to 1.2. D. Change the pricing tier of the App Service plan. E. Turn on the Incoming client certificates protocol setting.
A and E To enforce mutual authentication for WebApp1, you need to: A. **Upload a public certificate:** Mutual authentication requires the server (WebApp1) to present a certificate to the client. Uploading a public certificate fulfills the server-side requirement. E. **Turn on the Incoming client certificates protocol setting:** This allows WebApp1 to request and validate client certificates, which is essential for the client-side of mutual authentication. The setting should be configured to "Require" client certificates for true mutual authentication, as noted in the discussion. Why other options are incorrect: * **B. Turn on the HTTPS Only protocol setting:** While HTTPS is necessary for secure communication, it doesn't directly enforce *mutual* authentication. It only ensures that all communication happens over HTTPS, not that client certificates are used. * **C. Set the Minimum TLS Version protocol setting to 1.2:** Improving security by setting a minimum TLS version is a good practice but doesn't directly relate to mutual authentication. * **D. Change the pricing tier of the App Service plan:** This is irrelevant to the authentication requirements. Note: The provided discussion shows some disagreement on the precise configuration of option E ("Incoming client certificates"). The suggested answer and the explanation highlight that the setting should be set to "Require" client certificates for full mutual authentication.
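As a rough sketch, option E's setting maps to the site's `clientCertEnabled`/`clientCertMode` properties and can be applied from the Azure CLI roughly as follows (resource names are placeholders, and the generic `--set` syntax is an assumption worth verifying against current CLI docs):

```shell
# Require incoming client certificates on the web app (mutual TLS).
az webapp update \
  --resource-group RG1 \
  --name WebApp1 \
  --set clientCertEnabled=true clientCertMode=Required
```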
324
**** [View Question](https://www.examtopics.com/discussions/databricks/view/57175-exam-az-500-topic-2-question-24-discussion/) You create a new Azure subscription that is associated with a new Azure Active Directory (Azure AD) tenant. You create one active conditional access policy named Portal Policy. Portal Policy is used to provide access to the Microsoft Azure Management cloud app. The Conditions settings for Portal Policy are configured as shown in the Conditions exhibit. (See image below). The Grant settings for Portal Policy are configured as shown in the Grant exhibit. (See image below). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. **Conditions Exhibit Image:** (Image depicting Conditional Access policy conditions, showing "Locations" set to "Contoso" and other relevant settings. The exact content is not provided, but the description implies it includes location-based settings.) **Grant Exhibit Image:** (Image depicting Conditional Access policy grant controls, showing "Grant access" and "Require multi-factor authentication" being checked. The exact content is not provided but the description states MFA is required). **Hot Area Image:** (Image showing three boxes, Box 1: "Users in the Contoso location are required to use multi-factor authentication (MFA) to access the Azure portal.", Box 2: "Users in the Contoso location are required to use MFA to access web services hosted in Azure.", Box 3: "Users external to the Contoso location are required to use MFA to access the Azure portal.") **
** Box 1: Yes - The policy explicitly requires MFA for users in the Contoso location accessing the Azure portal. Box 2: No - The policy applies only to the Azure portal and Azure management endpoints; it does not cover web services hosted in Azure. Box 3: No - The policy only applies to users within the specified Contoso location; users external to this location are not affected by this policy. **Explanation:** The provided images (not directly available here, but described in the question) show that the conditional access policy "Portal Policy" is configured to apply only to users located in "Contoso" and requires multi-factor authentication (MFA) for access to Azure management resources (Azure portal). The policy's scope is specifically defined by the "Locations" condition. External users and access to web services hosted within Azure are outside the policy's scope, hence the "No" answers for boxes 2 and 3. Box 1 directly reflects the policy configuration. **Why other options are incorrect:** The answers align with the policy's explicit configuration. The "No" answers for boxes 2 and 3 correctly identify that the policy's scope is limited to the specified location and to the Azure portal specifically. There is no indication in the provided description that the policy applies to web services hosted within Azure or to users outside the "Contoso" location. **Note:** The discussion section shows agreement among users regarding the correct answers.
325
[View Question](https://www.examtopics.com/discussions/databricks/view/57259-exam-az-500-topic-1-question-8-discussion/) You have been tasked with configuring an access review, which you plan to assign to a new collection of reviews. You also have to make sure that the reviews can be reviewed by resource owners. You start by creating an access review program and an access review control. You now need to configure the Reviewers. Which of the following should you set Reviewers to? A. Selected users. B. Members (Self). C. Group Owners. D. Anyone.
C. Group Owners The suggested answer is C, Group Owners. The rationale provided is that to ensure resource owners can review access, selecting "Group Owners" as the reviewer type allows owners of resources to directly review and manage access. However, there is significant disagreement in the discussion. Several commenters argue that the correct answer should be A, Selected users, because resource owners vary across different resources and selecting them individually provides more control. Others incorrectly assert that using an access review program precludes the use of ARM access reviews and therefore implies a group-based solution. The question itself is ambiguous regarding the type of resource and the relationship between resource ownership and group membership, creating the source of the conflicting opinions. Therefore, the provided answer is the suggested answer from the original source, but it is important to note this lack of consensus.
326
**** [View Question](https://www.examtopics.com/discussions/databricks/view/57541-exam-az-500-topic-1-question-20-discussion/) You are in the process of creating an Azure Kubernetes Service (AKS) cluster. The Azure Kubernetes Service (AKS) cluster must be able to connect to an Azure Container Registry. You want to make sure that Azure Kubernetes Service (AKS) cluster authenticates to the Azure Container Registry by making use of the auto-generated service principal. Solution: You create an Azure Active Directory (Azure AD) role assignment. Does the solution meet the goal? A. Yes B. No **
** B. No The solution does not meet the goal. While creating an Azure AD role assignment *might* seem like a solution, the authentication between AKS and ACR using the auto-generated service principal is typically handled automatically. No manual configuration, including Azure AD role assignments, is usually necessary. The AKS cluster is automatically granted the necessary permissions to pull images from the ACR. Therefore, creating an additional Azure AD role assignment is unnecessary and might even be counterproductive. The discussion highlights some disagreement on the exact reason. Some participants believe the answer is 'No' because an RBAC role, rather than an AAD role, is needed. Others state that the authentication happens automatically without any manual role creation, which is the correct interpretation based on Azure's typical behavior. **Why other options are incorrect:** A. Yes: This is incorrect because creating an Azure AD role assignment is not necessary for AKS to authenticate with ACR using the auto-generated service principal. The authentication is usually automatic.
327
**** [View Question](https://www.examtopics.com/discussions/databricks/view/57545-exam-az-500-topic-1-question-23-discussion/) You have been tasked with delegating administrative access to your company's Azure key vault. You have to make sure that a specific user is able to add and delete certificates in the key vault. You also have to make sure that access is assigned based on the principle of least privilege. Which of the following options should you use to achieve your goal? A. A key vault access policy B. Azure policy C. Azure AD Privileged Identity Management (PIM) D. Azure DevOps **
** A. A key vault access policy A key vault access policy allows granular control over access to specific resources within an Azure Key Vault, such as certificates. By assigning a policy that only permits adding and deleting certificates, you adhere to the principle of least privilege, granting only the necessary permissions. This is different from granting broader roles which might allow other unintended actions. **Why other options are incorrect:** * **B. Azure policy:** Azure Policy manages governance and compliance across Azure resources. It doesn't directly control access to individual Key Vault resources. * **C. Azure AD Privileged Identity Management (PIM):** PIM manages privileged accounts and access, typically for administrative roles. While it can be used in conjunction with Key Vault, it's not the direct mechanism for granting specific permissions to manage certificates. * **D. Azure DevOps:** Azure DevOps is for managing software development workflows; it's not related to managing Key Vault access. **Note:** While the suggested answer is overwhelmingly A, there's some discussion about using built-in roles (like "Key Vault Certificates Officer"). However, the question emphasizes the need for *least privilege*, and a custom access policy provides more precise control than a pre-defined role.
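For illustration, a least-privilege access policy covering only certificate creation and deletion could be granted from the Azure CLI roughly like this (vault name and user principal name are placeholders):

```shell
# Grant a single user only the certificate create/delete permissions.
az keyvault set-policy \
  --name MyKeyVault \
  --upn user1@contoso.com \
  --certificate-permissions create delete
```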
328
**** [View Question](https://www.examtopics.com/discussions/databricks/view/57613-exam-az-500-topic-2-question-13-discussion/) You have an Azure Container Registry named Registry1. You add role assignments for Registry1 as shown in the following table. | User | Role Assignment | |---|---| | User1 | Owner | | User2 | AcrPull | | User3 | AcrImageSigner | | User4 | Contributor | Which users can upload images to Registry1 and download images from Registry1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** **Box 1: Users who can upload images to Registry1: User1 and User4 only.** Owners and Contributors have permissions to upload images. AcrPush would also allow uploading, but no user in the table holds it. **Box 2: Users who can download images from Registry1: User1, User2, and User4.** Owners, Contributors, and those with the AcrPull role can download images. AcrImageSigner only allows image signing. **Explanation:** The Microsoft documentation on Azure Container Registry roles clarifies the permissions granted by each role. The Owner role provides full control, the Contributor role allows upload and download, AcrPush allows pushing and pulling, AcrPull only allows downloading, and AcrImageSigner only allows signing images. Therefore, only User1 (Owner) and User4 (Contributor) can upload, while User1 (Owner), User2 (AcrPull), and User4 (Contributor) can download. **Why other options are incorrect:** The discussion shows some disagreement about the precise users, but the consensus aligns with the provided answer based on the standard Microsoft documentation for Azure Container Registry roles. Options suggesting only User2 or User3 having upload/download capabilities are incorrect as per the documented roles. Similarly, excluding User1 or User4 from either upload or download is incorrect given their assigned roles.
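The role-to-permission reasoning can be summarized as a small capability matrix; the following Python sketch (names are illustrative, not an Azure SDK API) reproduces it:

```python
# Capability matrix for the ACR built-in roles in this question.
# Mapping reflects Azure's documented built-in container registry roles.
ACR_ROLES = {
    "Owner":          {"push": True,  "pull": True},
    "Contributor":    {"push": True,  "pull": True},
    "AcrPush":        {"push": True,  "pull": True},
    "AcrPull":        {"push": False, "pull": True},
    "AcrImageSigner": {"push": False, "pull": False},  # signing only
}

def users_who_can(assignments, capability):
    """Return the users whose assigned role grants the given capability."""
    return [user for user, role in assignments.items()
            if ACR_ROLES[role][capability]]

assignments = {"User1": "Owner", "User2": "AcrPull",
               "User3": "AcrImageSigner", "User4": "Contributor"}

print(users_who_can(assignments, "push"))  # ['User1', 'User4']
print(users_who_can(assignments, "pull"))  # ['User1', 'User2', 'User4']
```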
329
**** [View Question](https://www.examtopics.com/discussions/databricks/view/57784-exam-az-500-topic-2-question-58-discussion/) You have an Azure subscription that contains the custom roles shown in the following table. (Image 1 shows a table of existing custom roles, details not needed for the question itself). In the Azure portal, you plan to create new custom roles by cloning existing roles. The new roles will be configured as shown in the following table. (Image 2 shows a table of new custom roles to be created by cloning, details not needed for the question itself). Which roles can you clone to create each new role? (Image 3 shows a hot area for selecting answers, this is not needed for the question itself). **
** You can clone existing custom roles to create the new custom roles. Built-in Azure AD roles cannot be cloned, but built-in subscription roles and existing custom roles (of either type) can be. The specific existing roles needed to create the new roles are not specified in the provided text and images; the tables only establish that the new roles are created by cloning. **Why other options are incorrect:** The question does not provide alternative options to select. The discussion mentions that built-in Azure AD roles cannot be cloned, so attempting to clone those would be incorrect. **Note:** The discussion shows some disagreement on the specifics of cloning (Azure AD vs. subscription roles), but the core answer remains consistent. The provided text and images do not include enough information to give a definitive list of which specific existing roles should be cloned to create each new role.
330
[View Question](https://www.examtopics.com/discussions/databricks/view/57792-exam-az-500-topic-1-question-2-discussion/) Your company has an Active Directory forest with a single domain, named weylandindustries.com. They also have an Azure Active Directory (Azure AD) tenant with the same name. You have been tasked with integrating Active Directory and the Azure AD tenant. You intend to deploy Azure AD Connect. Your strategy for the integration must make sure that password policies and user logon limitations affect user accounts that are synced to the Azure AD tenant, and that the number of required servers is reduced. Solution: You recommend the use of pass-through authentication and seamless SSO with password hash synchronization. Does the solution meet the goal? A. Yes B. No
A. Yes The solution meets the goal because pass-through authentication ensures that password policies and logon limitations from the on-premises Active Directory are enforced on synced Azure AD accounts. Seamless SSO improves user experience. This combination reduces server requirements compared to a federation solution using AD FS (which requires additional servers). The use of password hash synchronization in conjunction with pass-through authentication is also a valid approach noted in the discussion, though there is some disagreement among users on whether it is strictly necessary. Password hash synchronization alone does *not* meet the goal due to its lack of enforcement of password policies. Why other options are incorrect: B. No: This is incorrect because pass-through authentication with seamless SSO effectively addresses both the requirement for enforcing password policies and minimizing server infrastructure. Note: There is some debate in the discussion regarding the necessity of password hash synchronization when using pass-through authentication. However, the core principle that pass-through authentication ensures policy enforcement remains consistent.
331
[View Question](https://www.examtopics.com/discussions/databricks/view/57793-exam-az-500-topic-4-question-28-discussion/) You have three on-premises servers named Server1, Server2, and Server3 that run Windows Server 2019. Server1 and Server2 are located on the internal network. Server3 is located on the perimeter network. All servers have access to Azure. From Azure Sentinel, you install a Windows firewall data connector. You need to collect Microsoft Defender Firewall data from the servers for Azure Sentinel. What should you do? A. Create an event subscription from Server1, Server2, and Server3. B. Install the On-premises data gateway on each server. C. Install the Microsoft Monitoring Agent on each server. D. Install the Microsoft Monitoring Agent on Server1 and Server2. Install the On-premises data gateway on Server3.
C. Install the Microsoft Monitoring Agent on each server. The Microsoft Monitoring Agent (MMA) is the correct solution for collecting Microsoft Defender Firewall data from on-premises servers and sending it to Azure Sentinel. The MMA is designed to collect various logs and metrics from Windows servers, including firewall events. Installing it on each server ensures comprehensive data collection. Why other options are incorrect: * **A. Create an event subscription from Server1, Server2, and Server3:** Event subscriptions are used for different purposes, such as reacting to events and triggering actions. They are not the primary mechanism for collecting security logs from servers for centralized monitoring in Azure Sentinel. * **B. Install the On-premises data gateway on each server:** The on-premises data gateway is used to connect on-premises data sources to cloud services like Power BI. While it might be involved in some data integration scenarios, it's not directly involved in collecting Microsoft Defender Firewall logs for Azure Sentinel. The discussion explicitly labels this as a "red herring". * **D. Install the Microsoft Monitoring Agent on Server1 and Server2. Install the On-premises data gateway on Server3:** This combines incorrect elements of options B and C. As explained above, the MMA is the appropriate agent, and there's no need for a separate on-premises data gateway for this specific task. Note: The discussion shows overwhelming agreement that C is the correct answer.
332
[View Question](https://www.examtopics.com/discussions/databricks/view/57796-exam-az-500-topic-4-question-29-discussion/) You have an Azure subscription that contains several Azure SQL databases and an Azure Sentinel workspace. You need to create a saved query in the workspace to find events reported by Azure Defender for SQL. What should you do? A. From Azure CLI, run the Get-AzOperationalInsightsWorkspace cmdlet. B. From the Azure SQL Database query editor, create a Transact-SQL query. C. From the Azure Sentinel workspace, create a Kusto query language query. D. From Microsoft SQL Server Management Studio (SSMS), create a Transact-SQL query.
C. From the Azure Sentinel workspace, create a Kusto query language query. Azure Sentinel uses Kusto Query Language (KQL) for querying data. Since the goal is to create a saved query within the Azure Sentinel workspace to find events from Azure Defender for SQL, using KQL within the Sentinel workspace is the correct approach. Azure Defender for SQL logs its findings to Azure Sentinel, and these logs are queried using KQL. Why other options are incorrect: * **A:** `Get-AzOperationalInsightsWorkspace` is an Azure PowerShell cmdlet (not an Azure CLI command, despite the option's wording) that retrieves workspace information; it doesn't create or execute queries within the workspace. * **B & D:** While Transact-SQL (T-SQL) is used to query Azure SQL databases directly, this question asks about querying *Azure Sentinel*, which uses KQL, not T-SQL. Therefore, querying the SQL databases themselves wouldn't provide the required events from Azure Defender for SQL within the Sentinel workspace. Note: The discussion strongly supports option C as the correct answer, with multiple users reporting seeing this question on the AZ-500 exam and confirming C as the correct answer.
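As an illustrative starting point, a saved query in the Sentinel workspace would typically filter the `SecurityAlert` table; the exact filter value below is an assumption and may vary by environment:

```kusto
// Sketch: surface Defender for SQL findings from the SecurityAlert table.
SecurityAlert
| where ProductName has "SQL"
| project TimeGenerated, AlertName, AlertSeverity, ResourceId
```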
333
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58087-exam-az-500-topic-2-question-28-discussion/) Your company has an Azure subscription named Subscription1 that contains the users shown in the following table. | User Name | Role | |------------|----------------| | User1 | Account Admin | | User2 | Billing Admin | | User3 | Contributor | The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** * **User:** User2 (Billing Administrator) * **Tool:** Azure Account Center **Explanation:** Based on the provided Microsoft documentation and the highly upvoted responses in the discussion, only the Billing Administrator can transfer ownership of an Azure subscription. The Azure Account Center is the tool used to manage billing and subscription transfers. **Why other options are incorrect:** The discussion shows some disagreement regarding whether the Account Admin or Billing Admin can perform the transfer. However, the majority of the discussion and the linked Microsoft documentation strongly support the Billing Admin's role in this process. Therefore, User2 (Billing Admin) is the correct answer. While the Azure portal is mentioned as a method for transferring ownership, the question doesn't explicitly limit the answer to the portal itself, and Azure Account Center remains a valid and appropriate tool for this task. Other tools mentioned (Azure CLI, Bash) are not the primary or recommended methods. **Note:** There is some conflicting information in the discussion regarding who (Account Admin vs. Billing Admin) can perform the subscription transfer. The answer provided reflects the majority opinion and the information directly referenced from Microsoft's documentation.
334
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58259-exam-az-500-topic-4-question-59-discussion/) You have an Azure subscription that contains the following resources: ✑ A network virtual appliance (NVA) that runs non-Microsoft firewall software and routes all outbound traffic from the virtual machines to the internet ✑ An Azure function that contains a script to manage the firewall rules of the NVA ✑ Azure Security Center standard tier enabled for all virtual machines ✑ An Azure Sentinel workspace ✑ 30 virtual machines You need to ensure that when a high-priority alert is generated in Security Center for a virtual machine, an incident is created in Azure Sentinel and then a script is initiated to configure a firewall rule for the NVA. How should you configure Azure Sentinel to meet the requirements? To answer, drag the appropriate components to the correct requirements. Each component may be used once, more than once, or not at all. The image below shows the drag-and-drop question components. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0040700006.png) **
** The correct configuration involves using the following components in Azure Sentinel: 1. **Data connector for Security Center:** This is crucial to import Security Center alerts into Azure Sentinel. Without this, Sentinel won't receive the high-priority alerts from Security Center. 2. **Rule:** A rule in Azure Sentinel is needed to define the trigger (high-priority Security Center alert) to create an incident. This rule filters incoming alerts, selecting only those high-priority alerts from Security Center. 3. **Playbook:** This automates the response to the incident. The playbook would trigger the Azure function that manages the NVA firewall rules, thus configuring the necessary firewall rule in response to the high-priority alert. **Why other options are incorrect (implicitly):** The question requires a solution using these three components to achieve the desired workflow: Security Center alert -> Azure Sentinel incident -> Firewall rule update. Any solution omitting one of these three key components will fail to deliver the complete automated response chain. The suggested answer image (not included here but available at the original URL) visually confirms this solution. **Note:** The discussion section shows agreement on the suggested answer being correct. There is no indication of conflicting opinions or disagreements about the solution.
335
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58276-exam-az-500-topic-1-question-4-discussion/) Your company has an Active Directory forest with a single domain, named weylandindustries.com. They also have an Azure Active Directory (Azure AD) tenant with the same name. You have been tasked with integrating Active Directory and the Azure AD tenant. You intend to deploy Azure AD Connect. Your strategy for the integration must make sure that password policies and user logon limitations affect user accounts that are synced to the Azure AD tenant, and that the number of required servers is reduced. Solution: You recommend the use of password hash synchronization and seamless SSO. Does the solution meet the goal? A. Yes B. No **
** B. No The solution of using password hash synchronization (PHS) and seamless SSO does not meet the goal. While PHS synchronizes password hashes to Azure AD, enabling single sign-on (SSO) via Seamless SSO, it does *not* fully enforce on-premises password policies and logon restrictions on synced user accounts. Authentication happens primarily in the cloud after the hash synchronization, meaning Azure AD's own policies, which may differ from on-premises ones, will take precedence. Specifically, PHS may support some password complexity rules but does *not* support password expiration or other user logon restrictions. To fully enforce on-premises password policies and logon limitations, Pass-through Authentication would be a more appropriate solution. **Why other options are incorrect:** * **A. Yes:** This is incorrect because, as explained above, password hash synchronization alone does not fully enforce on-premises password policies and logon restrictions on Azure AD synced accounts. **Note:** There is a consensus among the discussion participants that option B is the correct answer. However, there is some nuance in the understanding of what aspects of password policy are and are not supported by PHS; some comments suggest partial support for complexity but not expiration.
336
[View Question](https://www.examtopics.com/discussions/databricks/view/58277-exam-az-500-topic-1-question-25-discussion/) You have an Azure virtual machine that runs Ubuntu 16.04-DAILY-LTS. You plan to deploy and configure an Azure Key vault, and enable Azure Disk Encryption for the virtual machine. Which of the following is TRUE with regards to Azure Disk Encryption for a Linux VM? A. It is NOT supported for basic tier VMs. B. It is NOT supported for standard tier VMs. C. OS drive encryption for Linux virtual machine scale sets is supported. D. Custom image encryption is supported.
The correct answer is **A. It is NOT supported for basic tier VMs.** Azure Disk Encryption does not support basic tier VMs for Linux. This is explicitly stated in the provided discussion and confirmed by multiple users. The discussion highlights several unsupported scenarios for Azure Disk Encryption on Linux VMs, including encrypting basic tier VMs. Why other options are incorrect: * **B. It is NOT supported for standard tier VMs:** This is false. The discussion indicates that Azure Disk Encryption *is* supported for standard tier VMs. * **C. OS drive encryption for Linux virtual machine scale sets is supported:** This is false. The discussion explicitly states that OS drive encryption for Linux virtual machine scale sets is NOT supported. * **D. Custom image encryption is supported:** This is false. The provided text clearly states that custom image encryption is NOT supported for Linux VMs. Note: The discussion shows a consensus among users regarding the correct answer.
337
[View Question](https://www.examtopics.com/discussions/databricks/view/58297-exam-az-500-topic-1-question-37-discussion/) You have been tasked with making sure that you are able to modify the operating system security configurations via Azure Security Center. To achieve your goal, you need to have the correct pricing tier for Azure Security Center in place. Which of the following is the pricing tier required? A. Advanced B. Premium C. Standard D. Free
The provided question and suggested answer are outdated and inaccurate. The original answer suggested "C. Standard," but this is incorrect based on the discussion. Azure Security Center has been rebranded to Microsoft Defender for Cloud, and the pricing model has significantly changed. The current approach involves turning Defender plans "on" or "off," rather than selecting from specific tiers like Standard, Premium, etc. To modify OS security configurations, Microsoft Defender for Servers Plan 2 is now recommended. Therefore, none of the original options are correct. WHY OTHER OPTIONS ARE INCORRECT: * **A. Advanced:** This option is irrelevant to the current Microsoft Defender for Cloud structure. * **B. Premium:** This option is also irrelevant to the current Microsoft Defender for Cloud structure. * **C. Standard:** While this was the suggested answer, the discussion clearly indicates this is outdated and incorrect. * **D. Free:** The free tier would not provide the capability to modify OS security configurations. NOTE: There is significant disagreement in the discussion regarding the validity of the question and the suggested answer due to changes in Azure Security Center's structure and rebranding to Microsoft Defender for Cloud. The answer provided reflects the current best practice based on the discussion.
338
[View Question](https://www.examtopics.com/discussions/databricks/view/58326-exam-az-500-topic-1-question-5-discussion/) Your company has an Active Directory forest with a single domain, named weylandindustries.com. They also have an Azure Active Directory (Azure AD) tenant with the same name. After syncing all on-premises identities to Azure AD, you are informed that users with a givenName attribute starting with LAB should not be allowed to sync to Azure AD. Which of the following actions should you take? A. You should make use of the Synchronization Rules Editor to create an attribute-based filtering rule. B. You should configure a DNAT rule on the Firewall. C. You should configure a network traffic filtering rule on the Firewall. D. You should make use of Active Directory Users and Computers to create an attribute-based filtering rule.
A. You should make use of the Synchronization Rules Editor to create an attribute-based filtering rule. The Synchronization Rules Editor in Azure AD Connect allows administrators to define custom rules to control which objects and attributes are synchronized between on-premises Active Directory and Azure AD. This is the appropriate tool to filter users based on the `givenName` attribute, preventing users whose `givenName` starts with "LAB" from being synced. Why other options are incorrect: * **B. You should configure a DNAT rule on the Firewall:** A DNAT (Destination Network Address Translation) rule changes the destination IP address of network traffic. This is irrelevant to filtering user synchronization based on Active Directory attributes. * **C. You should configure a network traffic filtering rule on the Firewall:** Similar to DNAT, a network traffic filtering rule controls network access, not the synchronization of user accounts between directories. * **D. You should make use of Active Directory Users and Computers to create an attribute-based filtering rule:** Active Directory Users and Computers manages on-premises Active Directory objects. It doesn't directly control what is synchronized to Azure AD. The synchronization process is managed by Azure AD Connect and its Synchronization Rules Editor. Note: The provided discussion shows a consensus among users that option A is the correct answer.
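For intuition, the scoping filter's effect is equivalent to the following filter logic (a sketch with made-up user records; the real rule is configured in the Synchronization Rules Editor as a scoping filter such as `givenName NOTSTARTSWITH "LAB"`, not as Python):

```python
# Hypothetical user records; only givenName matters for the filter.
users = [
    {"givenName": "LAB-Test01", "upn": "labtest01@weylandindustries.com"},
    {"givenName": "Ellen", "upn": "ellen@weylandindustries.com"},
    {"givenName": "LABUser", "upn": "labuser@weylandindustries.com"},
]

# Keep only users whose givenName does NOT start with "LAB" -
# the same condition the scoping filter expresses.
to_sync = [u for u in users if not u["givenName"].startswith("LAB")]
print([u["upn"] for u in to_sync])  # only ellen@weylandindustries.com survives
```

In the actual Synchronization Rules Editor, this is done by adding an inbound rule (or editing the scoping of the default "In from AD - User Join" rule) with that attribute condition, then running a full synchronization cycle.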
339
[View Question](https://www.examtopics.com/discussions/databricks/view/58355-exam-az-500-topic-1-question-26-discussion/) You have configured an Azure Kubernetes Service (AKS) cluster in your testing environment. You are currently preparing to deploy the cluster to the production environment. After disabling HTTP application routing, you want to replace it with an application routing solution that allows for reverse proxy and TLS termination for AKS services via a solitary IP address. You must create an AKS Ingress controller. Select `No adjustment required` if the underlined segment is accurate. If the underlined segment is inaccurate, select the accurate option. A. No adjustment required. B. a network security group C. an application security group D. an Azure Basic Load Balancer
A. No adjustment required. An AKS Ingress controller is the appropriate solution for providing reverse proxy and TLS termination for AKS services using a single IP address. It routes traffic to multiple services within the Kubernetes cluster, manages TLS termination, and acts as a reverse proxy, fulfilling the scenario's requirements. The discussion shows strong consensus that option A is correct. WHY OTHER OPTIONS ARE INCORRECT: * **B. a network security group:** Network Security Groups (NSGs) control network traffic to and from Azure resources, but they don't provide reverse proxy or TLS termination functionalities. * **C. an application security group:** Application Security Groups (ASGs) are used for managing network access within a virtual network, but they are not application routing solutions. * **D. an Azure Basic Load Balancer:** While Azure Load Balancers can distribute traffic, they don't inherently offer the reverse proxy and TLS termination features needed in this scenario. An Ingress controller is better suited for managing traffic within a Kubernetes cluster.
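As an illustration, a minimal Ingress resource that terminates TLS and reverse-proxies an AKS service behind the controller's single public IP might look like the following sketch (the host name, secret name, and service name are hypothetical, and an NGINX ingress controller is assumed to be installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is deployed
  tls:
  - hosts:
    - app1.example.com
    secretName: app1-tls           # TLS certificate stored as a Kubernetes secret
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc         # backing AKS service receiving proxied traffic
            port:
              number: 80
```

The controller itself is exposed through a single LoadBalancer service, so every Ingress rule it serves shares that one IP address, which is exactly the "solitary IP address" requirement in the question.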
340
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58357-exam-az-500-topic-1-question-30-discussion/) Your company has an Azure subscription linked to their Azure Active Directory (Azure AD) tenant. As a Global administrator for the tenant, part of your responsibilities involves managing Azure Security Center settings. You are currently preparing to create a custom sensitivity label. Solution: You start by creating a custom sensitive information type. Does the solution meet the goal? A. Yes B. No **
** A. Yes **Explanation:** While the question focuses on creating a custom sensitivity label, creating a custom sensitive information type is a necessary precursor or a helpful step in the process, especially for more complex scenarios. Custom sensitive information types define what constitutes sensitive data, which is then used as a condition when creating custom sensitivity labels. The labels then apply the protection (encryption, access restrictions, etc.) based on the identification of the sensitive information type. Therefore, creating the custom sensitive information type is a valid first step towards creating a custom sensitivity label. **Why other options are incorrect:** B. No: This is incorrect because creating a custom sensitive information type is a valid, and often necessary, step to define the criteria for the custom sensitivity label. It's not strictly *required* in all cases, which is the source of some discussion in the provided comments. However, it is part of a common and effective approach. **Note:** The discussion reveals disagreement regarding the necessity of creating a custom sensitive information type *before* creating a custom sensitivity label. Some argue that custom labels can be created independently, while others maintain that defining the sensitive information type first is best practice, particularly for more complex labeling schemes. The provided answer reflects the more comprehensive approach, where defining sensitive information types first allows for better precision and control in label creation.
341
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58358-exam-az-500-topic-1-question-39-discussion/) Your company's Azure subscription is linked to their Azure Active Directory (Azure AD) tenant. After an internally developed application is registered in Azure AD, you are tasked with making sure that the application has the ability to access Azure Key Vault secrets on the application users' behalf. Solution: You configure a delegated permission with no admin consent. Does the solution meet the goal? A. Yes B. No **
** A. Yes The solution meets the goal. As explained in the discussion, registering an Azure AD app and granting it the "user_impersonation" delegated permission in Azure Key Vault allows the application to access secrets on behalf of users without requiring admin consent. The "Application Permissions" option is noted as being dimmed out, which further supports the correct approach using delegated permissions. **WHY OTHER OPTIONS ARE INCORRECT:** B. No: This is incorrect based on the explanation provided in the upvoted comments. The "user_impersonation" delegated permission specifically enables an application to access resources (like Key Vault secrets) on behalf of a user without needing administrator-level consent. While there is a dissenting opinion in the discussion, the majority suggests that option A is correct. **NOTE:** There is some disagreement in the discussion regarding the solution's correctness. While the majority of comments support option A, one comment suggests option B. The answer provided here reflects the majority opinion and explanation given in the discussion.
342
**** [View Question](https://www.examtopics.com/discussions/databricks/view/58359-exam-az-500-topic-1-question-32-discussion/) Your company uses Azure DevOps with branch policies configured. Which of the following is TRUE with regards to branch policies? (Choose all that apply.) A. It enforces your team's change management standards. B. It controls who can read and update the code in a branch. C. It enforces your team's code quality. D. It places a branch into a read-only state. **
** A and C are correct. Branch policies in Azure DevOps are designed to enforce team standards related to change management and code quality. Option A is correct because branch policies define requirements (like work item linking or approval checks) that must be met before a pull request can be merged. These requirements directly enforce change management procedures. Option C is correct because branch policies can mandate that code must pass automated builds and tests before being merged, thus enforcing code quality standards. Option B is incorrect. While branch policies can indirectly influence who can access and modify code (e.g., through requirements linked to specific users or groups), their primary function is not access control. Access control is managed through different Azure DevOps features, such as permissions and security groups. Option D is incorrect. Branch policies do not inherently place a branch into a read-only state. A branch's accessibility is determined by permissions and access control settings, independent of branch policies. Branch policies are concerned with *how* changes are made to a branch, not whether a branch is read-only or not. **Note:** The discussion shows a strong consensus that options A and C are the correct answers. There is no evidence of significant disagreement within the provided discussion.
343
[View Question](https://www.examtopics.com/discussions/databricks/view/58690-exam-az-500-topic-1-question-6-discussion/) You have been tasked with applying conditional access policies for your company's current Azure Active Directory (Azure AD). The process involves assessing the risk events and risk levels. Which of the following is the risk level that should be configured for users that have leaked credentials? A. None B. Low C. Medium D. High
D. High Leaked credentials represent a significant security risk. Malicious actors possessing valid username and password combinations can easily compromise accounts, leading to data breaches and other security incidents. Therefore, assigning a "High" risk level is appropriate for users with leaked credentials in Azure AD Identity Protection. While the Microsoft documentation provides less specific details on risk level calculations than previously, the consensus among the discussion participants is that leaked credentials warrant a "High" risk level designation. The discussion notes that this is a subjective judgement based on the severity of the threat. Why other options are incorrect: * **A. None:** Ignoring leaked credentials is unacceptable; it leaves the organization vulnerable to attacks. * **B. Low:** Leaked credentials pose a much greater risk than a "Low" level suggests. * **C. Medium:** While a "Medium" risk might apply to some security events, leaked credentials represent a more severe threat. Note: The provided discussion highlights that Microsoft's documentation on risk levels has changed and is less prescriptive, leading to some interpretation on the specific risk level assignment. However, the consensus remains that leaked credentials should be treated as a high risk.
344
[View Question](https://www.examtopics.com/discussions/databricks/view/59370-exam-az-500-topic-1-question-14-discussion/) Your Company's Azure subscription includes a virtual network that has a single subnet configured. You have created a service endpoint for the subnet, which includes an Azure virtual machine that has Ubuntu Server 18.04 installed. You are preparing to deploy Docker containers to the virtual machine. You need to make sure that the containers can access Azure Storage resources and Azure SQL databases via the service endpoint. You need to perform a task on the virtual machine prior to deploying containers. Solution: You create an AKS Ingress controller. Does the solution meet the goal? A. Yes B. No
B. No. Creating an AKS Ingress controller is not the correct solution. An AKS Ingress controller is used for managing inbound traffic to applications *within* an AKS cluster, not for enabling individual Docker containers on a virtual machine to access Azure services via a service endpoint. The containers should be configured to use the service endpoint directly. The discussion highlights disagreement on the precise nature of why an ingress controller is irrelevant, but the consensus is that it's not the needed solution.
345
[View Question](https://www.examtopics.com/discussions/databricks/view/59499-exam-az-500-topic-1-question-1-discussion/) Your company recently created an Azure subscription. You have been tasked with making sure that a specified user is able to implement Azure AD Privileged Identity Management (PIM). Which of the following is the role you should assign to the user? A. The Global administrator role. B. The Security administrator role. C. The Password administrator role. D. The Compliance administrator role.
A. The Global administrator role. A Global administrator has the necessary permissions to enable and configure Azure AD Privileged Identity Management (PIM). This allows them to manage the assignment of privileged roles to other users through PIM. Why other options are incorrect: * **B. The Security administrator role:** Security administrators have broad security-related responsibilities but lack the authority to enable or configure PIM. * **C. The Password administrator role:** This role is limited to password management and does not provide PIM-related permissions. * **D. The Compliance administrator role:** This role focuses on compliance-related tasks and does not grant PIM configuration capabilities. Note: There is some discussion in the provided text suggesting that the answer may not be entirely accurate in all scenarios, as some users suggest that any user with an active role can enable PIM. However, the consensus and the most likely correct answer based on the context of assigning PIM *implementation* capabilities to a specific user point to the Global Administrator role. The question's focus on *implementing* PIM, and not simply using it, is crucial here.
346
[View Question](https://www.examtopics.com/discussions/databricks/view/59820-exam-az-500-topic-1-question-15-discussion/) Your Company's Azure subscription includes a virtual network that has a single subnet configured. You have created a service endpoint for the subnet, which includes an Azure virtual machine that has Ubuntu Server 18.04 installed. You are preparing to deploy Docker containers to the virtual machine. You need to make sure that the containers can access Azure Storage resources and Azure SQL databases via the service endpoint. You need to perform a task on the virtual machine prior to deploying containers. Solution: You install the container network interface (CNI) plug-in. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal. While a CNI plugin is beneficial for networking containers within a virtual network, it's not necessary for containers on a standard Ubuntu VM to access Azure Storage and SQL databases via a service endpoint. If the VM is already within the subnet with the service endpoint, the containers inherit the VM's network configuration and can access those resources without installing a CNI plugin. The CNI plugin is more relevant in scenarios involving AKS or Kubernetes clusters. The discussion highlights this disagreement – some believe the CNI plugin is necessary, while others (correctly) argue that it's unnecessary given the problem description. Why other options are incorrect: A. Yes: This is incorrect because installing the CNI plugin is not required for achieving the stated goal in this specific scenario. The VM already has network access; the containers inherit this. The CNI plugin is designed for more complex container orchestration scenarios.
347
**** [View Question](https://www.examtopics.com/discussions/databricks/view/59821-exam-az-500-topic-1-question-13-discussion/) Your Company's Azure subscription includes a virtual network that has a single subnet configured. You have created a service endpoint for the subnet, which includes an Azure virtual machine that has Ubuntu Server 18.04 installed. You are preparing to deploy Docker containers to the virtual machine. You need to make sure that the containers can access Azure Storage resources and Azure SQL databases via the service endpoint. You need to perform a task on the virtual machine prior to deploying containers. Solution: You create an application security group. Does the solution meet the goal? A. Yes B. No **
** B. No The solution of creating an Application Security Group (ASG) is unnecessary. A service endpoint already allows the virtual machine (and therefore the containers running within it) to access Azure Storage and Azure SQL databases directly. The service endpoint provides the necessary network connectivity. An ASG adds an extra layer of security control, but it's not required to fulfill the stated goal of enabling access via the existing service endpoint. Therefore, creating an ASG doesn't meet the goal and is redundant in this specific scenario. **Why other options are incorrect:** A. Yes - Incorrect because the service endpoint already provides the necessary access. Adding an ASG is not required to meet the objective. **Note:** While the primary answer is that the ASG is unnecessary, some discussion points suggest that in production environments with more complex security needs (custom subnets, zero trust principles), additional security measures like ASGs or Network Virtual Appliances (NVAs) might be implemented. However, based solely on the provided problem statement, the ASG is redundant.
348
[View Question](https://www.examtopics.com/discussions/databricks/view/59894-exam-az-500-topic-1-question-28-discussion/) Your company has an Azure subscription linked to its Azure Active Directory (Azure AD) tenant. As a Global administrator for the tenant, you manage Azure Security Center settings. You are preparing to create a custom sensitivity label. Solution: You start by altering the pricing tier of the Security Center. Does the solution meet the goal? A. Yes B. No
B. No Altering the pricing tier of Azure Security Center (now Microsoft Defender for Cloud) does not enable the creation of custom sensitivity labels. Custom sensitivity labels are a feature of Microsoft Purview Information Protection, a separate service. Therefore, the proposed solution is incorrect. The steps to create a custom sensitivity label involve creating a sensitive info type, defining patterns, choosing confidence levels, defining primary elements, setting character proximity values, and selecting a recommended confidence level. Why other options are incorrect: A. Yes - Incorrect. This option incorrectly states that changing the Security Center pricing tier is sufficient for creating custom sensitivity labels. The two are unrelated services. Note: There is some discussion regarding the obsolescence of the question due to changes in Microsoft's product naming and functionality (Azure Security Center is now Microsoft Defender for Cloud, and Microsoft Purview is required for data sensitivity classification). The core concept, however, remains valid: creating custom sensitivity labels is not a function of Defender for Cloud's pricing tier.
349
[View Question](https://www.examtopics.com/discussions/databricks/view/61213-exam-az-500-topic-5-question-5-discussion/) You have an Azure SQL Database server named SQL1. For SQL1, you turn on Azure Defender for SQL to detect all threat detection types. Which action will Azure Defender for SQL detect as a threat? A. A user updates more than 50 percent of the records in a table. B. A user attempts to sign in as SELECT * FROM table1. C. A user is added to the db_owner database role. D. A user deletes more than 100 records from the same table.
B. A user attempts to sign in as `SELECT * FROM table1`. Azure Defender for SQL is designed to detect suspicious activities, and attempting to sign in using a SQL query (like `SELECT * FROM table1`) is a clear indication of a potential SQL injection attack. This is a common attack vector, and Azure Defender is likely configured to flag such attempts as threats. Option A, updating a large percentage of records, might be suspicious depending on context, but it's not inherently a security threat in the same way as SQL injection. Option C, adding a user to the `db_owner` role, is a legitimate administrative action. Option D, deleting a large number of records, might also raise suspicion depending on context, but it isn't automatically flagged as a security threat like a direct SQL injection attempt. Note: The discussion shows a strong consensus that B is the correct answer, with multiple users reporting it as being correct on various exam dates.
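The reason a sign-in name like `SELECT * FROM table1` is treated as a threat is that SQL text appearing in an input field is the hallmark of injection. A minimal sketch (using SQLite and made-up table data, not Azure SQL) of why concatenated input is dangerous while a parameterized query is not:

```python
import sqlite3

# Toy database standing in for a real one; table and values are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' UNION SELECT name, secret FROM users --"

# Vulnerable: the input is concatenated straight into the statement, so
# the attacker's injected SELECT runs and dumps the whole table.
vulnerable = conn.execute(
    "SELECT name, secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value only,
# so no user named after the injected string exists and nothing is returned.
safe = conn.execute(
    "SELECT name, secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(vulnerable)  # the injected UNION leaks ('alice', 's3cret')
print(safe)        # [] - the input matched nothing as a plain string
```

Azure Defender for SQL flags exactly this pattern: SQL syntax arriving where a credential or data value is expected.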
350
[View Question](https://www.examtopics.com/discussions/databricks/view/61214-exam-az-500-topic-1-question-35-discussion/) Your company's Azure subscription includes a hundred virtual machines that have Azure Diagnostics enabled. You have been tasked with retrieving the identity of the user that removed a virtual machine fifteen days ago. You have already accessed Azure Monitor. Which of the following options should you use? A. Application Log B. Metrics C. Activity Log D. Logs
C. Activity Log The Activity log in Azure Monitor records all management operations performed on resources within a subscription. This includes actions like creating, modifying, and deleting virtual machines. Therefore, to find the identity of the user who deleted a VM fifteen days ago, the Activity Log is the appropriate tool. Why other options are incorrect: * **A. Application Log:** Application logs contain data generated by applications running on the virtual machines themselves. This is not relevant for identifying who performed a management action on the Azure resource. * **B. Metrics:** Metrics provide performance data about resources, not audit information about management operations. * **D. Logs:** This is too general. Azure Monitor contains multiple types of logs. The specific log needed is the Activity Log. Note: The discussion shows that multiple exam takers selected answer C and confirmed it as correct.
351
[View Question](https://www.examtopics.com/discussions/databricks/view/61215-exam-az-500-topic-1-question-36-discussion/) Your company's Azure subscription includes a hundred virtual machines that have Azure Diagnostics enabled. You have been tasked with analyzing the security events of a Windows Server 2016 virtual machine. You have already accessed Azure Monitor. Which of the following options should you use? A. Application Log B. Metrics C. Activity Log D. Logs
D. Logs Azure Monitor Logs is the correct option for analyzing security events from a Windows Server 2016 VM with Azure Diagnostics enabled. Azure Monitor Logs aggregates logs from various sources, including the Windows Event Logs, which contain security event information. The other options are incorrect because: * **A. Application Log:** This refers to a specific log *within* the Windows Event Logs, not the overarching Azure service for log analysis. * **B. Metrics:** Metrics provide performance data, not security events. * **C. Activity Log:** The Activity Log tracks management operations on Azure resources, not the security events within a VM. The discussion shows overwhelming agreement on this answer.
352
[View Question](https://www.examtopics.com/discussions/databricks/view/62351-exam-az-500-topic-1-question-34-discussion/) Your company's Azure subscription includes an Azure Log Analytics workspace. Your company has a hundred on-premises servers that run either Windows Server 2012 R2 or Windows Server 2016, and is linked to the Azure Log Analytics workspace. The Azure Log Analytics workspace is set up to gather performance counters associated with security from these linked servers. You have been tasked with configuring alerts according to the information gathered by the Azure Log Analytics workspace. You have to make sure that alert rules allow for dimensions, and that alert creation time should be kept to a minimum. Furthermore, a single alert notification must be created when the alert is created and when the alert is sorted out. You need to make use of the necessary signal type when creating the alert rules. Which of the following is the option you should use? A. You should make use of the Activity log signal type. B. You should make use of the Application Log signal type. C. You should make use of the Metric signal type. D. You should make use of the Audit Log signal type.
C. You should use the Metric signal type. Performance counters directly relate to metrics. The question explicitly states that the Azure Log Analytics workspace is gathering *performance counters*. Metric signals are designed for this type of data, allowing for efficient alert creation and the use of dimensions. The other options are unsuitable because they don't directly handle performance counter data. Why other options are incorrect: * **A. Activity log signal type:** Activity logs track administrative actions within Azure, not performance data. * **B. Application Log signal type:** Application logs contain application-specific events, not system performance metrics. * **D. Audit Log signal type:** Audit logs record security-related events, but not the performance metrics gathered from the servers. Note: The provided discussion shows a consensus that the correct answer is C.
353
[View Question](https://www.examtopics.com/discussions/databricks/view/62791-exam-az-500-topic-1-question-22-discussion/) You have been tasked with delegating administrative access to your company's Azure key vault. You have to make sure that a specific user can set advanced access policies for the key vault. You also have to make sure that access is assigned based on the principle of least privilege. Which of the following options should you use to achieve your goal? A. Azure Information Protection B. RBAC C. Azure AD Privileged Identity Management (PIM) D. Azure DevOps
The correct answer is B. RBAC (Role-Based Access Control). RBAC allows granular assignment of permissions to users and groups, enabling the principle of least privilege. To manage advanced access policies in Azure Key Vault, a specific RBAC role (like "Key Vault Contributor") grants the necessary permissions without providing unnecessary access. This directly addresses the question's requirements. Option A (Azure Information Protection) is incorrect because it deals with data classification and protection, not access control to Azure Key Vault. Option C (Azure AD Privileged Identity Management) is incorrect as it's used for managing privileged accounts, which is not the primary focus here; while PIM might be involved in a more comprehensive access management strategy, RBAC is the more direct and appropriate solution for this specific task. Option D (Azure DevOps) is incorrect because it's a platform for software development and doesn't relate to Azure Key Vault access control. The discussion shows some disagreement regarding the formatting of the original question options, but there's a clear consensus that the correct answer is B.
354
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62846-exam-az-500-topic-2-question-60-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains three security groups named Group1, Group2, and Group3 and the users shown in the following table.

| User | Group1 | Group2 | Group3 |
| ----- | ------ | ------ | ------ |
| User1 | Yes | No | No |
| User2 | No | Yes | Yes |
| User3 | No | Yes | No |
| User4 | No | No | Yes |

Group3 is a member of Group2. In contoso.com, you register an enterprise application named App1 that has the following settings: ✑ Owners: User1 ✑ Users and groups: Group2 You configure the properties of App1 as shown in the following exhibit. (The exhibit shows App1's assignment properties, indicating that users and groups assigned to App1 can access it.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. Statement 1: User1 can sign in to App1. Statement 2: User2 can sign in to App1. Statement 3: User4 can sign in to App1. **
** No, Yes, Yes. * **Statement 1: No.** User1 is only an owner of App1, not a user assigned to it. Ownership doesn't grant access to the application itself; it only grants administrative rights. * **Statement 2: Yes.** User2 is a member of Group2, and Group2 is assigned to App1. Therefore, User2 has access to App1. * **Statement 3: Yes.** User4 is a member of Group3, and Group3 is a member of Group2. Since Group2 is assigned access to App1, all members of nested groups also have access. **Why other options are incorrect:** The suggested answer (No, Yes, No) is incorrect regarding User4. The transitive membership of groups correctly grants User4 access to App1. The discussion shows agreement with the correct answer "No, Yes, Yes."
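The transitive-membership reasoning can be sketched as a toy resolver, using the group memberships from the scenario table (User2 and User3 in Group2, User2 and User4 in Group3, Group3 nested inside Group2):

```python
# Direct members of each group; a member may itself be a group.
direct_members = {
    "Group1": {"User1"},
    "Group2": {"User2", "User3", "Group3"},  # Group3 is nested in Group2
    "Group3": {"User2", "User4"},
}

def transitive_members(group):
    """Expand nested groups into the full set of user members."""
    users = set()
    for m in direct_members.get(group, set()):
        if m in direct_members:          # the member is itself a group: recurse
            users |= transitive_members(m)
        else:
            users.add(m)
    return users

# Group2 is assigned to App1, so its transitive user members can sign in.
app1_users = transitive_members("Group2")
print(sorted(app1_users))  # User4 is included via Group3; User1 (owner only) is not
```

This mirrors how Azure AD evaluates group-based app assignment: nested group members inherit the assignment, while ownership alone grants no sign-in access.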
355
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62849-exam-az-500-topic-4-question-67-discussion/) You have an Azure subscription that contains three storage accounts, an Azure SQL managed instance named SQL1, and three Azure SQL databases. The storage accounts are configured as shown in the following table. | Storage Account Name | Account Type | |---|---| | storage1 | General-purpose v2 | | storage2 | Premium | | storage3 | Standard | SQL1 has the following settings: * Auditing: On * Audit log destination: storage1 The Azure SQL databases are configured as shown in the following table. | Database Name | Auditing | Audit Log Destination | |---|---|---| | DB1 | Off | | | DB2 | On | storage2 | | DB3 | On | | For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. Auditing is enabled for DB1. 2. Auditing is enabled for DB2. 3. Auditing is enabled for DB3. **
** No, Yes, No * **Statement 1 (Auditing is enabled for DB1):** No. The table shows that auditing is explicitly "Off" for DB1. Server-level auditing (enabled for SQL1 and targeting storage1) does *not* automatically enable auditing at the database level. * **Statement 2 (Auditing is enabled for DB2):** Yes. The table shows that auditing is explicitly "On" for DB2, with the audit log destination set to storage2. * **Statement 3 (Auditing is enabled for DB3):** No. While server-level auditing is enabled for SQL1, it is not enabled at the database level for DB3 (the "Auditing" column is blank for DB3). **Explanation of why other options are incorrect:** The discussion shows significant disagreement on the correct answer. Some users believe all three statements should be "Yes," based on interpretations of server-level auditing implications. Others correctly identify that database-level auditing must be explicitly enabled. The provided answer reflects the interpretation that auditing needs to be explicitly enabled at the database level; server-level auditing does not automatically propagate to databases. The differing opinions highlight the ambiguity in the question's wording and the need for a clear understanding of database- and server-level auditing configurations in Azure SQL Managed Instance. Note that there is also debate regarding which storage account types are supported. However, the core issue of database-level auditing being explicitly required is independent of storage account considerations.
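The interpretation the answer takes can be modeled in a few lines of Python. Note this is a sketch of the answer's reading (database-level auditing must be explicitly "On"), which the discussion disputes — it is not a claim about Azure's actual runtime behavior:

```python
# Sketch of the interpretation above: auditing counts as enabled for a
# database only when it is switched on at the database level; the
# server-level setting alone does not flip it.

databases = {
    "DB1": {"auditing": "Off", "destination": None},
    "DB2": {"auditing": "On",  "destination": "storage2"},
    "DB3": {"auditing": None,  "destination": None},  # blank in the table
}

def db_auditing_enabled(db):
    """True only when auditing is explicitly 'On' for the database."""
    return databases[db]["auditing"] == "On"

for name in sorted(databases):
    print(name, db_auditing_enabled(name))  # DB1 False, DB2 True, DB3 False
```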
356
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63001-exam-az-500-topic-2-question-62-discussion/) Your network contains an on-premises Active Directory domain named adatum.com that syncs to Azure Active Directory (Azure AD). The Azure AD tenant contains the users shown in the following table. (Image shows a table of users). You configure the Authentication methods "Password Protection settings for adatum.com as shown in the following exhibit. (Image shows Password Protection settings with options for Audit mode and Enforcement mode, along with a custom block list). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image shows a table with three statements to evaluate: 1. Azure AD Password Protection evaluates all existing passwords in adatum.com. 2. If Audit mode is enabled, Azure AD Password Protection blocks all passwords that do not meet the password protection policy. 3. If Audit mode is enabled, Azure AD Password Protection logs all passwords that do not meet the password protection policy.) **
** No, No, Yes 1. **No:** Azure AD Password Protection does *not* evaluate existing passwords. It only validates passwords during password change or set operations. Passwords already in Active Directory before the deployment of Azure AD Password Protection remain unaffected. This is explicitly stated in the Microsoft documentation link provided in the discussion. 2. **No:** When Audit mode is enabled, Azure AD Password Protection does *not* block passwords that fail the policy. Instead, it logs them as events. The difference between Audit and Enforce mode is that Audit mode only logs violations without blocking the passwords. 3. **Yes:** In Audit mode, Azure AD Password Protection logs all passwords that do not meet the password protection policy. This logging allows administrators to assess the impact of the policy before enforcing it. **Why other options are incorrect:** The discussion highlights the key distinction between Audit and Enforce modes. Statement 1 is incorrect because the system only validates passwords during changes, not pre-existing ones. Statement 2 would be correct *only* if the question referred to Enforce mode. **Note:** There is some minor disagreement in the discussion regarding the interpretation of the "Enforce Custom List" setting and its interaction with Audit mode. However, the consensus and the provided Microsoft documentation support the answer given above.
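The Audit-versus-Enforce distinction can be captured in a short sketch. The banned list, function name, and log format below are invented for illustration — only the audit-logs/enforce-blocks behavior mirrors the explanation above:

```python
# Minimal model of Audit vs. Enforce mode: "audit" logs a would-be
# violation but lets the password through; "enforce" rejects it.

BANNED_SUBSTRINGS = ["contoso", "password"]  # illustrative custom block list

def check_password(password, mode="audit", log=None):
    log = log if log is not None else []
    violates = any(b in password.lower() for b in BANNED_SUBSTRINGS)
    if violates:
        log.append(f"policy violation: {password!r}")
        if mode == "enforce":
            return False, log        # blocked
    return True, log                 # accepted (audit mode only logs)

accepted, events = check_password("Contoso123!", mode="audit")
print(accepted, events)   # True, with one logged violation -- not blocked

accepted, _ = check_password("Contoso123!", mode="enforce")
print(accepted)           # False -- blocked in Enforce mode
```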
357
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63113-exam-az-500-topic-2-question-63-discussion/) Your company has an Azure subscription named Subscription1. Subscription1 is associated with the Azure Active Directory tenant that includes the users shown in the following table. | User Name | Role | |---|---| | User1 | Owner | | User2 | Contributor | | User3 | Reader | | User4 | Account Administrator | | User5 | Billing Administrator | The company is sold to a new owner. The company needs to transfer ownership of Subscription1. Which user can transfer the ownership and which tool should the user use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. (Note: Images showing a hotspot question and suggested answers are omitted here as they are not directly reproducible in this text format. The key information is extracted within the question and answer.) **
** The correct answer is User5 (Billing Administrator) using the Azure portal. **Explanation:** The discussion reveals a discrepancy in opinions. While some users indicate that only the Billing Administrator can transfer ownership, others suggest it's the Account Administrator. However, the prevailing and most strongly supported consensus, backed by multiple users and referencing Microsoft documentation (https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/billing-subscription-transfer), points to the Billing Administrator as the user with the necessary permissions. Therefore, User5 (Billing Administrator) is the correct choice. The Azure portal is the standard tool for managing Azure subscriptions. **Why other options are incorrect:** * **User1 (Owner):** While the owner has extensive control, they are not the designated user for transferring ownership. The Owner role is distinct from the Billing Administrator. * **User2 (Contributor):** Contributors only have permissions to manage resources, not ownership transfer. * **User3 (Reader):** Readers have only read-only access. * **User4 (Account Administrator):** While some debate exists, the majority of evidence points towards the Billing Administrator as the role with the specific right to transfer ownership according to Microsoft's documentation. **Note:** The discussion highlights some ambiguity regarding the precise role required (Account Administrator vs. Billing Administrator). This answer reflects the dominant perspective in the discussion, supported by user votes and reference to official documentation. Further research might be needed to definitively resolve this ambiguity if differing versions of the Azure portal exist.
358
[View Question](https://www.examtopics.com/discussions/databricks/view/63115-exam-az-500-topic-5-question-4-discussion/) HOTSPOT - You have an Azure key vault named KeyVault1 that contains the items shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0046200002.png) *(Image shows a table with items: Item1 (key), Item2 (secret), Policy1 (policy))* In KeyVault1, the following events occur in sequence: ✑ Item1 is deleted. ✑ Item2 and Policy1 are deleted. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0046300003.jpg) *(Image shows three boxes for Yes/No answers corresponding to the following statements)* Box1: You can recover Policy1. Box2: You can add a new key named Item1. Box3: You can recover Item2.
Box 1: No Box 2: No Box 3: Yes **Explanation:** * **Box 1:** Policies cannot be recovered, even with soft delete enabled. While secrets and keys can be recovered via soft delete, policies are a special case and are not recoverable. * **Box 2:** Soft delete is enabled by default in Azure Key Vaults. Because Item1 was deleted, it remains in a soft-deleted state. You cannot create a new key with the same name while the soft-deleted version exists. * **Box 3:** Item2 is a secret, and since soft delete is enabled by default, it can be recovered from the soft-delete state within the retention period. **Why other options are incorrect:** The discussion shows some disagreement about whether to assume soft delete is enabled. While it is the default, the question doesn't explicitly state it. However, the suggested answer and the majority of the discussion lean toward the assumption that soft delete is enabled, aligning with the provided answer. The consensus view reflects the provided answer, though a technically correct answer could argue for uncertainty regarding soft delete's enabled status if the problem statement lacked such clarification.
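The name-reservation behavior behind Box 2 and the recovery behind Box 3 can be illustrated with a toy model. This is purely an illustration of the soft-delete semantics described above, not the Key Vault API:

```python
# Toy model: a deleted item moves to a soft-deleted state, its name stays
# reserved until purged or recovered, and recovery restores it.

class ToyVault:
    def __init__(self):
        self.items = {}         # name -> kind ("key" / "secret")
        self.soft_deleted = {}

    def create(self, name, kind):
        if name in self.items or name in self.soft_deleted:
            raise ValueError(f"name {name!r} is in use or soft-deleted")
        self.items[name] = kind

    def delete(self, name):
        self.soft_deleted[name] = self.items.pop(name)

    def recover(self, name):
        self.items[name] = self.soft_deleted.pop(name)

vault = ToyVault()
vault.create("Item1", "key")
vault.create("Item2", "secret")
vault.delete("Item1")
vault.delete("Item2")

try:
    vault.create("Item1", "key")   # Box 2: blocked while soft-deleted
except ValueError as e:
    print(e)

vault.recover("Item2")             # Box 3: the secret is recoverable
print("Item2" in vault.items)      # True
```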
359
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63179-exam-az-500-topic-2-question-61-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0016000001.png) You need to ensure that ServerAdmins can perform the following tasks: ✑ Create virtual machines in RG1 only. ✑ Connect the virtual machines to the existing virtual networks in RG2 only. The solution must use the principle of least privilege. Which two role-based access control (RBAC) roles should you assign to ServerAdmins? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. a custom RBAC role for RG2 B. the Network Contributor role for RG2 C. the Contributor role for the subscription D. a custom RBAC role for the subscription E. the Network Contributor role for RG1 F. the Virtual Machine Contributor role for RG1 **
** A and F The correct answer is A (a custom RBAC role for RG2) and F (the Virtual Machine Contributor role for RG1). * **F (Virtual Machine Contributor role for RG1):** This role allows users to create VMs within RG1, fulfilling the first requirement. The principle of least privilege is adhered to because it only grants VM creation permissions within the specified resource group. * **A (Custom RBAC role for RG2):** While option B (Network Contributor for RG2) might seem tempting, it grants excessive permissions. A custom role for RG2 allows precise control, limiting permissions to *only* what's needed to connect VMs to the existing virtual networks in RG2. This ensures the principle of least privilege is followed. This custom role would only include the necessary actions related to virtual network connectivity. **Why other options are incorrect:** * **B (Network Contributor role for RG2):** This role provides broader permissions than necessary within RG2. It grants more access than strictly required for connecting VMs to existing networks, violating the principle of least privilege. * **C (Contributor role for the subscription):** This grants excessive permissions across the entire subscription. It violates the principle of least privilege significantly. * **D (Custom RBAC role for the subscription):** Similar to C, creating a custom role at the subscription level still grants too much access. While offering customization, it's not the most restrictive approach. * **E (Network Contributor role for RG1):** This role is unnecessary for creating VMs in RG1; the Virtual Machine Contributor role (F) already covers that. Furthermore, it grants unnecessary network-related permissions within RG1. **Note:** There is some disagreement in the discussion regarding the best approach for RG2. Some suggest a custom role is necessary for the principle of least privilege, while others initially proposed the Network Contributor role. 
The consensus reflected in the accepted answer is that a custom role provides the necessary level of granularity to meet the principle of least privilege requirement.
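For reference, the custom role for RG2 might look like the following JSON-style role definition, expressed here as a Python dict. The role name is illustrative and the subscription ID is a placeholder; the `subnets/join/action` operation is the permission a VM's network interface needs to attach to an existing subnet:

```python
# Sketch of a least-privilege custom role for RG2: read networks and join
# subnets, nothing more.  Field names mirror the Azure RBAC role
# definition format; values marked below are placeholders.

custom_role_rg2 = {
    "Name": "Virtual Network Joiner (custom)",   # illustrative name
    "IsCustom": True,
    "Description": "Join VMs to existing virtual networks only.",
    "Actions": [
        "Microsoft.Network/virtualNetworks/read",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/<subscription-id>/resourceGroups/RG2"  # placeholder
    ],
}

# Least privilege: no create/write/delete on networks -- only read + join.
assert all("write" not in a and "delete" not in a
           for a in custom_role_rg2["Actions"])
```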
360
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63266-exam-az-500-topic-6-question-1-discussion/) You need to meet the identity and access requirements for Group1. What should you do? A. Add a membership rule to Group1. B. Delete Group1. Create a new group named Group1 that has a group type of Microsoft 365. Add users and devices to the group. C. Modify the membership rule of Group1. D. Change the membership type of Group1 to Assigned. Create two groups that have dynamic memberships. Add the new groups to Group1. **
** D. Change the membership type of Group1 to Assigned. Create two groups that have dynamic memberships. Add the new groups to Group1. **Explanation:** The discussion highlights that a single dynamic group cannot contain both users and devices. Option D addresses this by creating two separate dynamic groups (one for users, one for devices) and then adding them as members of Group1, which is changed to an "assigned" membership type. This allows for managing users and devices separately within the group structure. Options A and C are incorrect because they don't account for the constraint of having both users and devices in the group, and you can't create a single dynamic group with both. Option B is incorrect because Microsoft 365 groups cannot include devices. **Note:** There is a disagreement in the discussion regarding whether nested groups (option D) are suitable for providing access to applications. While the majority opinion supports option D, one user argues that nested groups do not allow access to cascade to nested members when assigning to applications. Therefore, option D might be problematic depending on the specific application requirements, beyond simply meeting membership needs.
361
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63290-exam-az-500-topic-4-question-64-discussion/) You have an Azure subscription named Subscription1 that contains a resource group named RG1 and the users shown in the following table. | User Name | User Type | |---|---| | User1 | External | | User2 | External | You perform the following tasks: * Assign User1 the Network Contributor role for Subscription1. * Assign User2 the Contributor role for RG1. To Subscription1 and RG1, you assign the following policy definition: *External accounts with write permissions should be removed from your subscription*. What is the Compliance State of the policy assignments? A. The Compliance State of both policy assignments is Non-compliant. B. The Compliance State of the policy assignment to Subscription1 is Compliant, and the Compliance State of the policy assignment to RG1 is Non-compliant. C. The Compliance State of the policy assignment to Subscription1 is Non-compliant, and the Compliance State of the policy assignment to RG1 is Compliant. D. The Compliance State of both policy assignments is Compliant. **
** A. The Compliance State of both policy assignments is Non-compliant. **Explanation:** Both User1 and User2 are external accounts. The Network Contributor role (assigned to User1 at the subscription level) and the Contributor role (assigned to User2 at the resource group level) both grant write permissions. The policy explicitly states that external accounts with write permissions should be removed. Since both users are external and have write permissions, both policy assignments are non-compliant. **Why other options are incorrect:** * **B, C, and D:** These options incorrectly assume that either one or both assignments are compliant. As explained above, both users violate the policy. **Note:** The discussion shows unanimous agreement on answer A.
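The policy evaluation can be sketched as follows. The role list and the "write" classification are simplified assumptions for illustration; a real Azure Policy definition evaluates role assignments differently:

```python
# Simplified model: an assignment scope is non-compliant if any external
# account holds a role that grants write permissions there.

WRITE_ROLES = {"Owner", "Contributor", "Network Contributor"}

assignments = [
    {"user": "User1", "external": True, "role": "Network Contributor",
     "scope": "Subscription1"},
    {"user": "User2", "external": True, "role": "Contributor",
     "scope": "RG1"},
]

def compliance_state(scope):
    """'Non-compliant' if any external account has write access in scope."""
    for a in assignments:
        if a["scope"] == scope and a["external"] and a["role"] in WRITE_ROLES:
            return "Non-compliant"
    return "Compliant"

print(compliance_state("Subscription1"))  # Non-compliant (User1)
print(compliance_state("RG1"))            # Non-compliant (User2)
```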
362
[View Question](https://www.examtopics.com/discussions/databricks/view/63291-exam-az-500-topic-4-question-65-discussion/) HOTSPOT - You have an Azure Sentinel workspace that has the following data connectors: ✑ Azure Active Directory Identity Protection ✑ Common Event Format (CEF) Azure Firewall - ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0041300006.png) You need to ensure that data is being ingested from each connector. From the Logs query window, which table should you query for each connector? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0041400001.png)
The correct tables to query are: * **Azure Active Directory Identity Protection:** `SecurityAlert` * **Common Event Format (CEF):** `CommonSecurityLog` * **Azure Firewall:** `AzureDiagnostics` This is based on the provided documentation links and the highly upvoted responses in the discussion. The `SecurityAlert` table stores data from Azure Active Directory Identity Protection, `CommonSecurityLog` receives CEF data, and `AzureDiagnostics` contains data from Azure Firewall. Why other options are incorrect: The question doesn't provide alternative options, but the discussion shows no disagreement on these mappings in the highly upvoted answers. Any other table would not contain the logs from these specific connectors. Note: The discussion mentions that remembering these mappings might be difficult due to the large number of connectors. Some users suggest referencing official Microsoft documentation during the exam if it's an open-book format.
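As a memorization aid, the mapping can be kept as a simple lookup. The table names are the real Log Analytics tables named above; the dict and the sample `take` query strings are just mnemonics:

```python
# Connector -> Log Analytics table, as discussed above.

CONNECTOR_TABLES = {
    "Azure Active Directory Identity Protection": "SecurityAlert",
    "Common Event Format (CEF)": "CommonSecurityLog",
    "Azure Firewall": "AzureDiagnostics",
}

# A quick KQL ingestion check per connector would then look like
# "SecurityAlert | take 10", "CommonSecurityLog | take 10", etc.
for connector, table in CONNECTOR_TABLES.items():
    print(f"{connector}: {table} | take 10")
```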
363
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63294-exam-az-500-topic-15-question-1-discussion/) HOTSPOT - You assign User8 the Owner role for RG4, RG5, and RG6. In which resource groups can User8 create virtual networks and NSGs by using the Azure portal? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **Box 1:** In which resource groups can User8 create virtual networks? **Box 2:** In which resource groups can User8 create NSGs? **
** **Box 1: RG6 only.** User8 can only create virtual networks in RG6. The provided discussion and suggested answers indicate that RG4 only allows the creation of NSGs, and RG5 prevents the creation of both virtual networks and subnets. While some users argued that creating a VNET without a subnet is possible in RG5, the majority consensus and the suggested answer support RG6 as the only resource group allowing VNET creation. **Box 2: RG4 and RG6 only.** User8 can create NSGs in both RG4 and RG6. RG4 explicitly allows only NSG creation. RG5 prevents NSG creation. RG6 does not restrict NSG creation. **Why other options are incorrect:** The discussion shows disagreement on the specifics, particularly regarding whether a virtual network can be created in RG5 without a subnet. However, the prevailing and suggested answer leans towards RG6 as the only option for creating virtual networks. The consensus on NSGs is more straightforward, with RG4 and RG6 being universally accepted as the only resource groups allowing their creation.
364
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63304-exam-az-500-topic-5-question-38-discussion/) You have an Azure subscription that contains the following resources: ✑ An Azure key vault ✑ An Azure SQL database named Database1 Two Azure App Service web apps named AppSrv1 and AppSrv2 that are configured to use system-assigned managed identities and access Database1 You need to implement an encryption solution for Database1 that meets the following requirements: ✑ The data in a column named Discount in Database1 must be encrypted so that only AppSrv1 can decrypt the data. ✑ AppSrv1 and AppSrv2 must be authorized by using managed identities to obtain cryptographic keys. How should you configure the encryption settings for Database1? Select the appropriate options. NOTE: Each correct selection is worth one point. (Image depicting a Hotspot question with checkboxes was present in original context, but cannot be reproduced here). **
** The solution is to use **Always Encrypted with Azure Key Vault**. This allows you to encrypt sensitive data (like the `Discount` column) at rest and in use, and manage the encryption keys securely within Azure Key Vault. AppSrv1 and AppSrv2, using their system-assigned managed identities, would then be granted access policies within the Key Vault to obtain the necessary keys for decryption. Only granting AppSrv1 access to the specific key that encrypts the `Discount` column ensures only AppSrv1 can decrypt it. This directly addresses both requirements. The discussion strongly supports this solution, with multiple users agreeing on the use of Always Encrypted and the creation of Access Policies within the Azure Key Vault to control access for the managed identities. **Why other options are incorrect:** The question doesn't provide other options, but implicitly, any solution that doesn't involve Always Encrypted with appropriate Key Vault access policies would be incorrect because it wouldn't meet the requirement of encrypting the data while allowing only AppSrv1 to decrypt it. Options such as using simpler encryption methods or granting overly permissive access would not fulfil the security and access control requirements.
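The access-control outcome can be modeled in a toy sketch: the key protecting the `Discount` column is usable only by AppSrv1's identity. Key names and the access map below are invented for illustration — in Azure this is expressed through Key Vault access policies, not application code:

```python
# Toy model: which managed identities may unwrap which Key Vault keys.
# Only AppSrv1 can unwrap the key protecting the Discount column.

key_access = {
    "cmk-discount": {"AppSrv1"},             # illustrative key name
    "cmk-shared":   {"AppSrv1", "AppSrv2"},
}

def can_decrypt(identity, key):
    """True if the identity is granted access to the given key."""
    return identity in key_access.get(key, set())

print(can_decrypt("AppSrv1", "cmk-discount"))  # True
print(can_decrypt("AppSrv2", "cmk-discount"))  # False
print(can_decrypt("AppSrv2", "cmk-shared"))    # True
```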
365
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63432-exam-az-500-topic-2-question-59-discussion/) You have an Azure subscription that contains the Azure Active Directory (Azure AD) resources shown in the following table. (Image shows table with User1, User2, Group1, Group2, Managed1, Managed2, EnterpriseApp1, EnterpriseApp2) You create the groups shown in the following table. (Image shows table with Group5 (Security group) and Group6 (Security Group)) Which resources can you add to Group5 and Group6? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. (Image shows a selectable answer area) **
** Group5: User1, Group1, Managed1, EnterpriseApp1, EnterpriseApp2 Group6: User1, User2, Group1, Group2, Managed1, Managed2 **Explanation:** Based on the provided text and the highly upvoted comments, security groups (like Group5 and Group6) can contain users, groups, and managed identities (service principals). Enterprise applications can also be added to security groups. Therefore, all listed resources can be added to Group5. Group6 can contain the same kinds of resources as Group5, as they are both security groups. The discussion highlights that the correct answers were tested and verified, and provides a Microsoft Learn link supporting the ability to add users, devices, other groups, and service principals to security groups. There's some ambiguity due to the lack of complete information in the provided images. The exact options in the answer area are missing, so the provided answer is an aggregate of all possible choices. **Why other options might be incorrect (if applicable):** This depends on the exact options present in the originally shown multiple choice answer area (Hot Area image), but any option not including users, other groups, managed identities, and enterprise applications would be considered incorrect according to the explanation. The full image of the multiple choice options is needed to provide a thorough explanation of what is incorrect.
366
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63506-exam-az-500-topic-4-question-66-discussion/) You have 10 on-premises servers that run Windows Server 2019. You plan to implement Azure Security Center vulnerability scanning for the servers. What should you install on the servers first? A. the Azure Arc enabled servers Connected Machine agent B. the Microsoft Defender for Endpoint agent C. the Security Events data connector in Azure Sentinel D. the Microsoft Endpoint Configuration Manager client **
** A. the Azure Arc enabled servers Connected Machine agent **Explanation:** To enable Azure Security Center vulnerability scanning on on-premises Windows Server 2019 machines, you must first install the Azure Arc enabled servers Connected Machine agent. Azure Arc extends Azure management to on-premises environments, allowing you to manage these servers as if they were in Azure. This is necessary to utilize Azure Security Center's features, including vulnerability scanning. The other options are not the primary prerequisite for enabling this specific Azure Security Center functionality on on-premises servers. **Why other options are incorrect:** * **B. the Microsoft Defender for Endpoint agent:** While Defender for Endpoint offers security features, it's not the primary method for integrating on-premises servers with Azure Security Center for vulnerability scanning. Windows Server 2019 already includes a built-in version of Defender, according to the discussion. * **C. the Security Events data connector in Azure Sentinel:** Azure Sentinel is a security information and event management (SIEM) solution. While it can receive security data, it's not the initial step for enabling Azure Security Center's vulnerability scanning on on-premises servers. * **D. the Microsoft Endpoint Configuration Manager client:** This client is for managing devices within an organization, but doesn't directly enable Azure Security Center vulnerability scanning for on-premises servers. **Note:** The discussion indicates some disagreement on whether Defender for Endpoint is necessary. However, the consensus and the most accurate answer based on the context of enabling Azure Security Center vulnerability scanning is that the Azure Arc agent is the crucial first step.
367
[View Question](https://www.examtopics.com/discussions/databricks/view/63515-exam-az-500-topic-1-question-38-discussion/) Your company's Azure subscription is linked to their Azure Active Directory (Azure AD) tenant. After an internally developed application is registered in Azure AD, you are tasked with making sure that the application has the ability to access Azure Key Vault secrets on the users' behalf. Solution: You configure a delegated permission with admin consent. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal. While a delegated permission is necessary for an application to access Key Vault secrets on a user's behalf (a process often referred to as "impersonation"), simply configuring the permission with *admin consent* is insufficient. The correct approach requires granting the application the `user_impersonation` permission, which allows the application to access Key Vault on behalf of the *signed-in user*. Admin consent grants permission for *all* users, while access to Key Vault secrets should be controlled on a per-user basis through access policies which require both the application's Application ID and the user's Object ID. The discussion highlights that the solution requires both the Application ID and Object ID to be specified in the Key Vault access policy for each user. Admin consent only grants the application permission to act on behalf of users, it does not automatically configure the necessary access policies per user. Why other options are incorrect: A. Yes - This is incorrect because, as explained above, granting admin consent alone does not automatically configure the necessary access policies within Key Vault to allow the application to access secrets on behalf of individual users. The process necessitates explicitly adding both the Application ID and the Object ID of the user to the Key Vault access policy.
368
[View Question](https://www.examtopics.com/discussions/databricks/view/63567-exam-az-500-topic-1-question-29-discussion/) Your company has an Azure subscription linked to their Azure Active Directory (Azure AD) tenant. As a Global administrator for the tenant, part of your responsibilities involves managing Azure Security Center settings. You are currently preparing to create a custom sensitivity label. Solution: You start by integrating Security Center and Microsoft Cloud App Security. Does the solution meet the goal? A. Yes B. No
B. No. Integrating Azure Security Center and Microsoft Defender for Cloud Apps (formerly Microsoft Cloud App Security) is not directly related to creating custom sensitivity labels. Creating custom sensitivity labels requires creating a custom information type first to define the sensitive data the label will protect. The provided solution focuses on integrating security tools, which is a separate process. There is a consensus amongst the discussion participants that option B is correct. Why other options are incorrect: A. Yes - This is incorrect. The integration of Security Center and Microsoft Defender for Cloud Apps does not directly facilitate the creation of custom sensitivity labels. The process of creating sensitivity labels involves defining the specific data types needing protection. The suggested answer and the discussion's user insights confirm that this is not the first step in creating a custom sensitivity label.
369
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63576-exam-az-500-topic-4-question-62-discussion/) You have an Azure subscription named Subscription1 that contains a resource group named RG1 and a user named User1. User1 is assigned the Owner role for RG1. You create an Azure Blueprints definition named Blueprint1 that includes a resource group named RG2 as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0041100001.jpg) You assign Blueprint1 to Subscription1 by using the following settings: * Lock assignment: Read Only * Managed Identity: System assigned For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0041100004.png) **
** No, No, No The discussion strongly suggests that the correct answer is No, No, No. The core reason is that Azure Blueprints' read-only locks, applied through Blueprint1, only affect *newly created* resources within the blueprint. They do *not* retroactively apply to pre-existing resources. Therefore: 1. **Blueprint doesn't work on existing resources:** This is true. The blueprint creates RG2; it doesn't modify RG1. The read-only lock applies only to RG2 *after* it's created by the blueprint. 2. **RG2 is read-only and tags cannot be modified:** This is true in that RG2 is created with a read-only lock. However, the question is ambiguous about whether User1 can modify tags on RG2 *after* the blueprint is applied. The implication is that they cannot. 3. **The newly created RG2 is read-only and nothing can be changed before you changed/deleted blueprint assignment:** This is largely true. The read-only lock prevents modifications until the blueprint assignment is altered or removed. There is some disagreement in the discussion on the nuance of statement 2 (whether a user with appropriate permissions *could* modify the tags despite the read-only lock at a technical level), but the general consensus is the answer is No, No, No given the intent and typical implementation of Azure Blueprints read-only locks. The provided reference document also corroborates this interpretation. **Why other options are incorrect:** The provided discussion does not present other options explicitly; the question is formatted as a True/False (Yes/No) selection for each statement. However, any option suggesting a "Yes" for any of the three statements would be incorrect based on the prevailing understanding of Azure Blueprints' behavior and the discussion's consensus.
370
[View Question](https://www.examtopics.com/discussions/databricks/view/63577-exam-az-500-topic-4-question-63-discussion/) You have an Azure Sentinel deployment. You need to create a scheduled query rule named Rule1. What should you use to define the query rule logic for Rule1?

A. a Transact-SQL statement
B. a JSON definition
C. GraphQL
D. a Kusto query
D. a Kusto query Azure Sentinel uses the Kusto Query Language (KQL) for its query logic. Scheduled query rules in Azure Sentinel, therefore, require a Kusto query to define their logic. The provided link from the discussion, https://learn.microsoft.com/en-us/azure/sentinel/detect-threats-custom#define-the-rule-query-logic-and-configure-settings, confirms this. Why other options are incorrect: * **A. a Transact-SQL statement:** Transact-SQL (T-SQL) is used for querying SQL databases, not for Azure Sentinel's data analytics. * **B. a JSON definition:** JSON is a data format, not a query language. While it might be used to configure *aspects* of the rule, the core query logic itself is not defined in JSON. * **C. GraphQL:** GraphQL is a query language for APIs, not for the data analysis performed within Azure Sentinel. Note: The discussion shows overwhelming agreement on the correct answer.
371
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63579-exam-az-500-topic-5-question-45-discussion/) You have an Azure subscription that contains an Azure key vault and an Azure Storage account. The key vault contains customer-managed keys. The storage account is configured to use the customer-managed keys stored in the key vault. You plan to store data in Azure by using the following services:

✑ Azure Files
✑ Azure Blob storage
✑ Azure Table storage
✑ Azure Queue storage

Which two services support data encryption by using the keys stored in the key vault? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Table storage
B. Azure Files
C. Blob storage
D. Queue storage

**
** B and C (Azure Files and Azure Blob storage)

**Explanation:** Historically, customer-managed keys (CMK) in a storage account applied only to Blob and File data, which is the basis for the suggested answer. The discussion notes this is now outdated: when creating a storage account, you can choose which services (Blob, Files, Table, Queue) use customer-managed keys. That choice must be made at storage account creation time, not afterward.

**Why other options are incorrect:**

* **A. Table storage** and **D. Queue storage:** These services use customer-managed keys only if that support was opted into when the storage account was created; by default they are encrypted with platform-managed keys. The exam answer reflects this default behavior, under which only Blob storage and Azure Files are supported.

**Note on Disagreement:** The discussion shows a clear evolution of understanding regarding CMK support for Azure storage services. Early responses indicated limited support, but later comments clarified that all four services can be configured to use CMKs during storage account creation. The answer reflects the most up-to-date understanding from the discussion.
372
[View Question](https://www.examtopics.com/discussions/databricks/view/63705-exam-az-500-topic-3-question-56-discussion/) You have multiple development teams that will create apps in Azure. You plan to create a standard development environment that will be deployed for each team. You need to recommend a solution that will enforce resource locks across the development environments and ensure that the locks are applied in a consistent manner. What should you include in the recommendation?

A. an Azure policy
B. an Azure Resource Manager template
C. a management group
D. an Azure blueprint
D. an Azure blueprint Azure Blueprints are designed to deploy Azure resources and configurations consistently and repeatably, including resource locks (like CanNotDelete and ReadOnly), role assignments, policy assignments, and ARM templates. This directly addresses the requirement of enforcing resource locks consistently across multiple development environments. Why other options are incorrect: * **A. an Azure policy:** While Azure policies can enforce governance rules, they don't directly manage resource locks in the way a blueprint does. Policies focus on compliance and configuration, not the direct application of locks. * **B. an Azure Resource Manager template:** ARM templates define infrastructure as code, but they require separate mechanisms (such as Azure policies or manual steps) to enforce resource locks. They don't inherently provide the consistent, automated locking capability of a blueprint. * **C. a management group:** Management groups provide a hierarchical structure for managing Azure resources, allowing for governance at scale. However, they don't directly implement resource locks; they are a container for managing policies and other governance mechanisms, including blueprints. Note: The discussion shows some disagreement on the best answer, with some users initially suggesting Azure policies or ARM templates. However, the consensus and provided explanation support Azure Blueprints as the most suitable solution for consistently applying resource locks across multiple environments.
373
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63740-exam-az-500-topic-1-question-43-discussion/) Your company has an Azure SQL database that has Always Encrypted enabled. You are required to make the relevant information available to application developers to allow them to access data in the database. Which two of the following options should be made available? Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0003200001.jpg) **
** The correct answer is **Column Encryption Key** and **Column Master Key**. Always Encrypted uses two keys: * **Column Encryption Key (CEK):** This key directly encrypts the data within the database columns. Application developers need this key (or access to it) to decrypt the data they are accessing. * **Column Master Key (CMK):** This key protects the CEK. It's a higher-level key that encrypts the CEK, thus adding an extra layer of security. Application developers also need this (indirectly or through a system that uses it) to access the CEK and then the data. Without both keys (or access to them), application developers cannot access the encrypted data. The CMK is crucial because it protects the CEK; without it, the CEK would be vulnerable. The discussion shows unanimous agreement on this answer.
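The relationship between the two keys is a standard envelope-encryption pattern: the CMK wraps (encrypts) the CEK, and the CEK encrypts the column data. The toy sketch below illustrates only that key hierarchy; it uses XOR as a stand-in cipher and random byte strings as keys, not the authenticated encryption and RSA-wrapped keys that Always Encrypted actually uses.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cmk = os.urandom(32)  # Column Master Key: held outside the database, e.g. in a key store
cek = os.urandom(32)  # Column Encryption Key: encrypts the column data itself

# The database stores only the CMK-wrapped CEK, never the plaintext CEK.
wrapped_cek = xor_bytes(cek, cmk)

plaintext = b"123-45-6789"
ciphertext = xor_bytes(plaintext, cek)  # what the encrypted column actually holds

# A client with access to the CMK unwraps the CEK, then decrypts the data.
recovered_cek = xor_bytes(wrapped_cek, cmk)
recovered = xor_bytes(ciphertext, recovered_cek)
print(recovered.decode())  # 123-45-6789
```

This is why developers need access to both keys: without the CMK the wrapped CEK cannot be unwrapped, and without the CEK the column data cannot be decrypted.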
374
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63813-exam-az-500-topic-3-question-2-discussion/) You have an Azure Container Registry named ContReg1 that contains a container image named image1. You enable content trust for ContReg1. After content trust is enabled, you push two images to ContReg1 as shown in the following table.

| Image Name | Content Trust Enabled |
|---|---|
| image2 | Yes |
| image3 | No |

Which images are trusted images?

A. image1 and image2 only
B. image2 only
C. image1, image2, and image3

**
** B. image2 only **Explanation:** Content trust, once enabled, only applies to images pushed *after* its activation. `image1` existed before content trust was enabled for ContReg1, therefore it's not considered a trusted image. `image3` explicitly has content trust disabled. Only `image2` was pushed after enabling content trust and with content trust enabled; therefore, only `image2` is a trusted image. **Why other options are incorrect:** * **A. image1 and image2 only:** This is incorrect because image1 was present *before* content trust was enabled. Content trust does not retroactively apply to pre-existing images. * **C. image1, image2, and image3:** This is incorrect because image1 is not a trusted image (pre-existing) and image3 has content trust disabled. **Note:** The discussion section shows a consensus that option B is correct. There is no significant disagreement on the answer.
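The selection rule can be stated mechanically: an image is trusted only if it was pushed after content trust was enabled on the registry and was signed at push time. A minimal sketch of that rule (the `PushedImage` type and its flags are illustrative, not part of any Azure SDK):

```python
from dataclasses import dataclass

@dataclass
class PushedImage:
    name: str
    pushed_after_trust_enabled: bool  # pre-existing images are never retroactively trusted
    signed: bool                      # pushed with content trust enabled

def trusted_images(images: list) -> list:
    # Trusted = pushed after trust was enabled on the registry AND signed at push time.
    return [i.name for i in images if i.pushed_after_trust_enabled and i.signed]

images = [
    PushedImage("image1", False, False),  # existed before content trust was enabled
    PushedImage("image2", True, True),
    PushedImage("image3", True, False),   # pushed with content trust disabled
]
print(trusted_images(images))  # ['image2']
```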
375
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63883-exam-az-500-topic-5-question-11-discussion/) You have an Azure subscription that contains an Azure key vault named Vault1. On January 1, 2019, Vault1 stores the following secrets. All dates are in mm/dd/yy format. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0047300001.png) When can each secret be used by an application? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** * **Password1:** Never. The image shows Password1 is disabled. A disabled secret cannot be accessed. * **Password2:** Only between March 1, 2019, and May 1, 2019. The image shows Password2 has a `NotBefore` date of 03/01/19 and an `Expires` date of 05/01/19. Secrets are only accessible within the timeframe defined by these attributes. **Explanation:** The question tests understanding of Azure Key Vault secret attributes, specifically "Enabled," "NotBefore," and "Expires." The provided images clearly show the status and attributes of each secret. The correct answer directly reflects the interpretation of these attributes. **Why other options are incorrect:** There is disagreement in the discussion forum regarding Password2. Some users believe it's always accessible, ignoring the `NotBefore` and `Expires` attributes. However, the provided documentation and the suggested answer support the interpretation that the `NotBefore` and `Expires` attributes *do* control accessibility. The user `dmlists`'s comment supporting the suggested answer, noting that a disabled secret returns an error while an expired secret returns a value, further validates the answer.
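The accessibility rule being tested reduces to a simple time-window check. A minimal sketch, assuming the usual semantics (a disabled secret is never retrievable; otherwise the current time must be at or after `NotBefore` and before `Expires`):

```python
from datetime import datetime
from typing import Optional

def secret_is_usable(enabled: bool, not_before: Optional[datetime],
                     expires: Optional[datetime], now: datetime) -> bool:
    # A disabled secret can never be retrieved by an application.
    if not enabled:
        return False
    # NotBefore and Expires, when set, bound the usable window.
    if not_before is not None and now < not_before:
        return False
    if expires is not None and now >= expires:
        return False
    return True

now = datetime(2019, 4, 1)
print(secret_is_usable(False, None, None, now))                                  # Password1: False
print(secret_is_usable(True, datetime(2019, 3, 1), datetime(2019, 5, 1), now))   # Password2: True
```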
376
**** [View Question](https://www.examtopics.com/discussions/databricks/view/64250-exam-az-500-topic-4-question-30-discussion/) You plan to use Azure Sentinel to create an analytic rule that will detect suspicious threats and automate responses. Which components are required for the rule? Select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. (Image of a Hotspot question is missing but based on the discussion the options were implicitly present and implied to be "A Kusto Query Language (KQL) query" and "An Azure Sentinel Playbook".) **
** The required components for an Azure Sentinel analytic rule to detect suspicious threats and automate responses are a Kusto Query Language (KQL) query and an Azure Sentinel Playbook. * **KQL Query:** This is used to define the logic that detects suspicious activities. The query searches through the data ingested into Azure Sentinel to identify events matching the defined threat patterns. * **Azure Sentinel Playbook:** This automates the response to the detected threat. Once the KQL query identifies a threat, the playbook executes pre-defined actions, such as sending alerts, blocking malicious IP addresses, or initiating incident investigations. The discussion overwhelmingly confirms these two components as the correct answer. **Why other options are incorrect:** The question doesn't provide alternative options, but the discussion implies that only a KQL query and a playbook are necessary. Any other components would likely be superfluous for creating the basic rule. There's no indication from the provided text of other required components.
377
[View Question](https://www.examtopics.com/discussions/databricks/view/65818-exam-az-500-topic-5-question-37-discussion/) DRAG DROP - You have an Azure subscription. You plan to create a storage account. You need to use customer-managed keys to encrypt the tables in the storage account. From Azure Cloud Shell, which three cmdlets should you run in sequence? To answer, move the appropriate cmdlets from the list of cmdlets to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0051300001.png)
The suggested answer image (https://www.examtopics.com/assets/media/exam-media/04258/0051300002.png) shows the sequence `New-AzStorageAccount`, `Add-AzKeyVaultKey`, and `Set-AzStorageAccountKey`. However, the discussion highlights significant disagreement about the accuracy of this question and answer. Users point out that neither `Set-AzStorageAccountKey` nor `New-AzStorageAccountKey` is appropriate here: those cmdlets deal with *storage account access keys*, while configuring customer-managed encryption keys requires creating a key in Azure Key Vault with `Add-AzKeyVaultKey` and then associating it with the storage account (typically via `Set-AzStorageAccount` with its Key Vault encryption parameters). The consensus from the discussion is that the question and its suggested answer are outdated or flawed and do not accurately reflect the process of setting up customer-managed keys for Azure Storage encryption.

WHY OTHER OPTIONS ARE INCORRECT (based on discussion):

* **`New-AzStorageAccountKey`**: This cmdlet regenerates *storage account access keys*, not the customer-managed keys used for encryption. Using it would not achieve the goal of encrypting tables with customer-managed keys.
* The order and specific choice of cmdlets are debated and likely inaccurate according to user comments. The exact steps might involve additional cmdlets or a different sequence than what is presented.

**Note:** This question and its suggested answer are highly contested within the discussion section. The answer above reflects the *suggested* solution, but the comments strongly suggest inaccuracies in the question itself. Consult the current Microsoft documentation on customer-managed keys for Azure Storage for the authoritative procedure.
378
[View Question](https://www.examtopics.com/discussions/databricks/view/6699-exam-az-500-topic-3-question-38-discussion/) You have the Azure virtual machines shown in the following table.

| VM Name | Resource Group | Region |
|---|---|---|
| VM1 | RG1 | East US |
| VM2 | RG2 | West US |
| VM3 | RG3 | North Europe |
| VM4 | RG1 | East US |

You create an Azure Log Analytics workspace named Analytics1 in RG1 in the East US region. Which virtual machines can be enrolled in Analytics1?

A. VM1 only
B. VM1, VM2, and VM3 only
C. VM1, VM2, VM3, and VM4
D. VM1 and VM4 only
C. VM1, VM2, VM3, and VM4 The discussion reveals conflicting opinions on whether Log Analytics workspaces are region-specific for VM enrollment. While some argue that VMs must be in the same region as the workspace, others correctly state that VMs from any region can be enrolled. The Microsoft documentation cited in the discussion supports the latter; VMs can be deployed from any region and are not limited to the regions supported by the Log Analytics workspace. Therefore, all four VMs (VM1, VM2, VM3, and VM4) can be enrolled in the Analytics1 workspace. WHY OTHER OPTIONS ARE INCORRECT: * **A. VM1 only:** This is incorrect because it excludes VMs in other regions that can still be monitored by the Log Analytics workspace. * **B. VM1, VM2, and VM3 only:** This is incorrect for the same reason as option A; it excludes VM4, which is in the same region as the workspace and therefore enrollable. * **D. VM1 and VM4 only:** This incorrectly limits enrollment to only VMs within the same region as the workspace. Note: The discussion shows disagreement on the correct answer. However, the weight of evidence and the linked Microsoft documentation point to option C being the correct answer, despite conflicting opinions within the discussion.
379
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67441-exam-az-500-topic-1-question-33-discussion/) After creating a new Azure subscription, you are tasked with making sure that custom alert rules can be created in Azure Security Center. You have created an Azure Storage account. Which of the following is the action you should take?

A. You should make sure that Azure Active Directory (Azure AD) Identity Protection is removed.
B. You should create a DLP policy.
C. You should create an Azure Log Analytics workspace.
D. You should make sure that Security Center has the necessary tier configured.

**
** C. You should create an Azure Log Analytics workspace. Azure Security Center relies on Azure Log Analytics to store and process security data, which is then used to generate alerts, including custom ones. Creating a Log Analytics workspace provides the necessary infrastructure for Security Center to function correctly and allow the creation of custom alert rules. **Why other options are incorrect:** * **A. You should make sure that Azure Active Directory (Azure AD) Identity Protection is removed:** Azure AD Identity Protection is unrelated to the functionality of creating custom alert rules within Azure Security Center. Removing it won't enable this feature. * **B. You should create a DLP policy:** Data Loss Prevention (DLP) policies focus on preventing data breaches and exfiltration. While important for security, they are separate from the mechanism for creating custom alert rules in Security Center. * **D. You should make sure that Security Center has the necessary tier configured:** While a proper Security Center tier is required for various features, the question focuses specifically on creating *custom* alert rules. Even with a paid tier, the Log Analytics workspace is still a prerequisite. The necessary tier would likely enable more advanced alert capabilities, but the fundamental ability to create custom alerts still relies on the Log Analytics workspace. **Note:** The provided discussion indicates that this question appeared on an exam on January 8th, 2023. There is no evidence of disagreement on the correct answer within the provided discussion.
380
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67452-exam-az-500-topic-6-question-2-discussion/) HOTSPOT - You need to ensure that the Azure AD application registration and consent configurations meet the identity and access requirements. What should you use in the Azure portal? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0003800001.png) *(Image shows a blank answer area for a multiple choice question)* **
** The correct answer, based on the provided information and the conflicting opinions in the discussion, is to navigate to **Azure Active Directory > Enterprise applications > Consent and permissions > User consent settings**. There is disagreement about the exact portal location: some comments cite the older path under "User settings", which at least one commenter says has since moved (reportedly to "App Launchers"). The most recent and most upvoted comments give the path above as the current location for managing user consent settings related to Azure AD application registration.

**WHY OTHER OPTIONS ARE INCORRECT (or potentially outdated):**

* **Azure AD > User settings:** Commenters state this is an outdated path and that the functionality has moved.
* Other path variations mentioned in the comments also appear outdated or incorrect.

**NOTE:** Because the discussion reveals conflicting information about where these settings live in the Azure portal, the answer above reflects the most up-to-date information available in the discussion; consult the official Microsoft documentation for the definitive current location.
381
[View Question](https://www.examtopics.com/discussions/databricks/view/67472-exam-az-500-topic-11-question-4-discussion/) You plan to implement JIT VM access. Which virtual machines will be supported?

A. VM2, VM3, and VM4 only
B. VM1, VM2, VM3, and VM4
C. VM1 and VM3 only
D. VM1 only
A. VM2, VM3, and VM4 only Explanation: The overwhelming consensus in the discussion is that only VM2, VM3, and VM4 are supported because they have Network Security Groups (NSGs) configured. JIT VM access requires an NSG; the lack of an NSG for VM1 is repeatedly cited as the reason it's excluded. Why other options are incorrect: * **B. VM1, VM2, VM3, and VM4:** This is incorrect because VM1 does not have an NSG, a prerequisite for JIT VM access, according to the discussion. * **C. VM1 and VM3 only:** This is incorrect because it incorrectly includes VM1 and excludes VM2 and VM4. The discussion indicates VM2 and VM4 also have NSGs. * **D. VM1 only:** This is incorrect because the discussion clearly states that VM1 lacks the required NSG for JIT VM access. Note: While the discussion strongly supports option A, there's a lack of explicit details regarding the VM configurations and network setup in the provided text. The answer is based solely on the user comments which reference NSGs as a necessary condition.
382
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67520-exam-az-500-topic-2-question-25-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table.

|User|Location|Group1|Group2|
|---|---|---|---|
|User1|Boston|Yes|Yes|
|User2|Seattle|No|Yes|

The tenant contains the named locations shown in the following table.

|Location|
|---|
|Boston|
|Seattle|

You create the conditional access policies for a cloud app named App1 as shown in the following table.

|Policy|Users and groups|Locations|Grant|
|---|---|---|---|
|Policy 1|Group 1|All|Block|
|Policy 2|Group 1|All|Allow|
|Policy 3|Group 2|Boston|Block|
|Policy 4|Group 2|All|Require MFA|

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

1. User1 will be granted access to App1.
2. User2 will be granted access to App1.

**
** 1. **Yes** (per the suggested answer). User1 is a member of both Group1 and Group2. The suggested answer reasons that Policy 2 (Allow access for Group 1) prevails over Policy 1 (Block access for Group 1), granting User1 access. This reasoning is contested: Conditional Access policies are not evaluated in order, a Block grant control normally takes precedence over an Allow, and Policy 3 (Block Group 2 in Boston) would also apply to User1, who is a member of Group2 and located in Boston. The answer above follows the suggested answer despite these objections.

2. **Yes**. User2 is a member of Group2 but not Group1, so Policies 1 and 2 do not apply. Policy 3 (Block access for Group 2 in Boston) does not apply because User2 is in Seattle. Policy 4 (Require MFA for Group 2) applies, granting User2 access to App1 with MFA.

**Why other options are incorrect (as implied by the provided discussion and suggested answers):** Incorrect answers would stem from misapplying the grant controls or the location-based conditions. The suggested answer (the image) supports the two "Yes" answers above. Note that there is some ambiguity and disagreement in the discussion regarding User2: while the final conclusion is "allowed", some comments initially suggested User2 might be blocked. The answer above prioritizes the suggested answer and the stated logic of policy application.
383
[View Question](https://www.examtopics.com/discussions/databricks/view/67597-exam-az-500-topic-2-question-30-discussion/) You need to create a new Azure Active Directory (Azure AD) directory named 12345678.onmicrosoft.com and a user named User1 in the new directory. To complete this task, sign in to the Azure portal. Provide step-by-step instructions on how to accomplish this. The images below show parts of the process. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010500001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010500002.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010600001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010700001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010700002.jpg) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010800001.png)
To create the Azure AD directory and user, follow these steps: **Step 1: Create the Azure AD Tenant** 1. Sign in to the Azure portal with an account that has an Azure subscription. 2. Click the "+" icon (Add) and search for "Azure Active Directory". 3. Select "Azure Active Directory" from the search results. 4. Click "Create". 5. Provide an "Organization name" and the "Initial domain name" as `12345678`. Click "Create". This will create your new directory. 6. Once the directory is created, select the information box to manage your new directory. **Step 2: Create the User** 1. Ensure you are on the Azure Active Directory flyout in the Azure portal. 2. Under "Manage", select "Users". 3. Select "All users", then click "+ New user". 4. Provide a "Name" (e.g., User1) and a "User name" (e.g., `user1@12345678.onmicrosoft.com`). You can optionally set a temporary password. Click "Create". This process creates both the Azure AD directory and the specified user within it. The provided images visually represent these steps. Note: The discussion thread shows some variation in the exact steps for creating a new tenant, with some users suggesting navigating through "Manage tenants" first. The answer above reflects the steps illustrated in the provided images, which appear to be the most straightforward and widely accepted method.
384
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67599-exam-az-500-topic-10-question-3-discussion/) HOTSPOT - You need to delegate the creation of RG2 and the management of permissions for RG1. Which users can perform each task? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0005200001.jpg) *(Image shows a table with columns "User","Role", and "Resource Group"; rows list Admin1 (User Access Administrator), Admin2 (Contributor), Admin3 (Contributor), Admin4 (Owner) and their respective resource group assignments)* **
** Box 1: Admin3 only
Box 2: Admin1 and Admin4 only

**Explanation:**

* **Box 1 (creating RG2):** Creating a new resource group requires Contributor (or higher) rights at the subscription scope, since RG2 does not yet exist. Admin3's Contributor assignment satisfies this. Admin2 also holds the Contributor role, but scoped to RG1 only, so Admin2 cannot create RG2. Admin1 (User Access Administrator) cannot create resources, and Admin4's Owner role is scoped to RG1.
* **Box 2 (managing permissions for RG1):** Admin4 has the Owner role for RG1, granting full control including permission management. Admin1 holds the User Access Administrator role, whose purpose is managing user access to Azure resources; assigned at a broader scope, it is inherited by RG1.

**Note on the disagreement:** The discussion disputes Admin1's inclusion. Some commenters argued the User Access Administrator role is not sufficient to manage permissions on an existing resource group, while others (the majority) held that it is, precisely because the role exists to manage access assignments and is inherited down the scope hierarchy. The answer above follows the majority view that Admin1 and Admin4 can manage RG1's permissions; the originally suggested answer, which excluded Admin1, reflects the minority interpretation.
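The scope-inheritance reasoning above can be modeled with a toy permission check. The assignments below are hypothetical, reconstructed from the image description, and the role-to-action mapping is a deliberate simplification of Azure RBAC:

```python
# Simplified role-to-action mapping (real Azure roles define many granular actions).
ROLE_ACTIONS = {
    "Owner": {"create_resource_group", "manage_resources", "manage_permissions"},
    "Contributor": {"create_resource_group", "manage_resources"},
    "User Access Administrator": {"manage_permissions"},
}

# (user, role, scope) assignments, reconstructed from the question's table.
assignments = [
    ("Admin1", "User Access Administrator", "subscription"),
    ("Admin2", "Contributor", "RG1"),
    ("Admin3", "Contributor", "subscription"),
    ("Admin4", "Owner", "RG1"),
]

def can(user: str, action: str, scope: str) -> bool:
    for u, role, s in assignments:
        if u != user or action not in ROLE_ACTIONS[role]:
            continue
        # Subscription-level assignments are inherited by every resource group.
        if s == "subscription" or s == scope:
            return True
    return False

admins = ("Admin1", "Admin2", "Admin3", "Admin4")
# Creating RG2 requires rights at the subscription level (RG2 doesn't exist yet).
print([u for u in admins if can(u, "create_resource_group", "subscription")])  # ['Admin3']
print([u for u in admins if can(u, "manage_permissions", "RG1")])              # ['Admin1', 'Admin4']
```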
385
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67600-exam-az-500-topic-11-question-3-discussion/) HOTSPOT - You implement the planned changes for ASG1 and ASG2. In which NSGs can you use ASG1, and the network interfaces of which virtual machines can you assign to ASG2? Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0019700001.jpg) *(Image shows a diagram with NSGs (Network Security Groups) NSG1 and NSG2, and VMs (Virtual Machines) VM1, VM2, and VM3. Connections are depicted, indicating which VMs are associated with which NSGs.)* **
** Based on the provided information and the discussion, the answer is nuanced and there is some disagreement among users. * **ASG1:** ASG1 can only be used with NSG2. This is because all network interfaces in an Application Security Group (ASG) must reside within the same virtual network as the first network interface added to the ASG. Since the image (though not fully shown) implies VM2 (and thus its network interface) is the first member of ASG1 and is associated with NSG2, any subsequent additions to ASG1 must also be on the same network (and thus under the same NSG). * **ASG2:** The network interfaces of any of the VMs (VM1, VM2, or VM3) could theoretically be assigned to ASG2. The critical constraint is that all VMs in the ASG must be in the same virtual network. Since ASG2 is initially empty, the first VM added to ASG2 dictates the virtual network for the entire ASG. **Why other options are incorrect (based on the discussion):** The discussion highlights some confusion and differing opinions, but the prevailing correct answer comes from understanding the Microsoft documentation regarding ASGs and virtual networks. Some users incorrectly suggest ASG1 could be used with any NSG, ignoring the core constraint of all network interfaces within an ASG being on the same virtual network. The understanding about ASG2's flexibility initially hinges on the fact that no VM is associated with it yet – once the first VM is added, subsequent additions will be restricted to the same virtual network. **Note:** The image is crucial for a definitive answer. The discussion does not perfectly clarify the VM/NSG relationships shown in the missing image; thus, this answer is based on a reasonable interpretation of the text. A clear image would remove any ambiguity.
386
[View Question](https://www.examtopics.com/discussions/databricks/view/67697-exam-az-500-topic-2-question-9-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006600002.png) You configure an access review named Review1 as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006700001.jpg) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0006800001.png)
Box 1: User3 only
Box 2: User3 will receive a confirmation request

**Explanation:**

* **Box 1:** The access review is configured with "Members (self)", meaning users review their own access; in this case only User3 is in scope. User2's access is not part of this review.
* **Box 2:** The provided images omit the "Advanced Settings", specifically whether email notifications are enabled. The discussion, however, strongly suggests that even with "auto-apply results to resource" disabled (as noted by siecz), User3 will still receive a confirmation request because email notifications are likely enabled (as indicated by jantoniocesargatica). The "Should reviewer not respond" setting only governs what happens if the reviewer fails to respond within the review period; it does not prevent the confirmation request from being sent. If the reviewer denies access, the access is removed.

**Why other options are incorrect:** The Box 1 alternatives are wrong because the review is a self-review scoped to User3 only. The Box 2 alternatives (No change, Remove access, Approve access, Take recommendations) describe outcomes applied after a response or by automatic decision; User3 receives the confirmation request first, regardless of those outcomes, given that email notifications are inferred to be enabled.

**Note:** There is significant uncertainty in the discussion because the "Advanced Settings" image is missing. The answer relies on the consensus built within the discussion despite the incomplete information in the original question.
387
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67702-exam-az-500-topic-2-question-21-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that contains the users shown in the following table. | User Name | User Principal Name (UPN) | |--------------|-----------------------------| | User1 | User1@contoso.com | | User2 | User2@contoso.com | | User3 | User3@contoso.com | Azure AD Privileged Identity Management (PIM) is used in contoso.com. In PIM, the Password Administrator role has the following settings: * Maximum activation duration (hours): 2 * Send email notifying admins of activation: Disable * Require incident/request ticket number during activation: Disable * Require Azure Multi-Factor Authentication for activation: Enable * Require approval to activate this role: Enable * Selected approver: Group1 You assign users the Password Administrator role as shown in the following table. | User Name | Role Assignment Type | |--------------|----------------------| | User1 | Active | | User2 | Eligible | | User3 | Eligible | User3 is a member of Group1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. User1 can use the Password Administrator role without performing any additional actions. 2. User2 can use the Password Administrator role without enabling Multi-Factor Authentication (MFA). 3. User3 can approve their own request to activate the Password Administrator role. **
** The correct answer is Yes, Yes, No. 1. **Yes:** User1 has an "Active" assignment. Active assignments grant immediate access to the role without requiring any further action from the user. 2. **Yes:** User2 has an "Eligible" assignment, so the role is not usable until it is activated, and activation requires MFA. The suggested answer interprets the statement as asking whether User2 can hold and request the role without MFA being enabled beforehand, which they can; MFA is enforced only at the moment of activation. The question wording is ambiguous on this point. 3. **No:** User3 is a member of the selected approver group (Group1), but PIM does not allow self-approval. User3 would need another member of Group1 to approve their activation request. **Why other options are incorrect:** There are no options beyond the three Yes/No statements. The explanation above justifies each answer based on the provided role settings and the behavior of Azure AD Privileged Identity Management (PIM). The discussion notes ambiguity around the second statement, specifically the difference between "requesting" and "using" the role.
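The activation and approval rules in this question can be modeled with a small sketch. The function names and data shapes below are illustrative only, not the real PIM API:

```python
# Illustrative model of this role's PIM settings (hypothetical helpers,
# not the Azure AD PIM API).

def activation_steps(assignment_type):
    """Active assignments are usable immediately; eligible assignments
    must be activated, which this role's settings gate behind MFA and
    an approval."""
    if assignment_type == "Active":
        return []                        # User1: no extra action needed
    return ["MFA", "approval"]           # Eligible: per the role settings

def can_approve(requester, approver, approver_group):
    """PIM never allows self-approval, even when the requester belongs
    to the approver group."""
    return approver in approver_group and approver != requester

print(activation_steps("Active"))        # [] -> role usable right away
print(activation_steps("Eligible"))      # ['MFA', 'approval']
print(can_approve("User3", "User3", {"User3", "User4"}))  # False
print(can_approve("User3", "User4", {"User3", "User4"}))  # True
```

The `can_approve` check is why statement 3 is "No": User3 sits in the approver group but still cannot approve their own request.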
388
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67745-exam-az-500-topic-2-question-70-discussion/) You have an Azure subscription that contains an app named App1. App1 has the app registration shown in the following table. | App ID | Client ID | | :-------------------------------------- | :-------------------------------------------- | | d6c6440f-c20e-41a3-b178-a0f7297a1d6c | 2f5a06e2-3011-4780-9b8d-a34285f92b71 | You need to ensure that App1 can read all user calendars and create appointments. The solution must use the principle of least privilege. What should you do? A. Add a new Delegated API permission for Microsoft.Graph Calendars.ReadWrite. B. Add a new Application API permission for Microsoft.Graph Calendars.ReadWrite. C. Select Grant admin consent. D. Add new Delegated API permission for Microsoft.Graph Calendars.ReadWrite.Shared. **
** B. Add a new Application API permission for Microsoft.Graph Calendars.ReadWrite. **Explanation:** The requirement is for App1 to access *all* user calendars. Delegated permissions (options A and D) only grant access to the calendars of the user who is currently logged in. Application permissions (option B), on the other hand, allow the application itself to access resources regardless of who is logged in. Therefore, to meet the requirement of accessing all user calendars, an Application permission is necessary. Option C, granting admin consent, is not directly related to the permissions themselves but rather to the consent process. Using the principle of least privilege, granting only the necessary Application permission (Calendars.ReadWrite) is the most secure approach. **Why other options are incorrect:** * **A and D:** These options use delegated permissions, which are inappropriate because they only grant access on behalf of a specific user, not all users. * **C:** Granting admin consent is not a permission; it's a step in the process of granting permissions. While it might be necessary to deploy the app, it doesn't define what permissions the app gets. **Note:** The discussion thread shows some disagreement on the correct answer. Some users argue for delegated permissions, misunderstanding the requirement to access *all* user calendars. The chosen answer reflects the correct interpretation that application permissions are required to achieve this, adhering to the principle of least privilege.
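The delegated-versus-application distinction that drives this answer can be sketched as follows (a hypothetical model, not the Microsoft Graph SDK):

```python
# Hypothetical sketch contrasting delegated and application permissions.

def accessible_calendars(permission_type, signed_in_user, all_users):
    """Delegated permissions act on behalf of the signed-in user only;
    application permissions let the app act as itself across all users."""
    if permission_type == "Delegated":
        # No signed-in user means no access at all under delegation.
        return [signed_in_user] if signed_in_user else []
    if permission_type == "Application":
        return list(all_users)           # every user's calendar
    raise ValueError(f"unknown permission type: {permission_type}")

users = ["alice", "bob", "carol"]
print(accessible_calendars("Delegated", "alice", users))   # ['alice']
print(accessible_calendars("Application", None, users))    # ['alice', 'bob', 'carol']
```

Because App1 must reach *all* user calendars with no user signed in, only the application-permission branch satisfies the requirement.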
389
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67814-exam-az-500-topic-2-question-56-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the resources shown in the following table. (Image showing a table of users and groups is omitted as it is not directly relevant to the question's core logic). User2 is the owner of Group2. The user and group settings for App1 are configured as shown in the following exhibit. (Image omitted). You enable self-service application access for App1 as shown in the following exhibit. (Image omitted). User3 is configured to approve access to App1. After you enable self-service application access for App1, who will be configured as the Group2 owner and who will be configured as the App1 users? **
** * **Group2 owner:** User3 * **App1 users:** Group1 and Group2 members Enabling self-service application access for App1, and assigning User3 as the approver, results in User3 becoming the owner of Group2. The App1 users remain as initially configured: members of Group1 and Group2. This is confirmed by multiple users in the discussion who tested this scenario in the Azure portal. **Why other options are incorrect:** There are no explicitly stated alternative options in the provided text, but the discussion strongly suggests that any answer differing from the one given above would be incorrect based on practical testing. The question is specifically about the impact of enabling self-service application access and the role of the approver (User3) in this scenario. The fact that many users validated this in a lab environment strongly indicates that there is no other correct answer. The discussion does not show any disagreement about the correct answer, only some clarifications on the reasons behind it.
390
[View Question](https://www.examtopics.com/discussions/databricks/view/67820-exam-az-500-topic-2-question-64-discussion/) You have an Azure subscription that uses Azure Active Directory (Azure AD) Privileged Identity Management (PIM). A PIM user that is assigned the User Access Administrator role reports receiving an authorization error when performing a role assignment or viewing the list of assignments. You need to resolve the issue by ensuring that the PIM service principal has the correct permissions for the subscription. The solution must use the principle of least privilege. Which role should you assign to the PIM service principal? A. Contributor B. User Access Administrator C. Managed Application Operator D. Resource Policy Contributor
B. User Access Administrator The PIM service principal requires the User Access Administrator role to function correctly. This allows it to perform role assignments and view assignment lists within Azure AD PIM. Assigning any other role would likely be insufficient or grant excessive permissions, violating the principle of least privilege. WHY OTHER OPTIONS ARE INCORRECT: * **A. Contributor:** This role provides broad access to resources, exceeding the principle of least privilege. The PIM service principal only needs permission to manage role assignments, not all resources. * **C. Managed Application Operator:** This role is for managing applications, not user access. It's irrelevant to the problem. * **D. Resource Policy Contributor:** This role manages resource policies, not user access assignments. NOTE: The discussion highlights significant confusion regarding the clarity of the question's wording. Several users expressed difficulty understanding the question, suggesting potential ambiguity in the original phrasing. The provided answer is based on the most widely accepted interpretation in the discussion, but the ambiguity should be noted.
391
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67823-exam-az-500-topic-2-question-68-discussion/) You have an Azure subscription that contains the resources shown in the following table. (Image of resource table omitted as content is not directly relevant to the question) The subscription is linked to an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table. (Image of user table omitted as content is not directly relevant to the question) You create the groups shown in the following table. (Image of group table omitted as content is not directly relevant to the question) The membership rules for Group1 and Group2 are configured as shown in the following exhibit. (Image of membership rules omitted as content is not directly relevant to the question) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. *Statement 1:* User1 is a member of Group1. *Statement 2:* User2 is a member of Group2. *Statement 3:* The managed identity of VM1 is a member of Group2. **
** Yes, No, No * **Statement 1 (Yes):** User1's properties satisfy the membership rule for Group1: users are added if their account is enabled OR their usage location is US. User1's account is enabled, so User1 is a member of Group1. * **Statement 2 (No):** The exhibit showing Group2's membership rule is omitted, and User2's account-enabled status is unknown, so User2's membership in Group2 cannot be confirmed from the information given. The discussion reflects this uncertainty, and the consensus answer is No. * **Statement 3 (No):** A managed identity is a service principal, not an Azure AD user, and dynamic user membership rules evaluate user objects only. Therefore, the managed identity of VM1 does not become a member of Group2 through these rules. **Why other options are incorrect:** The discussion shows some disagreement on the interpretation of User2's membership. However, the answer relies on the incomplete information about User2 and the omitted rule exhibit; per the consensus of multiple users, a definitive "Yes" for User2's membership in Group2 cannot be supported.
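Dynamic membership rules combine property checks with operators such as `-eq` and `-or`. A rough evaluator for an OR-style rule like the one described for Group1 (illustrative only, not Azure AD's rule engine):

```python
def satisfies_or_rule(user):
    """Rough model of a rule like
    (user.accountEnabled -eq true) -or (user.usageLocation -eq "US"):
    with -or, a single true condition is enough for membership."""
    return user.get("accountEnabled") is True or user.get("usageLocation") == "US"

# Enabled account alone satisfies the rule, regardless of location.
print(satisfies_or_rule({"accountEnabled": True, "usageLocation": "DE"}))   # True
# Neither condition holds: not a member.
print(satisfies_or_rule({"accountEnabled": False, "usageLocation": "FR"}))  # False
```

This is why Statement 1 is "Yes": User1's enabled account satisfies the `-or` on its own.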
392
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67939-exam-az-500-topic-2-question-71-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains the users shown in the following table. | User | Group1 | Group2 | MFA Enabled | MFA Enforced | Sign-in risk level | | :---- | :------ | :------ | :---------- | :------------ | :----------------- | | User1 | Yes | No | Yes | No | Low | | User2 | Yes | Yes | No | No | High | You create and enforce an Azure AD Identity Protection sign-in risk policy that has the following settings: * Assignments: Include Group1, exclude Group2 * Conditions: Sign-in risk level: Low and above * Access: Allow access, Require multi-factor authentication You need to identify what occurs when the users sign in to Azure AD. What should you identify for each user? **
** * **User1:** Will be prompted for MFA but will not be blocked. The policy includes Group1 and excludes Group2; User1 is in Group1 and not in Group2, so the policy applies to User1. Its access control is "Allow access, require multi-factor authentication", so User1 is prompted to register for or complete MFA rather than being blocked. * **User2:** Will be blocked. User2 is in both Group1 and Group2, and the exclusion of Group2 takes precedence, so this policy does not apply to User2. However, User2's sign-in risk level is High, and even without this policy a high sign-in risk triggers other Identity Protection measures; combined with the lack of MFA, the suggested answer is that User2 is blocked. **Why other options are incorrect:** The discussion shows disagreement on whether User1 would be blocked or only prompted for MFA; some argue the "Require MFA" control prompts the user to register even when MFA is not yet enforced for them. The consensus leans toward User1 being prompted for MFA, while User2 is almost certainly blocked due to the high sign-in risk and lack of MFA.
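The include/exclude assignment logic can be sketched as a tiny helper (hypothetical, not an Azure SDK call). The key rule is that an exclusion always wins over an inclusion:

```python
def policy_applies(user_groups, include_groups, exclude_groups):
    """Conditional Access / Identity Protection assignment logic:
    membership in any excluded group removes the user from scope,
    regardless of included-group membership."""
    if user_groups & exclude_groups:
        return False                     # exclusion takes precedence
    return bool(user_groups & include_groups)

include, exclude = {"Group1"}, {"Group2"}
print(policy_applies({"Group1"}, include, exclude))            # True  (User1)
print(policy_applies({"Group1", "Group2"}, include, exclude))  # False (User2)
```

So the risk policy is in scope for User1 (MFA prompt) and out of scope for User2, whose outcome is instead driven by the High sign-in risk.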
393
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68012-exam-az-500-topic-4-question-68-discussion/) You have an Azure subscription named Sub1 that contains an Azure Policy definition named Policy1. Policy1 has the following settings: ✑ Definition location: Tenant Root Group ✑ Category: Monitoring You need to ensure that resources that are noncompliant with Policy1 are listed in the Azure Security Center dashboard. What should you do first? A. Change the Category of Policy1 to Security Center. B. Add Policy1 to a custom initiative. C. Change the Definition location of Policy1 to Sub1. D. Assign Policy1 to Sub1. **
** B. Add Policy1 to a custom initiative. **Explanation:** To have non-compliant resources listed in the Azure Security Center dashboard, the policy needs to be part of a custom initiative. Azure Security Center integrates with Azure Policy initiatives to display the compliance status of resources. Simply assigning the policy (option D) to the subscription doesn't guarantee visibility in Security Center; it must be part of a structured initiative for that integration. Changing the category (option A) or definition location (option C) won't directly affect Security Center's display of non-compliant resources. **Why other options are incorrect:** * **A. Change the Category of Policy1 to Security Center:** While it might seem logical, changing the category doesn't automatically integrate the policy with Azure Security Center's display mechanism. * **C. Change the Definition location of Policy1 to Sub1:** This changes where the policy definition is stored, but doesn't address how the policy's compliance results are shown in Security Center. * **D. Assign Policy1 to Sub1:** While assignment is necessary, it's insufficient. The policy must be part of a custom initiative to appear in the Security Center dashboard. The discussion highlights that the policy is already assigned at the Tenant Root Group level. **Note:** The discussion shows some disagreement on the correct answer, with some users suggesting option D. However, the most comprehensive explanation and the provided Microsoft documentation support option B as the most accurate solution for ensuring visibility of non-compliant resources in Azure Security Center.
394
[View Question](https://www.examtopics.com/discussions/databricks/view/68016-exam-az-500-topic-4-question-69-discussion/) You have an Azure subscription. You plan to create a workflow automation in Azure Security Center that will automatically remediate a security vulnerability. What should you create first? A. an automation account B. a managed identity C. an Azure logic app D. an Azure function app E. an alert rule
C. An Azure Logic App Explanation: The suggested answer and the supporting discussion from user `stonwall12` correctly identifies that an Azure Logic App should be created first. Azure Security Center's workflow automation leverages Logic Apps as its underlying platform. The Logic App defines the workflow and triggers the remediation process. Creating the Logic App first establishes the overall automation framework before configuring other supporting components. Why other options are incorrect: * **A. an automation account:** While an automation account is necessary for executing remediation scripts (as pointed out by `golitech`), it's not the *first* thing to create. The Logic App orchestrates the process; the automation account is a component *within* that process. * **B. a managed identity:** A managed identity is used for authentication and authorization, but it's not the primary component defining the workflow's structure. * **D. an Azure function app:** Azure Functions are also capable of automation, but in this context, the question explicitly refers to workflow automation within Azure Security Center, which uses Logic Apps. * **E. an alert rule:** Alert rules detect vulnerabilities. The automation *responds* to those alerts; it's not the initial step in building the automation itself. Note: There is a disagreement between users `stonwall12` and `golitech` on the correct answer. `golitech` argues that an Automation Account should be created first because it hosts the remediation scripts. However, the primary function described in the question is the *workflow automation*, for which the Logic App is the foundational element in Azure Security Center. The Automation Account is a secondary, supporting component within the Logic App's workflow.
395
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68019-exam-az-500-topic-5-question-49-discussion/) You have an Azure subscription that contains an Azure key vault named KeyVault1 and the virtual machines shown in the following table. | VM Name | VNet | IP Address | |---|---|---| | VM1 | VNET1 | 10.0.0.4 | | VM2 | VNET2 | 10.1.0.4 | You set the Key Vault access policy to Enable access to Azure Disk Encryption for volume encryption. KeyVault1 is configured as shown in the following exhibit. *(Note: The image showing KeyVault1 configuration is not provided in the text, but it's crucial to the question.)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. *(Note: The image showing the statements to evaluate is not provided in the text, but based on the discussion, it likely presents statements regarding VM1 and VM2's access to KeyVault1 for Azure Disk Encryption.)* **
** The provided text lacks the statements to be evaluated. However, based on the discussion, the likely answer is: 1. Yes, 2. No, 3. Yes. VM1, on the same VNet as KeyVault1 and with network access to it, would have access. VM2, on a different VNet, would not have direct network access unless specifically permitted. However, Azure Disk Encryption (ADE) is a trusted Microsoft service that can bypass the key vault's network restrictions, so VM2 could still use ADE. **Explanation:** The discussion shows disagreement among users. Some believe that only VM1, on the same VNet as the key vault, would have access, while others argue that ADE's trusted-service exception allows VM2 to use the key vault for encryption despite being on a different VNet. The solution depends heavily on the exact phrasing of the statements in the missing "Hot Area" image. **Why other options are incorrect (hypothetical):** Without the actual statements, it is impossible to state definitively why other options would be incorrect. However, a statement asserting that VM2 has direct network access to KeyVault1 without explicit network rules or the ADE exception would be false, and a statement claiming that network restrictions completely block ADE would also be false. The correctness of any answer hinges entirely on the content of the missing statements.
396
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68045-exam-az-500-topic-5-question-39-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table. | Storage Account Name | Storage Account Type | |---|---| | storage1 | Azure Blob Storage | | storage2 | Azure Files (SMB) | | storage3 | Azure Table Storage | You need to configure authorization access. Which authorization types can you use for each storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The correct authorization types for each storage account are: * **storage1 (Azure Blob Storage):** Shared Key, Shared Access Signature (SAS), and Azure Active Directory (AAD) * **storage2 (Azure Files SMB):** Shared Key only * **storage3 (Azure Table Storage):** Shared Key, Shared Access Signature (SAS), and Azure Active Directory (AAD) **Explanation:** This answer is supported by multiple users in the discussion thread and aligns with Microsoft documentation on Azure storage authorization. Azure Blob and Table storage support all three authorization methods: Shared Key, SAS, and AAD. However, Azure Files (SMB) only supports Shared Key authorization. **Why other options are incorrect:** Any answer that doesn't adhere to the above breakdown for each storage type would be incorrect. For instance, including SAS or AAD for Azure Files (SMB) would be wrong. **Note on Disagreement:** While the overall consensus in the discussion supports the provided answer, slight variations in phrasing exist. However, the core concept of which authorization methods are supported by each storage type remains consistent.
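The answer's matrix can be encoded as a simple lookup. Treat this as a snapshot of the answer's consensus rather than the documentation, since supported methods evolve over time:

```python
# Authorization methods per storage service, per this answer's consensus.
SUPPORTED_AUTH = {
    "blob":      {"Shared Key", "SAS", "Azure AD"},   # storage1
    "files_smb": {"Shared Key"},                      # storage2
    "table":     {"Shared Key", "SAS", "Azure AD"},   # storage3
}

def can_authorize(service, method):
    """True if the answer's matrix lists `method` for `service`."""
    return method in SUPPORTED_AUTH[service]

print(can_authorize("blob", "Azure AD"))       # True
print(can_authorize("files_smb", "Azure AD"))  # False, per this answer
```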
397
[View Question](https://www.examtopics.com/discussions/databricks/view/68065-exam-az-500-topic-7-question-1-discussion/) You need to recommend which virtual machines to use to host App1. The solution must meet the technical requirements for KeyVault1. Which virtual machines should you use? A. VM1 only B. VM1, VM2, VM3, and VM4 C. VM1 and VM2 only D. VM1, VM2, and VM4 only
B. VM1, VM2, VM3, and VM4 Explanation: All VMs can access KeyVault1 through a private endpoint within VNET1/Subnet1. Since all VNETs are peered, traffic traverses the Microsoft backbone network without public internet exposure. While there's a discussion about transitive routing (requiring User Defined Routes, or UDRs, for VMs in different regions to communicate reliably across peered networks), the question doesn't explicitly state regional differences or lack of UDRs. Therefore, based solely on the provided information and the highly upvoted response in the discussion, all VMs can access the KeyVault. Why other options are incorrect: * **A. VM1 only:** This is incorrect because it limits access to only one VM when all VMs *could* access the KeyVault given the peering arrangement. * **C. VM1 and VM2 only:** This is incorrect for the same reason as A; it excludes VMs that could potentially access the KeyVault. * **D. VM1, VM2, and VM4 only:** Similar to options A and C, this option excludes a VM that *could* potentially access the KeyVault, depending on network configuration and the existence of necessary routing tables. Note: The discussion highlights a potential discrepancy. While the top-voted response indicates all VMs can access the KeyVault due to VNet peering, another comment points out that transitive routing isn't default for regionally peered VNets, meaning VMs in different regions might not reach the KeyVault without specific routing configurations (UDRs). The answer above reflects the consensus of the top-voted response, but the possibility of needing additional configuration for full connectivity should be noted.
398
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68073-exam-az-500-topic-2-question-67-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table. (Image of a table showing resource groups, VMs, etc. The exact content of the table is not provided in the discussion and is therefore omitted). You create a custom RBAC role in Subscription1 by using the following JSON file. (Image of a JSON file defining a custom role named "Role1" with actions: "*/read", "Microsoft.Compute/*". The exact content of the JSON is not provided in the discussion and is therefore omitted). You assign Role1 to User1 on RG1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image of a table with three statements. The exact statements are not provided in the discussion and are therefore omitted, but the answers are to be Yes or No). **
** No, No, No. The overwhelming consensus in the discussion points to "No, No, No" as the correct answer. The reasoning offered is that "*/read" grants read access to all resources, while the commenters argued that "Microsoft.Compute/*" as written does not grant the specific operations implied by the (missing) statements, which are likely about modifying or creating Compute resources. Note that Azure RBAC does document wildcard actions (for example, "Microsoft.Compute/*" matches every operation under the Compute resource provider), so the statements' verdicts ultimately depend on the omitted exhibits. **Why other options are incorrect:** The discussion lacks explicit alternative answer options; however, any answer other than "No, No, No" is deemed incorrect based on the community's analysis of the custom role definition, and there is broad agreement among users on that result.
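The wildcard matching at issue can be approximated with shell-style patterns (a sketch, not Azure's actual RBAC evaluator):

```python
from fnmatch import fnmatch

def action_allowed(action, granted_patterns):
    """Rough model of Azure RBAC action matching: '*' in a granted
    action string is a wildcard, so '*/read' covers read operations
    across providers and 'Microsoft.Compute/*' covers every operation
    under the Compute resource provider."""
    return any(fnmatch(action, pattern) for pattern in granted_patterns)

role_actions = ["*/read", "Microsoft.Compute/*"]
print(action_allowed("Microsoft.Network/virtualNetworks/read", role_actions))         # True
print(action_allowed("Microsoft.Compute/virtualMachines/start/action", role_actions)) # True
print(action_allowed("Microsoft.Storage/storageAccounts/write", role_actions))        # False
```

Under this reading, the role would grant all Compute operations plus read everywhere, which is part of why the community debated what the missing statements actually asked.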
399
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68074-exam-az-500-topic-2-question-66-discussion/) You plan to deploy an app that will modify the properties of Azure Active Directory (Azure AD) users by using Microsoft Graph. You need to ensure that the app can access Azure AD. What should you configure first? A. an app registration B. an external identity C. a custom role-based access control (RBAC) role D. an Azure AD Application Proxy **
** A. an app registration To allow an application to interact with Azure AD and access resources like user properties through Microsoft Graph, you must first register that application within Azure AD. This app registration creates a service principal, which is the identity that Azure AD uses to represent the application. Without this initial registration, the application won't have the necessary credentials or permissions to authenticate and access Azure AD resources. **Why other options are incorrect:** * **B. an external identity:** External identities are used for users or applications that authenticate from outside your organization's Azure AD tenant. This is not the first step in enabling an application to interact with Azure AD. * **C. a custom role-based access control (RBAC) role:** RBAC roles define permissions within Azure AD, but you need an application registered first before you can assign it a role. You can't assign permissions to something that doesn't exist in Azure AD. * **D. an Azure AD Application Proxy:** Application Proxy is used for publishing on-premises applications to external users through Azure AD. It's unrelated to the initial configuration needed for an application to interact directly with Azure AD. The discussion shows overwhelming agreement on the correct answer.
400
[View Question](https://www.examtopics.com/discussions/databricks/view/68075-exam-az-500-topic-2-question-65-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains a user named Admin1. Admin1 is assigned the Application developer role. You purchase a cloud app named App1 and register App1 in Azure AD. Admin1 reports that the option to enable token encryption for App1 is unavailable. You need to ensure that Admin1 can enable token encryption for App1 in the Azure portal. What should you do? A. Upload a certificate for App1. B. Modify the API permissions of App1. C. Add App1 as an enterprise application. D. Assign Admin1 the Cloud application administrator role.
The correct answer is **A. Upload a certificate for App1.** While the discussion shows disagreement and alternative suggestions (primarily focusing on the distinction between App Registrations and Enterprise Applications, and whether Admin1 needs elevated permissions), the most direct and practical solution to enable token encryption, according to the provided Microsoft documentation linked in the discussion, involves uploading a certificate. The option to enable token encryption in the Azure portal is only directly available for Enterprise Applications created as such from the start, but for other applications, it requires configuring the application manifest to set the `tokenEncryptionKeyId` attribute. Uploading a certificate is the means to achieve that configuration, allowing Admin1 to enable token encryption for App1 in this scenario. WHY OTHER OPTIONS ARE INCORRECT: * **B. Modify the API permissions of App1:** Modifying API permissions doesn't directly enable token encryption. It controls what resources the application can access. * **C. Add App1 as an enterprise application:** While this might seem like a solution, the discussion highlights that this isn't always sufficient. Even if it's registered as an Enterprise Application, the token encryption option is not always directly available in the portal and requires a certificate upload for the configuration. * **D. Assign Admin1 the Cloud application administrator role:** Assigning Admin1 a higher role might provide the permissions to enable token encryption but is not the direct or most efficient solution, especially because the primary issue might be configuring App1, not Admin1's permissions. The primary focus should be on configuring App1 to support token encryption. NOTE: The discussion reveals a significant disagreement on the best approach. Some argue that adding the application as an enterprise app is sufficient, while others highlight the certificate upload method as necessary for App Registrations. 
The answer provided above leans towards the certificate upload method based on the linked Microsoft documentation and the practical aspects of enabling token encryption.
401
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68078-exam-az-500-topic-2-question-69-discussion/) You have a Microsoft 365 tenant that uses an Azure Active Directory (Azure AD) tenant. The Azure AD tenant syncs to an on-premises Active Directory domain by using an instance of Azure AD Connect. You create a new Azure subscription. You discover that the synced on-premises user accounts cannot be assigned roles in the new subscription. You need to ensure that you can assign Azure and Microsoft 365 roles to the synced Azure AD user accounts. What should you do first? A. Configure the Azure AD tenant used by the new subscription to use pass-through authentication. B. Configure the Azure AD tenant used by the new subscription to use federated authentication. C. Change the Azure AD tenant used by the new subscription. D. Configure a second instance of Azure AD Connect. **
** C. Change the Azure AD tenant used by the new subscription. The problem is that the new Azure subscription is associated with a different Azure AD tenant than the one where the on-premises users are synced. Therefore, those users aren't recognized within the new subscription's context. Changing the Azure AD tenant associated with the new subscription to match the one used by Microsoft 365 and Azure AD Connect will solve the problem and allow for role assignments. **Why other options are incorrect:** * **A. Configure the Azure AD tenant used by the new subscription to use pass-through authentication:** This changes the authentication method but doesn't address the core issue of the subscription being linked to the wrong Azure AD tenant. * **B. Configure the Azure AD tenant used by the new subscription to use federated authentication:** Similar to option A, this only changes the authentication method and doesn't solve the underlying tenant mismatch. * **D. Configure a second instance of Azure AD Connect:** This is unnecessary and won't resolve the problem. The issue is not with the synchronization process itself, but with the association of the new subscription to a different Azure AD tenant. **Note:** While the suggested answer is C, there is some disagreement in the discussion. One user suggests option D, but this is incorrect based on the core issue described. The prevailing consensus among the highly-voted responses points towards option C as the correct solution.
402
[View Question](https://www.examtopics.com/discussions/databricks/view/68081-exam-az-500-topic-11-question-1-discussion/) You plan to configure Azure Disk Encryption for VM4. Which key vault can you use to store the encryption key? A. KeyVault1 B. KeyVault2 C. KeyVault3
A. KeyVault1 Azure Disk Encryption requires the Key Vault and the VM to be in the same Azure region and subscription. The provided discussion states that VM4 is located in the West US region, and KeyVault1 is also located in the West US region. Therefore, KeyVault1 is the only suitable option. KeyVault2 and KeyVault3 are not in the same region as VM4 and therefore cannot be used. Other Options: Options B and C are incorrect because the key vault must reside in the same Azure region as the VM, as confirmed by multiple users in the discussion and Microsoft documentation. The requirement for same region and subscription is a core tenet of Azure Disk Encryption.
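The same-region, same-subscription constraint can be sketched as a simple filter. This is a minimal illustration with hypothetical data: the scenario confirms only that VM4 and KeyVault1 are both in West US, so the regions assumed for KeyVault2 and KeyVault3 below are placeholders.

```python
# Hypothetical resource records; only VM4/KeyVault1 = "westus" comes from the scenario.
vm = {"name": "VM4", "region": "westus", "subscription": "sub1"}

key_vaults = [
    {"name": "KeyVault1", "region": "westus", "subscription": "sub1"},
    {"name": "KeyVault2", "region": "eastus", "subscription": "sub1"},   # assumed
    {"name": "KeyVault3", "region": "westus2", "subscription": "sub1"},  # assumed
]

def usable_vaults(vm, vaults):
    """Return vaults eligible to hold the disk-encryption key for this VM:
    Azure Disk Encryption requires matching region and subscription."""
    return [
        v["name"]
        for v in vaults
        if v["region"] == vm["region"] and v["subscription"] == vm["subscription"]
    ]

print(usable_vaults(vm, key_vaults))  # ['KeyVault1']
```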
403
**** [View Question](https://www.examtopics.com/discussions/databricks/view/68082-exam-az-500-topic-11-question-2-discussion/) You need to encrypt storage1 to meet the technical requirements. Which key vaults can you use? A. KeyVault2 and KeyVault3 only B. KeyVault1 only C. KeyVault1 and KeyVault3 only D. KeyVault1, KeyVault2, and KeyVault3 **
** D. KeyVault1, KeyVault2, and KeyVault3 **Explanation:** The discussion overwhelmingly supports the answer D. Multiple users confirm that a storage account can use keys from a key vault in a different region, subscription, or even tenant (though this was a point of some discussion early on). Therefore, all three Key Vaults (KeyVault1, KeyVault2, and KeyVault3) are valid options for encrypting storage1. **Why other options are incorrect:** * **A. KeyVault2 and KeyVault3 only:** This is incorrect because there's no restriction preventing the use of KeyVault1. * **B. KeyVault1 only:** This is too restrictive; the key vault does not need to share the storage account's region or subscription, so KeyVault2 and KeyVault3 also qualify. * **C. KeyVault1 and KeyVault3 only:** This incorrectly excludes KeyVault2 as a valid option. **Note:** While the initial comments suggested limitations, later responses and links to Microsoft documentation clarify that the key vault's region and subscription do not restrict its ability to encrypt the storage account; the vault has traditionally needed to be in the same Azure tenant, though cross-tenant customer-managed keys are possible with additional configuration. The consensus in the updated discussion supports option D.
404
[View Question](https://www.examtopics.com/discussions/databricks/view/68257-exam-az-500-topic-3-question-50-discussion/) You plan to deploy Azure container instances. You have a containerized application that is comprised of two containers: an application container and a validation container. The application container is monitored by the validation container. The validation container performs security checks by making requests to the application container and waiting for responses after every transaction. You need to ensure that the application container and the validation container are scheduled to be deployed together. The containers must communicate to each other only on ports that are not externally exposed. What should you include in the deployment? A. application security groups B. network security groups (NSGs) C. management groups D. container groups
The correct answer is **D. container groups**. Azure Container Instances (ACI) uses *container groups* to deploy multiple containers as a single unit. Containers within the same group share a network namespace and can reach each other over localhost, on ports that are never exposed externally. This directly addresses the requirement of deploying the application and validation containers together and facilitating communication between them. ACI container groups manage the lifecycle and resource allocation of these containers, ensuring they are deployed and run concurrently. The internal communication between containers within the group is not exposed externally, fulfilling the requirement to avoid external exposure of their communication ports. Why other options are incorrect: * **A. application security groups:** These group VMs for network traffic rules; they do not deploy or co-schedule containers within a single ACI deployment. * **B. network security groups (NSGs):** NSGs filter network traffic at the subnet or network-interface level; they do not co-schedule containers or manage the deployment and lifecycle of containers within a single ACI deployment. * **C. management groups:** Management groups are used for managing Azure subscriptions and resources at an organizational level, not for deploying and managing individual container groups. There is a general consensus amongst the users in the discussion that the correct answer is D.
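A hedged sketch of what such a container group might look like in the ACI YAML format (image names, ports, and resource sizes are placeholders, not from the scenario). Because no `ipAddress` section is declared, nothing is reachable from the internet, while the two containers can still talk to each other over localhost:

```yaml
# Hypothetical ACI container-group spec; image names and ports are placeholders.
apiVersion: '2021-10-01'
location: eastus
name: app-with-validator
properties:
  osType: Linux
  containers:
    - name: app
      properties:
        image: contoso.azurecr.io/app:latest
        ports:
          - port: 8080        # reachable by the validator via localhost only
        resources:
          requests:
            cpu: 1.0
            memoryInGB: 1.5
    - name: validator
      properties:
        image: contoso.azurecr.io/validator:latest
        resources:
          requests:
            cpu: 0.5
            memoryInGB: 0.5
  # No ipAddress section: no public or private endpoint is exposed, so the
  # validation traffic between the two containers stays inside the group.
```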
405
[View Question](https://www.examtopics.com/discussions/databricks/view/68296-exam-az-500-topic-14-question-1-discussion/) HOTSPOT - You need to configure support for Microsoft Sentinel notebooks to meet the technical requirements. What is the minimum number of Azure container registries and Azure Machine Learning workspaces required? Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0031400001.png) *(Image contains the question text, no other visible text)*
0 Azure Container Registries and 1 Azure Machine Learning workspace. Microsoft Sentinel notebooks primarily run on Azure Machine Learning (AML) workspaces. Therefore, at least one AML workspace is required. The question does not mention any requirement for Azure Container Registries in the context of configuring support for Microsoft Sentinel notebooks; therefore, none are needed. While there's discussion mentioning Azure Synapse Workspace and local environments as alternative options, the question asks for the *minimum* number required, and the primary and most direct method is through an Azure ML workspace. There is some disagreement in the discussion regarding the need for other components, but the consensus among the highly upvoted comments points to the answer given above as correct.
406
**** [View Question](https://www.examtopics.com/discussions/databricks/view/6910-exam-az-500-topic-3-question-46-discussion/) You have a network security group (NSG) bound to an Azure subnet. You run `Get-AzNetworkSecurityRuleConfig` and receive the output shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0028700001.jpg) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0028800001.png) **
** Box 1: **Able to connect to East US 2** The NSG rule `StorageEA2Allow` has `DestinationAddressPrefix {Storage/EastUS2}`, explicitly allowing connections to the storage account in the East US 2 region. Box 2: **Dropped** The NSG rule `Contoso_FTP` allows TCP port 21 traffic from source IP address 1.2.3.4/32 to destination IP address 10.0.0.5/32. The question implies FTP traffic is destined for 10.0.0.10. Since 10.0.0.5/32 refers to only the single IP address 10.0.0.5, and not 10.0.0.10, the FTP traffic to 10.0.0.10 will be dropped because no matching rule exists. **Why other options are incorrect:** The discussion highlights a disagreement about the destination IP address of the FTP traffic. While the image shows a destination of 10.0.0.5, the discussion correctly points out that the question implies FTP traffic is going to 10.0.0.10. Therefore, "Allowed" is incorrect for Box 2 because the implied destination IP address is not covered by the existing rule. The consensus among the discussants is that the traffic will be dropped. There is some discussion of a possible typo in the question, which would impact the answer; however, based on the information provided, the most consistent and technically correct answer is "Dropped".
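The /32 scoping that causes the drop in Box 2 can be verified with Python's stdlib `ipaddress` module: a /32 prefix covers exactly one host, so traffic to 10.0.0.10 matches no allow rule.

```python
import ipaddress

# The Contoso_FTP rule permits TCP 21 traffic only to the single host 10.0.0.5/32.
allowed = ipaddress.ip_network("10.0.0.5/32")

print(ipaddress.ip_address("10.0.0.5") in allowed)   # True  -> matched by the rule
print(ipaddress.ip_address("10.0.0.10") in allowed)  # False -> no match, traffic dropped
```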
407
[View Question](https://www.examtopics.com/discussions/databricks/view/6912-exam-az-500-topic-4-question-38-discussion/) You create a new Azure subscription. You need to ensure that you can create custom alert rules in Azure Security Center. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Onboard Azure Active Directory (Azure AD) Identity Protection. B. Create an Azure Storage account. C. Implement Azure Advisor recommendations. D. Create an Azure Log Analytics workspace. E. Upgrade the pricing tier of Security Center to Standard.
The correct answer is **D and E**. * **D. Create an Azure Log Analytics workspace:** Azure Security Center uses Log Analytics to collect and analyze security data. A Log Analytics workspace is required to store the logs necessary for creating and using custom alert rules. Without a Log Analytics workspace, Security Center cannot collect the data to create those rules. * **E. Upgrade the pricing tier of Security Center to Standard:** Custom alert rules are a feature of the *Standard* tier of Azure Security Center. The free tier does not support this functionality. Why other options are incorrect: * **A. Onboard Azure Active Directory (Azure AD) Identity Protection:** While Azure AD Identity Protection is important for overall security, it's not directly required for creating custom alert rules within Azure Security Center. * **B. Create an Azure Storage account:** Azure Storage accounts are used for various purposes, but are not directly involved in the functionality of creating custom alert rules in Azure Security Center. * **C. Implement Azure Advisor recommendations:** Azure Advisor provides general recommendations for optimizing your Azure environment. While these recommendations might indirectly improve security, they are not a prerequisite for creating custom alert rules in Security Center. Note: The discussion shows some disagreement on the correct answer. While the provided answer is widely accepted as correct based on general Azure Security Center functionality, it is important to rely on official Microsoft documentation for definitive confirmation.
408
[View Question](https://www.examtopics.com/discussions/databricks/view/69446-exam-az-500-topic-10-question-1-discussion/) DRAG DROP - You need to perform the planned changes for OU2 and User1. Which tools should you use? To answer, drag the appropriate tools to the correct resources. Each tool may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: (Image shows a drag-and-drop table with "Resources" column listing "OU2" and "User1", and a "Tools" column with blank spaces to drag options into. The image also contains a table with pre-populated information about OUs, Groups and Users including a group named "Group1" described as "cloud only").
The correct tools are Azure AD Connect for OU2 and the Azure portal (or Azure Active Directory within the Azure portal) for User1. OU2, being an on-premises organizational unit, requires Azure AD Connect for synchronization with Azure AD. User1, needing to be added to Group1 (a cloud-only group according to the provided note: "The Azure Active Directory (Azure AD) tenant contains..."), should be managed using a cloud-based tool like the Azure portal or its Azure Active Directory interface. The discussion highlights that Group1's cloud-only nature necessitates a cloud-based management approach. Other options are not specified in the provided text but would likely be incorrect because they wouldn't address the on-premises and cloud-only aspects of the problem correctly. For example, solely using a cloud-based tool for OU2 would be insufficient, as it's an on-premises entity needing synchronization. Note: The discussion shows some minor disagreement on the specifics of which cloud tool (Azure portal, PowerShell, or Graph API) is best for managing User1's membership in Group1. However, all participants agree that a cloud-based management tool is necessary for User1, given that Group1 is entirely cloud-based.
409
[View Question](https://www.examtopics.com/discussions/databricks/view/7009-exam-az-500-topic-2-question-1-discussion/) You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You create a new stored access policy. Does this meet the goal? A. Yes B. No
B. No Creating a new stored access policy does *not* revoke existing policies or SAS tokens. To revoke access, you must delete the existing stored access policies or set their expiry time to a past date; ad hoc SAS tokens that are not tied to a stored access policy can only be invalidated by regenerating the account keys used to sign them. SAS tokens granted under the old policies remain valid until their associated policies are deleted or modified. The discussion shows a clear consensus amongst users that option A is incorrect. While one user provided a contradictory response referencing Microsoft documentation, the overwhelming majority of the discussion supports option B as the correct answer.
410
**** [View Question](https://www.examtopics.com/discussions/databricks/view/7011-exam-az-500-topic-3-question-23-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. (Image 1 shows a table with VM names, Resource Groups, and locations). You create the Azure policies shown in the following table. (Image 2 shows a table describing Azure policies, including their names, resource type, and assignments). You create the resource locks shown in the following table. (Image 3 shows a table listing resource locks, including their name, level, state, and scope). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: (Image 4 shows a table with three statements): 1. You can start VM1. 2. You can start VM2. 3. You can create a new VM in RG2. **
** No, No, No. VM1 has a read-only lock applied at the resource level, preventing any modifications, including starting it. VM2 has a read-only lock applied at the resource group level (RG2), preventing any actions on resources within RG2, including starting VM2. Creating a new VM in RG2 is also prevented by the read-only lock on RG2. The discussion strongly supports this answer and multiple users agree. **Why other options are incorrect:** Any answer containing "Yes" for any of the three statements would be incorrect because the provided tables clearly show read-only locks preventing the actions described in each statement. There is no ambiguity in the presented data; the locks explicitly prevent the actions.
411
**** [View Question](https://www.examtopics.com/discussions/databricks/view/7013-exam-az-500-topic-3-question-44-discussion/) From Azure Security Center, you create a custom alert rule. You need to configure which users will receive an email message when the alert is triggered. What should you do? A. From Azure Monitor, create an action group. B. From Security Center, modify the Security policy settings of the Azure subscription. C. From Azure Active Directory (Azure AD), modify the members of the Security Reader role group. D. From Security Center, modify the alert rule. **
** A. From Azure Monitor, create an action group. **Explanation:** Action groups in Azure Monitor are used to define notification preferences for alerts across various Azure services, including Azure Security Center. When an alert is triggered, the action group dictates how and to whom notifications (such as email) are sent. Creating an action group in Azure Monitor is the correct approach to specify which users receive email notifications for a custom alert rule created within Azure Security Center. **Why other options are incorrect:** * **B. From Security Center, modify the Security policy settings of the Azure subscription:** Security policy settings control broader security configurations, not the specific notification recipients for individual alerts. * **C. From Azure Active Directory (Azure AD), modify the members of the Security Reader role group:** Azure AD manages user identities and roles, but doesn't directly handle alert notifications. The Security Reader role provides access to security information, not alert notification configuration. * **D. From Security Center, modify the alert rule:** While you define the alert criteria within the alert rule itself in Security Center, you do *not* specify notification recipients there. The recipients are defined separately via an action group. **Note:** The discussion reveals some disagreement about the question's accuracy and whether the method of configuring alert notifications has changed since the question was written. Some users suggest that newer methods might involve Workflow Automation in Microsoft Defender for Cloud. This answer reflects the most likely and widely accepted approach based on the provided information, but the validity of the question itself might be outdated.
412
**** [View Question](https://www.examtopics.com/discussions/databricks/view/71051-exam-az-500-topic-10-question-2-discussion/) You need to meet the technical requirements for the finance department users. Which CAPolicy1 settings should you modify? A. Cloud apps or actions B. Conditions C. Grant D. Session **
** D. Session The discussion overwhelmingly supports option D as the correct answer. Modifying the "Session" settings within a Conditional Access policy (CAPolicy1) allows control over aspects like sign-in frequency. This directly addresses the need to meet technical requirements for users, as session controls impact how and how often users need to authenticate. Users' technical requirements are most likely related to security and access management, and session controls directly address this by enforcing regular re-authentication or limiting session duration. **Why other options are incorrect:** * **A. Cloud apps or actions:** This setting defines *which* resources the policy applies to, not how frequently a user needs to authenticate or the overall user experience. * **B. Conditions:** This specifies *when* the policy is applied (e.g., based on user location, device type). It doesn't directly control the user's session experience. * **C. Grant:** This determines *what level of access* is granted when the conditions are met. It doesn't directly address the user's technical requirements regarding session management. **Note:** While the discussion shows strong consensus, there's a mention of option C being selected by one user. This highlights the potential for ambiguity in the question or a misunderstanding of the options in real-world exam scenarios.
413
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74361-exam-az-500-topic-2-question-72-discussion/) You have an Azure subscription that contains an Azure SQL database named SQL1. You plan to deploy a web app named App1. You need to provide App1 with read and write access to SQL1. The solution must meet the following requirements: * Provide App1 with access to SQL1 without storing a password. * Use the principle of least privilege. * Minimize administrative effort. Which type of account should App1 use to access SQL1, and which database roles should you assign to App1? **
** App1 should use a **Managed Identity** and be assigned the **db_datareader** and **db_datawriter** database roles. **Explanation:** A Managed Identity allows the web app to authenticate to SQL Database without needing to store or manage credentials directly within the application. This addresses the requirement of not storing passwords. Assigning `db_datareader` and `db_datawriter` roles grants the application only the necessary permissions (read and write), adhering to the principle of least privilege. This also minimizes administrative effort because there's no need to manually manage connection strings or credentials. **Why other options are incorrect:** The provided discussion and suggested answer do not offer alternative options, so there are no other options to discuss. The solution presented is the best practice based on the stated requirements.
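A minimal T-SQL sketch of the role assignments, run against SQL1 while connected as an Azure AD administrator. The identity name `[App1]` is an assumption; it must match the web app's managed-identity display name:

```sql
-- Create a contained database user mapped to App1's managed identity,
-- then grant only read and write access (least privilege).
CREATE USER [App1] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [App1];
ALTER ROLE db_datawriter ADD MEMBER [App1];
```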
414
[View Question](https://www.examtopics.com/discussions/databricks/view/74415-exam-az-500-topic-3-question-7-discussion/) You need to grant the required permissions to a user named User2-1234578 to manage the virtual networks in the RG1lod1234578 resource group. The solution must use the principle of least privilege. To complete this task, sign in to the Azure portal. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0023300001.jpg)
The suggested answer incorrectly uses the "Virtual Machine Contributor" role. While the image shows this role being selected, this role does *not* provide the necessary permissions to manage virtual networks. The principle of least privilege requires assigning only the necessary permissions. To manage virtual networks within a resource group, the user should be assigned the "Network Contributor" role. The correct solution involves these steps: 1. In the Azure portal, locate and select the RG1lod1234578 resource group. 2. Click Access control (IAM). 3. Click the Role assignments tab. 4. Click Add > Add role assignment. 5. In the Role dropdown, select "Network Contributor". 6. In the Select list, select user User2-1234578. 7. Click Save. The "Network Contributor" role grants the necessary permissions to manage virtual networks without granting excessive privileges, adhering to the principle of least privilege. Why other options are incorrect: The suggested answer uses the "Virtual Machine Contributor" role, which is insufficient for managing virtual networks. It only allows management of virtual machines themselves, not the underlying network infrastructure. Other roles with broader permissions would violate the principle of least privilege. There is no discussion of other options, but any other role broader than "Network Contributor" is wrong in the context of this question.
415
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74480-exam-az-500-topic-3-question-61-discussion/) You have an Azure subscription that contains the following resources: A virtual network named VNET1 that contains two subnets named Subnet1 and Subnet2. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0030400003.png) ✑ A virtual machine named VM1 that has only a private IP address and connects to Subnet1. You need to ensure that Remote Desktop connections can be established to VM1 from the internet. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0030500002.png) **
** The correct sequence of actions is: 1. **Create a new subnet:** Azure Firewall must be deployed into a dedicated subnet named AzureFirewallSubnet, so a new subnet is required; Subnet2 could only serve this purpose if it already carried that name, which is why the discussion favors creating a new one. 2. **Deploy Azure Firewall:** The Azure Firewall is then deployed into the newly created subnet. This provides the necessary network security and routing capabilities. 3. **Create a NAT rule collection:** This allows inbound connections (RDP in this case) to reach VM1 from the public internet, securely translating the firewall's public IP address to the private IP address of VM1. **Why other options are incorrect (or less optimal):** The discussion highlights some confusion around using Subnet2. Because the firewall requires its own dedicated AzureFirewallSubnet, creating a new subnet is the recommended approach. Deploying the firewall *before* creating the subnet wouldn't work. The order is crucial; you need the subnet before you can deploy the firewall into it, and you need the firewall before you configure NAT rules to allow RDP access. **Note:** The discussion shows some disagreement on whether to use an existing subnet (Subnet2) or create a new one. The above answer reflects the generally accepted best practice and the most common solution suggested in the discussion – creating a new subnet for the Azure Firewall.
416
[View Question](https://www.examtopics.com/discussions/databricks/view/74546-exam-az-500-topic-4-question-79-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive it to a storage account. What should you use to retrieve the diagnostics logs? A. Azure Monitor B. SQL query editor in Azure C. File Explorer in Windows D. Azure Storage Explorer
D. Azure Storage Explorer Azure Storage Explorer is a dedicated tool designed for managing and interacting with Azure Storage data. Because the diagnostic logs are archived to an Azure Storage account, Azure Storage Explorer is the most appropriate tool to retrieve them. It provides a user-friendly interface for browsing, downloading, and managing files within the storage account. Why other options are incorrect: * **A. Azure Monitor:** Azure Monitor primarily focuses on monitoring and managing Azure resources and services. While it can collect certain logs, it's not the direct mechanism for accessing files stored within a storage account. * **B. SQL query editor in Azure:** This is used for querying databases, not for accessing files stored in blob storage. * **C. File Explorer in Windows:** File Explorer is a local file system tool. It cannot directly access files stored in an Azure storage account. Note: While the discussion shows overwhelming agreement on the correct answer (D), there's a mention of AzCopy as a possible alternative, which is a command-line utility for transferring data to and from Azure Blob Storage. However, based purely on the question's wording and the provided context, Azure Storage Explorer (D) is the most directly applicable and user-friendly solution.
417
[View Question](https://www.examtopics.com/discussions/databricks/view/74550-exam-az-500-topic-2-question-74-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0018200001.png) You plan to deploy the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0018300001.png) You need to assign managed identities to the virtual machines. The solution must meet the following requirements: ✑ Assign each virtual machine the required roles. ✑ Use the principle of least privilege. What is the minimum number of managed identities required? A. 1 B. 2 C. 3 D. 4
The correct answer is B. Two managed identities are the minimum required. To adhere to the principle of least privilege and assign only the necessary roles, separate managed identities should be used for different sets of permissions. VM1 and VM2 require access to Key Vault 1, while VM3 and VM4 require access to Key Vault 2. Therefore, one managed identity can be assigned to VM1 and VM2 with access to Key Vault 1, and a separate managed identity can be assigned to VM3 and VM4 with access to Key Vault 2. Using two separate user-assigned managed identities ensures that VMs only have the permissions they need, fulfilling the principle of least privilege. Why other options are incorrect: * **A. 1:** This is incorrect because it violates the principle of least privilege. Assigning a single managed identity to all four VMs would grant unnecessary access to Key Vaults to VMs that do not require them. * **C. 3:** This is unnecessary. Two managed identities are sufficient to cover all access requirements, one for each set of VMs with access to a specific Key Vault. * **D. 4:** This is also unnecessary. Each VM does not need its own individual managed identity. The requirement is to grant only the necessary access for each VM, and this can be achieved by using two managed identities grouping VMs based on their required access to Key Vaults. Note: There is some disagreement in the discussion regarding whether system-assigned or user-assigned managed identities should be used. The answer assumes user-assigned managed identities, which can be shared across multiple VMs; a system-assigned identity is bound to a single VM, so relying on system-assigned identities would require four.
418
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74688-exam-az-500-topic-2-question-77-discussion/) You have an Azure subscription that contains a resource group named RG1. RG1 contains a storage account named storage1. You have two custom Azure roles named Role1 and Role2 that are scoped to RG1. The permissions for Role1 are shown in the following JSON code. [Image of JSON code showing permissions for Role1]. The permissions for Role2 are shown in the following JSON code. [Image of JSON code showing permissions for Role2]. You assign the roles to the users shown in the following table. [Image of table showing user assignments]. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. [Image showing the statements: User1 can read data from storage1. User2 can read data from storage1. User3 can perform Azure backups from storage1.] **
** The correct answer is Yes, Yes, Yes. However, there is significant disagreement in the discussion regarding the correct answer, particularly concerning User 1 and User 2's ability to read data. * **User 1 (Can read data from storage1): Yes.** While Role1 only explicitly grants `Microsoft.Storage/storageAccounts/read`, that permission, combined with the ability to retrieve account keys or SAS tokens (implied or explicitly granted), allows access to the data. The discussion highlights confusion on this point, with some believing `read` only allows listing the account, not accessing its data; with key or SAS access, however, the data can be retrieved. * **User 2 (Can read data from storage1): Yes.** Role2 grants `Microsoft.Storage/storageAccounts/*`, providing full access, including reading data. Early comments asserted that Role2 lacked data actions, but the wildcard `*` covers them. * **User 3 (Can perform Azure backups from storage1): Yes.** User 3 is assigned Role2, whose `Microsoft.Storage/storageAccounts/*` permission is broad enough to perform backups, even though some discussion notes that `Microsoft.RecoveryServices/` actions are seemingly absent. **Why other options are incorrect (based on the provided information and discussion):** The discussion shows conflicting opinions, particularly on whether `Microsoft.Storage/storageAccounts/read` grants data-reading capabilities without further actions (such as retrieving keys or SAS tokens). Considering the ability to use keys and SAS tokens, and the all-encompassing permission of `Microsoft.Storage/storageAccounts/*`, "Yes, Yes, Yes" carries the most weight given the available information. Further clarification on the exact key and SAS permissions within the roles might alter this.
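How the wildcard in Role2 covers specific operations can be illustrated with glob-style matching. This is an analogy, not Azure's actual RBAC evaluator, but like Azure's `*`, `fnmatchcase` matches any characters, including the `/` separators between action segments:

```python
from fnmatch import fnmatchcase

# Role2's wildcard action covers every operation under the storageAccounts namespace.
role2_action = "Microsoft.Storage/storageAccounts/*"

operations = [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/listKeys/action",
    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
]

for op in operations:
    print(op, "->", fnmatchcase(op, role2_action))  # True for all three
```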
419
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74690-exam-az-500-topic-4-question-1-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0033100001.png) *(Image contains a table showing SQL1 in RG3, SQL2 planned in RG2, SQL3 planned in RG1)* Transparent Data Encryption (TDE) is disabled on SQL1. You assign policies to the resource groups as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0033100002.png) *(Image shows RG1 and RG2 with Policy 1 (Deny) and RG3 with Policy 2 (DeployIfNotExists for TDE))* You plan to deploy Azure SQL databases by using an Azure Resource Manager (ARM) template. The databases will be configured as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0033200001.png) *(Image shows SQL2 and SQL3 planned for deployment with no TDE, and shows SQL1 is already existing)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0033200002.png) *(Image shows three statements: 1. TDE will be enabled on SQL1 automatically. 2. SQL2 will be deployed. 3. SQL3 will be deployed.)* **
** No, No, No.

**Explanation:**

* **Statement 1 (TDE will be enabled on SQL1 automatically): No.** SQL1 already exists with TDE disabled. Policy 2 (DeployIfNotExists) in RG3 applies only to *new* resources; it will not automatically enable TDE on an existing resource. A remediation task would be needed.
* **Statement 2 (SQL2 will be deployed): No.** SQL2 is planned for deployment in RG2, which has Policy 1 (Deny). Deny policies prevent the deployment of non-compliant resources.
* **Statement 3 (SQL3 will be deployed): No.** SQL3 is planned for deployment in RG1, which also has Policy 1 (Deny), preventing its deployment.

The discussion shows some disagreement on the precise order of policy evaluation; however, the consensus is that Deny policies override other policy types, making the deployment of SQL2 and SQL3 impossible. The DeployIfNotExists policy will not retroactively apply to an already existing resource like SQL1.

**Why other options are incorrect:** Any answer including "Yes" for statements 2 or 3 is incorrect because the Deny policy prevents deployment. A "Yes" for statement 1 is incorrect because DeployIfNotExists applies only to new resources, not existing ones.
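The policy behavior described above can be sketched as a tiny simulation. This is an illustration only, not the Azure Policy engine; the function names are hypothetical:

```python
# Toy model of the policy effects discussed above (not the Azure Policy engine).

def can_deploy(policy_effects):
    """A new resource deploys only if no assigned policy has a Deny effect."""
    return "Deny" not in policy_effects

def tde_state_after_policies(already_exists, tde_enabled, policy_effects):
    """DeployIfNotExists fires on create/update of new resources, not
    retroactively on existing ones (a remediation task is needed for those)."""
    if not already_exists and "DeployIfNotExists" in policy_effects:
        return True
    return tde_enabled

print(tde_state_after_policies(True, False, ["DeployIfNotExists"]))  # SQL1: False
print(can_deploy(["Deny"]))  # SQL2 in RG2: False
print(can_deploy(["Deny"]))  # SQL3 in RG1: False
```

The sketch captures why all three statements are No: Deny blocks the two planned deployments, and DeployIfNotExists leaves the existing database untouched.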
420
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74863-exam-az-500-topic-4-question-77-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive them to a storage account. What should you use to retrieve the diagnostics logs?

A. Azure Cosmos DB explorer
B. SQL query editor in Azure
C. AzCopy
D. the Security admin center

**
** C. AzCopy

AzCopy is a command-line utility designed for transferring data to and from Azure Storage accounts. Since the Azure Storage Analytics logs are archived to a storage account, AzCopy is the appropriate tool to retrieve them.

**Why other options are incorrect:**

* **A. Azure Cosmos DB explorer:** This tool is used to manage and interact with Azure Cosmos DB databases, not Azure Storage accounts.
* **B. SQL query editor in Azure:** While Azure offers SQL-based query tools for some services, it is not the standard method for retrieving data directly from the Azure Blob storage where the logs are likely stored.
* **D. the Security admin center:** The Security Center provides security-related information and management, but it is not directly involved in retrieving the contents of storage blobs.

**Note:** The discussion shows some disagreement on the best tool, with suggestions including Azure Storage Explorer and CLI commands in addition to AzCopy. While AzCopy is the suggested answer and a valid solution, the other methods mentioned could also work depending on the user's preference and setup.
421
**** [View Question](https://www.examtopics.com/discussions/databricks/view/74889-exam-az-500-topic-5-question-1-discussion/) You have an Azure subscription that contains an Azure SQL database named SQL1 and an Azure key vault named KeyVault1. KeyVault1 stores the keys shown in the following table.

| Key Name | Key Type | Key Size |
|---|---|---|
| Key1 | RSA | 2048 |
| Key2 | RSA-HSM | 2048 |
| Key3 | EC | 256 |
| Key4 | RSA | 4096 |

You need to configure Transparent Data Encryption (TDE). TDE will use a customer-managed key for SQL1. Which keys can you use?

A. Key2 only
B. Key1 only
C. Key2 and Key3 only
D. Key1, Key2, Key3, and Key4
E. Key1 and Key2 only

**
** E. Key1 and Key2 only

**Explanation:** According to the Microsoft documentation linked in the discussion, TDE protectors must be asymmetric RSA or RSA-HSM keys with a key size of 2048 or 3072 bits. Key1 (RSA, 2048) and Key2 (RSA-HSM, 2048) meet these criteria. Key3 is an EC key, and Key4 is an RSA key with an unsupported size (4096 bits). Therefore, only Key1 and Key2 are valid choices for configuring TDE with a customer-managed key.

**Why other options are incorrect:**

* **A. Key2 only:** While Key2 is valid, Key1 is also a valid option.
* **B. Key1 only:** While Key1 is valid, Key2 is also a valid option.
* **C. Key2 and Key3 only:** Key3 is an EC key, which is not supported for TDE.
* **D. Key1, Key2, Key3, and Key4:** Key3 and Key4 do not meet the requirements for TDE.

**Note:** The discussion shows a consensus among users that the correct answer is E. One user reports lab testing where EC and 4096-bit keys were greyed out, corroborating the answer and the documentation.
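The key requirements can be expressed as a small check. This is a sketch of the documented constraints, not an Azure SDK call:

```python
# TDE protector requirements per the cited docs: asymmetric RSA or RSA-HSM,
# key size 2048 or 3072 bits.

def valid_tde_protector(key_type, key_size):
    return key_type in ("RSA", "RSA-HSM") and key_size in (2048, 3072)

keys = [("Key1", "RSA", 2048), ("Key2", "RSA-HSM", 2048),
        ("Key3", "EC", 256), ("Key4", "RSA", 4096)]
usable = [name for name, ktype, ksize in keys if valid_tde_protector(ktype, ksize)]
print(usable)  # ['Key1', 'Key2']
```

Running the filter over the table reproduces answer E: only Key1 and Key2 qualify.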
422
**** [View Question](https://www.examtopics.com/discussions/databricks/view/75013-exam-az-500-topic-2-question-73-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains two users named User1 and User2 and a registered app named App1. You create an app-specific role named Role1. You need to assign Role1 to User1 and enable User2 to request access to App1. Which two settings should you modify? To answer, select the appropriate settings in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0017900001.png) **
** The correct settings to modify are:

1. **Users and groups:** Assign User1 to Role1 under App1's "Users and groups" settings. This directly assigns the custom role to the user.
2. **Self-service:** Enable the "Allow users to request access to this application" option for App1 under its self-service settings. This allows User2 to request access to App1.

**Why other options are incorrect (based on the discussion):** The suggested answer in the original post ("Roles and administrators" and "Self-service") is disputed. The discussion indicates that "Roles and administrators" may be limited to built-in roles and not cover custom app roles. Therefore, "Users and groups" is the more accurate selection for assigning the custom role Role1 to User1.

**Note:** There is clear disagreement in the discussion regarding the correct answer. While the suggested answer is marked as correct, the highly upvoted comment strongly argues that the "Users and groups" and "Self-service" combination is the correct solution based on practical experience.
423
[View Question](https://www.examtopics.com/discussions/databricks/view/75019-exam-az-500-topic-4-question-78-discussion/) You have an Azure Sentinel workspace. You need to create a playbook. Which two triggers will start the playbook? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. An Azure Sentinel scheduled query rule is executed.
B. An Azure Sentinel data connector is added.
C. An Azure Sentinel alert is generated.
D. An Azure Sentinel hunting query result is returned.
E. An Azure Sentinel incident is created.
C and E. Azure Sentinel playbooks can be triggered by the generation of an Azure Sentinel alert (C) and the creation of an Azure Sentinel incident (E). These are common events that often necessitate automated responses handled by playbooks.

Options A, B, and D are incorrect because:

* **A (Scheduled query rule execution):** While scheduled query rules are valuable for monitoring, they don't directly trigger playbooks. They might generate alerts (C), which *would* then trigger a playbook.
* **B (Data connector addition):** Adding a data connector is a configuration change, not an event that would automatically initiate a playbook's execution.
* **D (Hunting query result):** A hunting query result, by itself, doesn't automatically trigger a playbook. The analyst would need to take further action based on the results, potentially creating an alert or incident that *could* then trigger a playbook.

Note: The discussion heavily supports the answer CE, with multiple users indicating this as the correct solution. There is no evidence of conflicting opinions within the given discussion.
424
**** [View Question](https://www.examtopics.com/discussions/databricks/view/75077-exam-az-500-topic-3-question-8-discussion/) SIMULATION - You need to ensure that only devices connected to a 131.107.0.0/16 subnet can access data in the rg1lod1234578 Azure Storage account. To complete this task, sign in to the Azure portal. **
** To restrict access to the rg1lod1234578 Azure Storage account to only devices on the 131.107.0.0/16 subnet, follow these steps in the Azure portal:

1. **Navigate to the storage account:** Locate and select the rg1lod1234578 storage account.
2. **Open the networking settings:** Go to "Security + networking", select "Networking", and then open "Firewalls and virtual networks."
3. **Configure network access:** Choose "Selected networks" under "Public network access" to deny access by default.
4. **Add the address range:** Under the "Firewall" section, enter 131.107.0.0/16 as an allowed address range. This creates a network rule allowing only traffic from this range. (Because 131.107.0.0/16 is a public address space, it belongs in the firewall's IP rules rather than in a virtual network rule.)
5. **Save changes:** Click "Save" to apply the configuration.

**Explanation:** This approach uses Azure Storage's network security features to restrict access based on network rules. By allowing access only from "Selected networks" and specifically permitting the 131.107.0.0/16 range, only devices within that range can access the storage account.

**Why other options are incorrect:** The discussion highlights some disagreement on whether the configuration should be done under the "Firewall" section or the "Virtual networks" section within "Firewalls and virtual networks." While the exact location may vary slightly between Azure portal versions, the core concept remains the same: configure network rules to restrict access to the specified range. The concern raised by Viggy1212 about adding the public address space 131.107.0.0/16 under a VNet-specific rule supports using the firewall's IP rules instead: adding the range there blocks all traffic *except* from that range.
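Whether a given client address falls inside the allowed range can be checked locally with Python's `ipaddress` module. This is an illustration of the rule's effect, not an Azure API call:

```python
import ipaddress

# The storage firewall rule permits only clients in 131.107.0.0/16.
allowed = ipaddress.ip_network("131.107.0.0/16")

def client_permitted(client_ip):
    return ipaddress.ip_address(client_ip) in allowed

print(client_permitted("131.107.42.7"))  # True: inside the allowed range
print(client_permitted("131.108.0.1"))   # False: outside the allowed range
```

Any address whose first two octets are not 131.107 is rejected, which is exactly what the "Selected networks" rule enforces at the storage account.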
425
**** [View Question](https://www.examtopics.com/discussions/databricks/view/75823-exam-az-500-topic-2-question-6-discussion/) DRAG DROP - You are implementing conditional access policies. You must evaluate the existing Azure Active Directory (Azure AD) risk events and risk levels to configure and implement the policies. You need to identify the risk level of the following risk events:

✑ Users with leaked credentials
✑ Impossible travel to atypical locations
✑ Sign-ins from IP addresses with suspicious activity

Which level should you identify for each risk event? To answer, drag the appropriate levels to the correct risk events. Each level may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

![Image](https://www.examtopics.com/assets/media/exam-media/04258/0005900001.jpg)

**
** The provided question is outdated, and the suggested answer is unreliable because of changes in Azure AD since the question was created. Multiple commenters in the discussion thread explicitly state that the question and its associated answer are no longer valid. The referenced external links are also outdated and do not reflect current Azure AD risk levels. Therefore, a definitive answer cannot be provided based solely on the given materials. There is strong consensus in the discussion that the question is invalid.

**Why other options are incorrect (or why no definitive answer can be given):** The suggested answer in the original post and other answers in the discussion are based on outdated information and are not reliable for determining current risk levels in Azure AD. The discussion clearly indicates that Microsoft has changed how risk events are categorized, rendering any specific risk-level assignment inaccurate.
426
**** [View Question](https://www.examtopics.com/discussions/databricks/view/7673-exam-az-500-topic-2-question-11-discussion/) Your company has two offices in Seattle and New York. Each office connects to the Internet by using a NAT device. The offices use the IP addresses shown in the following table. (Image shows Seattle Office: 192.168.1.100/24, New York Office: 192.168.2.100/24)

The company has an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table. (Image shows User: Britta, MFA Status: Enabled; User: John, MFA Status: Enabled)

The MFA service settings are configured as shown in the exhibit. (Click the Exhibit tab.) (Image shows MFA settings including methods: Phone call, Text message, Microsoft Authenticator app)

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

Statement 1: User Britta is required to perform multi-factor authentication (MFA) to sign in.
Statement 2: User John is required to perform multi-factor authentication (MFA) to sign in.
Statement 3: The New York office is excluded from the requirement to perform MFA.

**
** Yes, No, No

* **Statement 1: Yes.** Britta is enabled for MFA. While the initial login might not trigger MFA (as some commenters suggest), the settings clearly indicate MFA is enabled for her account. The question does not specify a first-login scenario, and the principle of zero trust would mandate MFA. There is some disagreement in the discussion about whether this refers to the first login or subsequent logins.
* **Statement 2: No.** John's account also has MFA enabled, meaning *subsequent* logins will require MFA. However, the *initial* login might not require it, because the user first needs to register an authentication method. The ambiguity and conflicting interpretations in the discussion are noted here.
* **Statement 3: No.** The provided IP address for the New York office (192.168.2.100/24) is a private IP address. The discussion highlights that for cloud-based Azure MFA, only *public* IP addresses can be used for exemptions. Therefore, this private IP range would not be excluded from MFA requirements.

The discussion shows conflicting opinions primarily about the interpretation of "initial login" versus "subsequent logins" and whether the first login would trigger MFA regardless of the MFA settings being enabled for the users. The answer provided here leans toward a stricter interpretation of the settings and the principle of zero trust.
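The point behind Statement 3 can be verified locally: both office ranges are RFC 1918 private addresses, and the MFA trusted-IPs exemption accepts only public ranges. A quick check with the standard `ipaddress` module:

```python
import ipaddress

# Both offices sit behind NAT on RFC 1918 private subnets, so neither office
# subnet itself can be added as a trusted-IP exemption; only the NAT device's
# public IP could be.
for office, ip in [("Seattle", "192.168.1.100"), ("New York", "192.168.2.100")]:
    addr = ipaddress.ip_address(ip)
    print(office, "private" if addr.is_private else "public")
```

Both lines report "private", which is why the New York office range cannot be excluded from MFA.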
427
[View Question](https://www.examtopics.com/discussions/databricks/view/78631-exam-az-500-topic-1-question-18-discussion/) Your company uses Azure Active Directory (Azure AD) in a hybrid configuration. All users utilize hybrid Azure AD joined Windows 10 computers. You manage an Azure SQL database that allows Azure AD authentication. You need to ensure database developers can connect to the SQL database via Microsoft SQL Server Management Studio (SSMS) using their on-premises Active Directory accounts, minimizing authentication prompts. Which authentication method should the developers use?

A. Azure AD token
B. Azure Multi-Factor authentication
C. Active Directory integrated authentication
C. Active Directory integrated authentication

Active Directory integrated authentication is the correct answer because it seamlessly integrates with on-premises Active Directory. This minimizes authentication prompts for users already authenticated on their domain-joined Windows 10 machines, satisfying the requirement to keep prompts to a minimum. The hybrid Azure AD joined nature of the computers facilitates this seamless integration.

Why other options are incorrect:

* **A. Azure AD token:** Azure AD tokens are primarily used for cloud-based authentication and are not the optimal solution for accessing an Azure SQL database using on-premises Active Directory credentials in a hybrid environment.
* **B. Azure Multi-Factor Authentication:** While MFA enhances security, it adds extra authentication steps, directly contradicting the requirement to minimize authentication prompts.

Note: The discussion shows unanimous agreement on the correct answer.
428
[View Question](https://www.examtopics.com/discussions/databricks/view/79615-exam-az-500-topic-2-question-78-discussion/) You have an Azure subscription that contains a storage account named storage1 and two web apps named app1 and app2. Both apps will write data to storage1. You need to ensure that each app can read only the data that it has written. What should you do?

A. Provide each app with a system-assigned identity and configure storage1 to use Azure AD User account authentication.
B. Provide each app with a separate Storage account key and configure the app to send the key with each request.
C. Provide each app with a user-managed identity and configure storage1 to use Azure AD User account authentication.
D. Provide each app with a unique Base64-encoded AES-256 encryption key and configure the app to send the key with each request.
The suggested answer is D, and the discussion highlights a potential overlap with option C.

Option D, providing each app with a unique Base64-encoded AES-256 encryption key, ensures that only the app possessing the correct key can decrypt and read the data it wrote. This directly addresses the requirement that each app access only its own data. Encryption at rest, combined with appropriate authentication mechanisms, provides robust security.

While option C (user-managed identities with Azure AD authentication) offers strong authentication, it doesn't inherently restrict access to only the data written by a specific app. Azure AD authentication verifies the identity of the app, granting it access to the storage account, but it doesn't by itself control which data within the storage account the app can access. Encryption (option D) adds the necessary layer of data-access restriction.

Option A (system-assigned identities) is similar to option C in that it doesn't directly solve the problem of restricting data access to only the data written by each app. Option B (separate storage account keys) is less secure than managed identities and encryption because storing and managing keys within the application itself presents a security risk.

Note: The discussion reveals some disagreement on the best answer. While option D is presented as the suggested answer, option C is also argued to be essential for secure authentication, with option D adding the data-access restriction layer. The best solution may involve a combination of both.
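For context on what option D entails, a unique per-app key can be generated as follows. This is a sketch only; key storage and the actual encryption step are out of scope, and `new_app_key` is a hypothetical helper name:

```python
import base64
import os

# A unique Base64-encoded AES-256 key per app: AES-256 uses a 32-byte key,
# and Base64-encoding 32 random bytes yields a 44-character string.
def new_app_key():
    return base64.b64encode(os.urandom(32)).decode("ascii")

key_app1 = new_app_key()
key_app2 = new_app_key()
print(len(key_app1))         # 44
print(key_app1 != key_app2)  # True: each app holds its own key
```

Because each app holds a distinct key, data encrypted under app1's key is unreadable to app2 even though both can reach the same storage account, which is the isolation property the question asks for.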
429
**** [View Question](https://www.examtopics.com/discussions/databricks/view/80613-exam-az-500-topic-5-question-50-discussion/) You have an Azure subscription that contains an Azure SQL database named DB1 in the East US Azure region. You create the storage accounts shown in the following table.

| Storage Account Name | Location | Account Type | Account Kind |
|---|---|---|---|
| storage1 | East US | Standard | General-purpose v2 |
| storage2 | East US | Premium | Block Blob |
| storage3 | East US | Premium | File Share |
| storage4 | East US 2 | Standard | General-purpose v2 |

You plan to enable auditing for DB1. Which storage accounts can you use as the auditing destination for DB1?

A. storage1 and storage4 only
B. storage1 only
C. storage1, storage2, storage3, and storage4
D. storage1, storage2, and storage3 only
E. storage2 and storage3 only

**
** D. storage1, storage2, and storage3 only

**Explanation:** The correct answer is D because:

* **storage1:** A General-purpose v2 storage account in the East US region (same as DB1), which is a supported type for Azure SQL Database auditing.
* **storage2:** A Premium Block Blob storage account in the East US region (same as DB1), which is also a supported type.
* **storage3:** A Premium File Share account in the correct region (East US). The discussion highlights disagreement on whether this option is valid, since some material indicates that file shares are NOT supported for auditing. However, option D includes this account, so in the context of the provided material and the suggested answer, storage3 is considered valid.
* **storage4:** Located in the East US 2 region, which differs from DB1's region (East US). Azure SQL Database auditing requires the storage account to be in the same region as the database.

**Why other options are incorrect:**

* **A:** storage4 is in a different region.
* **B:** storage2 and storage3 are also valid options within the same region.
* **C:** storage4 is in the wrong region.
* **E:** storage1 is also a valid option.

**Note:** There is some disagreement in the discussion about the suitability of Premium File Share storage (storage3) for auditing. While the suggested answer's commentary and some comments indicate it is not suitable, option D includes it, so the answer above follows the suggested answer. A more definitive answer would require clarification on whether Premium File Share accounts are supported for Azure SQL Database auditing.
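The region constraint alone reproduces answer D. Filtering the table locally (an illustration of the same-region rule, not an Azure query):

```python
# The audit destination must be in the same region as DB1 (East US); per
# answer D, all three same-region accounts qualify and storage4 is excluded.
db1_region = "East US"
accounts = [
    ("storage1", "East US"),
    ("storage2", "East US"),
    ("storage3", "East US"),
    ("storage4", "East US 2"),
]
candidates = [name for name, region in accounts if region == db1_region]
print(candidates)  # ['storage1', 'storage2', 'storage3']
```

Note that the filter encodes only the region rule; the separate question of whether a Premium File Share account type is supported is the contested point in the discussion.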
430
**** [View Question](https://www.examtopics.com/discussions/databricks/view/81269-exam-az-500-topic-4-question-82-discussion/) You have the Azure resources shown in the following table. (Image depicts a table with VMs and their resource groups and subscriptions. No text from the image is directly relevant beyond identifying VMs, resource groups, and subscriptions.) You need to meet the following requirements:

✑ Internet-facing virtual machines must be protected by using network security groups (NSGs).
✑ All the virtual machines must have disk encryption enabled.

What is the minimum number of security policies that you should create in Microsoft Defender for Cloud?

A. 1
B. 2
C. 3
D. 4

**
** B. 2

The question requires two distinct security policies in Microsoft Defender for Cloud to meet the stated requirements: one policy addresses disk encryption for *all* VMs, while a separate policy addresses NSG protection specifically for internet-facing VMs. While it is possible to combine some aspects of security within a single policy in Defender for Cloud (as noted in the discussion), the question asks for the *minimum number* to satisfy *both* requirements. Creating two separate policies provides better control and maintainability, particularly if the requirements later change.

**Why other options are incorrect:**

* **A. 1:** A single policy cannot simultaneously enforce both disk encryption (which applies to all VMs) and NSG protection (which applies only to internet-facing VMs). Attempting to do so would require complex, less manageable rules within a single policy.
* **C. 3:** Unnecessarily complex and more than the minimum required.
* **D. 4:** Also unnecessarily complex and more than the minimum required.

**Note:** The discussion shows disagreement about the correct answer, with one user suggesting a single policy could suffice because of Defender for Cloud's flexibility. However, to meet the *explicit* and *separate* requirements as stated, two policies are the minimum approach for clarity and management.
431
**** [View Question](https://www.examtopics.com/discussions/databricks/view/81271-exam-az-500-topic-4-question-83-discussion/) You have an Azure subscription that contains an Azure key vault. The role assignments for the key vault are shown in the following exhibit. (Image of Key Vault role assignments is referenced but not included here). Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: (Image of a table with two dropdowns labeled "Box 1" and "Box 2" is referenced but not included here). **
** The provided text gives the following information for each user:

* **User1 (Owner):** Has full access to the control plane (managing RBAC permissions) but not necessarily the data plane (accessing keys and secrets) at the Key Vault level. Subscription-level ownership does not automatically grant data-plane access to a specific key vault.
* **User2 (Key Vault Crypto Officer):** Can perform any action on the keys of a key vault, except manage permissions.
* **User3 (Key Vault Secrets Officer):** Can perform any action on the secrets of a key vault, except manage permissions.
* **User4 (Key Vault Administrator):** The scope is limited to "../KeyVault1/Keys/Key1", meaning administrative access only to a *specific key* (Key1) within KeyVault1, not the entire key vault.

Therefore:

* **Box 1:** Only User1, User2, and User4 can access or manage keys. User1 can manage permissions but cannot directly access keys without additional permissions. User2 can manage keys (excluding permissions). User4 has access limited to Key1.
* **Box 2:** Only User1 and User3 can access or manage secrets. User1 can manage permissions for secret access but cannot directly access secrets. User3 has access to secrets. User4 does *not* have access to secrets, because their scope is limited to a specific key.

**Note:** The discussion highlights some disagreement about User1's access. While User1 has Owner permissions at the subscription level, access to the Key Vault's data plane is not automatically granted. The answer provided assumes that data-plane access to the key vault requires explicit role assignments within the key vault itself.

**Why other options are incorrect:** The answer choices are not provided in the text, so the correct answer can only be deduced from the roles and their documented permissions and limitations. Any other combination would incorrectly reflect the specific access limitations for each role and scope specified.
432
**** [View Question](https://www.examtopics.com/discussions/databricks/view/81368-exam-az-500-topic-5-question-2-discussion/) You plan to use Azure Disk Encryption for several virtual machine disks. You need to ensure that Azure Disk Encryption can retrieve secrets from the KeyVault12345678 Azure key vault. To complete this task, sign in to the Azure portal and modify the Azure resources. What steps should you take? The image below shows a partial screenshot of the Azure portal. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0046000001.png) **
** To enable Azure Disk Encryption to retrieve secrets from KeyVault12345678, follow these steps:

1. In the Azure portal, navigate to KeyVault12345678.
2. Go to the "Settings" section and select "Access configuration" (note: there is some disagreement in the discussion about whether this is labeled "Access policies" or "Access configuration"; both terms appear in user responses).
3. Locate and select the "Azure Disk Encryption for volume encryption" option. This may require clicking a "Click to show advanced access policies" button, as indicated in user responses. Depending on your needs, you may also need to select "Enable access to Azure Virtual Machines for deployment" and "Enable access to Azure Resource Manager for template deployment."
4. Click "Save" or "Apply" (the discussion disagrees on the exact button label) to save your changes.

**Explanation:** This process grants the Azure Disk Encryption service the necessary permissions to access secrets stored within the specified key vault. The specific path and button labels may vary slightly depending on the Azure portal version.

**Why other options are incorrect (or partially incorrect):** The initial suggested answer identifies the setting as "Access policies". The discussion reveals this to be outdated or version-dependent. While both terms and paths are mentioned in user responses, "Access configuration" appears to be the more current and widely accepted term.
433
[View Question](https://www.examtopics.com/discussions/databricks/view/82113-exam-az-500-topic-5-question-43-discussion/) You need to ensure that when administrators deploy resources by using an Azure Resource Manager template, the deployment can access secrets in an Azure key vault named KV12345678. To complete this task, sign in to the Azure portal. How should you configure Azure Key Vault to allow Azure Resource Manager template deployments to access secrets within it?
To allow Azure Resource Manager template deployments to access secrets in Azure Key Vault KV12345678, you must enable access for Azure Resource Manager within the key vault's access policies. In the Azure portal, navigate to the key vault, open its settings (or properties), and find the "Access policies" or "Access configuration" section. Within this section, locate and enable the option for "Enable access to Azure Resource Manager for template deployment" (or a similarly worded checkbox), then save the changes.

Several users in the discussion provided slightly different methods, but the core solution remains consistent: enabling a specific setting within the key vault's access policy to grant the Azure Resource Manager service the necessary permissions. `mrt007`'s suggestion of adding a specific access policy with a "Secret Management template" and specifying the Azure Resource Manager service principal is an alternative approach that could also work, but it is not the most direct method; the suggested answer focuses on the simpler setting described above.
434
**** [View Question](https://www.examtopics.com/discussions/databricks/view/82156-exam-az-500-topic-4-question-55-discussion/) SIMULATION - You need to ensure that the events in the NetworkSecurityGroupRuleCounter log of the VNET01-Subnet0-NSG network security group (NSG) are stored in the logs1234578 Azure Storage account for 30 days. To complete this task, sign in to the Azure portal. **
** To ensure events from the NetworkSecurityGroupRuleCounter log of the VNET01-Subnet0-NSG network security group are stored in the logs1234578 Azure Storage account for 30 days, follow these steps:

1. **Access the NSG:** In the Azure portal, locate the VNET01-Subnet0-NSG network security group, either by searching for "Network security groups" and selecting the relevant NSG or by navigating to it through the left-hand navigation pane.
2. **Open diagnostic settings:** Navigate to the NSG's properties and select "Diagnostic settings" (note: some users report this is now under "Monitoring").
3. **Add a diagnostic setting:** Click "Add diagnostic setting" or equivalent.
4. **Specify the settings:**
   * Give the diagnostic setting a name (the name is not crucial).
   * Select "NetworkSecurityGroupRuleCounter" under the "Log" section.
   * Choose "Archive to a storage account" as the destination.
   * Select "logs1234578" as the storage account.
   * Set "Retention (days)" to 30.
5. **Save:** Click "Save" to apply the changes.

**Note:** While this solution is generally correct and confirmed by multiple users, there is disagreement on the retention policy. Some users suggest that retention should now be configured using Azure Storage lifecycle management rules instead of directly within the diagnostic settings. The most up-to-date approach may involve both: diagnostic settings to send the logs, and lifecycle management rules to control storage retention.

**Why other options are incorrect (implicitly):** The discussion doesn't offer alternative solutions, but an incorrect approach would be any method that fails to send the specified log category to the designated storage account with the required retention period, or that relies solely on obsolete methods without incorporating storage lifecycle management.
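The configuration the portal creates can be pictured as the following payload sketch. The shape loosely follows the `Microsoft.Insights/diagnosticSettings` resource, but the field names here are illustrative assumptions, the setting name is hypothetical, and the storage account ID is abbreviated:

```python
# Illustrative diagnostic-setting payload (assumed shape, not an exact schema).
setting = {
    "name": "nsg-rule-counter-to-storage",  # hypothetical setting name
    "storageAccountId": "/subscriptions/.../storageAccounts/logs1234578",
    "logs": [
        {
            "category": "NetworkSecurityGroupRuleCounter",
            "enabled": True,
            "retentionPolicy": {"enabled": True, "days": 30},
        }
    ],
}
print(setting["logs"][0]["retentionPolicy"]["days"])  # 30
```

The three pieces the question asks for map directly onto the payload: the log category, the target storage account, and the 30-day retention.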
435
**** [View Question](https://www.examtopics.com/discussions/databricks/view/82298-exam-az-500-topic-5-question-48-discussion/) You have an Azure subscription that contains a Microsoft SQL server named Server1 and an Azure key vault named vault1. Server1 hosts a database named DB1. Vault1 contains an encryption key named key1. You need to ensure that you can enable Transparent Data Encryption (TDE) on DB1 by using key1. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

**Select and Place:** (The image shows a drag-and-drop interface with the following options, though their exact order isn't preserved here):

* Create a managed identity for Server1.
* Configure permissions for Server1.
* Add key1 to Server1.
* Configure the TDE protector on Server1.
* Grant the managed identity access to key1.
* Assign an Azure Active Directory (Azure AD) identity to your server.
* Grant Key Vault permissions to your server.
* Add the Key Vault key to the server and set the TDE Protector.

**
** The correct sequence of actions is:

1. **Assign an Azure Active Directory (Azure AD) identity to your server (Server1).** This gives the SQL server an identity to authenticate with Azure Key Vault.
2. **Grant the managed identity access to key1.** This allows the managed identity of Server1 to access the encryption key in vault1.
3. **Grant Key Vault permissions to your server (Server1).** This step grants the necessary permissions for Server1's managed identity to perform actions with key1 (such as retrieving it). In practice, this step is often combined with step 2.
4. **Configure the TDE protector on Server1.** This sets the TDE protector to use the specified key (key1) for encrypting the database (DB1).

**Explanation:** The process involves granting the SQL server the ability to access the encryption key in Azure Key Vault. This requires assigning a managed identity to the server and then granting that identity the appropriate permissions. Finally, the TDE protector must be configured to use that key. The order is critical because you cannot grant permissions or configure TDE until the server has an identity. The discussion reflects some disagreement on the exact wording and grouping of these steps, with some users suggesting alternative phrasing of the actions.

**Why other options are incorrect (or less optimal):** Simply adding the key without establishing the appropriate managed identity and permissions won't work. The system needs to authenticate the SQL server's request to access the key. The order reflects the dependency of each step on the previous ones. For example, step 4 cannot be performed until steps 1-3 are completed.
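The sequence can be sketched with the Azure CLI. This is an illustration, not the portal-based exam answer; the resource group `RG1` and the key version placeholder are assumptions, while Server1, vault1, and key1 come from the question:

```shell
# 1. Assign a system-assigned managed identity to the SQL server.
az sql server update --resource-group RG1 --name Server1 --assign-identity
PRINCIPAL_ID=$(az sql server show --resource-group RG1 --name Server1 \
  --query identity.principalId --output tsv)

# 2./3. Grant that identity the key permissions TDE needs on vault1.
az keyvault set-policy --name vault1 --object-id "$PRINCIPAL_ID" \
  --key-permissions get wrapKey unwrapKey

# 4. Add key1 to the server and make it the TDE protector
#    (the full key identifier, including version, is assumed).
KEY_ID="https://vault1.vault.azure.net/keys/key1/<key-version>"
az sql server key create --resource-group RG1 --server Server1 --kid "$KEY_ID"
az sql server tde-key set --resource-group RG1 --server Server1 \
  --server-key-type AzureKeyVault --kid "$KEY_ID"
```

Note how the script mirrors the required ordering: the identity must exist before the Key Vault policy can reference its principal ID, and the key must be added before it can be set as the protector.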
436
[View Question](https://www.examtopics.com/discussions/databricks/view/82397-exam-az-500-topic-5-question-47-discussion/) You need to create a web app named Intranet12345678 and enable users to authenticate to the web app by using Azure Active Directory (Azure AD). To complete this task, sign in to the Azure portal. Describe the steps to create and configure the web application for Azure AD authentication.
There are several valid approaches to achieving Azure AD authentication for a web app, as evidenced by the differing instructions in the discussion. The core steps remain consistent, but the exact interface and options might vary depending on the Azure portal version.

**A common approach (combining elements from various provided solutions):**

1. **Create the Web App:** In the Azure portal, create a new Web App resource. Name it "Intranet12345678". Choose your subscription, resource group (create a new one if needed), and other necessary settings (runtime stack, etc.).
2. **Navigate to Authentication Settings:** Once the web app is deployed, navigate to its settings. The exact location might vary slightly depending on the Azure portal version; it is likely found under "Settings" -> "Authentication" or a similarly named section.
3. **Add Identity Provider:** Add a new identity provider. Select "Microsoft" as the identity provider. This will allow authentication using Azure AD accounts.
4. **Configure Azure AD Settings:** This involves linking your web app to your Azure AD tenant and potentially creating a new App Registration in Azure AD, or selecting an existing one. You will need to configure how Azure AD handles authentication requests (e.g., requiring authentication for all access).
5. **Save Changes:** Save the authentication configuration changes.

**Why other options might be partially incorrect:** The provided solutions offer slight variations in steps and terminology due to Azure portal updates. Some steps, like the initial creation of the web app (through "Create a resource" vs. the older "App services" approach), reflect changes in the Azure interface over time. While the underlying functionality remains the same, the exact menus and button names might have evolved. All the provided answers achieve the same end result; however, the correct steps depend on the Azure portal version.
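The portal steps have an Azure CLI equivalent. A sketch, assuming a resource group `RG1`, an App Service plan `plan1`, and an existing app registration whose client ID you supply (all assumptions; only the app name comes from the question):

```shell
# Create the web app (resource group and plan names are assumptions).
az group create --name RG1 --location eastus
az appservice plan create --resource-group RG1 --name plan1 --sku B1
az webapp create --resource-group RG1 --plan plan1 --name Intranet12345678

# Turn on App Service authentication backed by Azure AD.
# <app-registration-client-id> is the client ID of an existing or
# newly created Azure AD app registration (placeholder).
az webapp auth update --resource-group RG1 --name Intranet12345678 \
  --enabled true \
  --action LoginWithAzureActiveDirectory \
  --aad-client-id <app-registration-client-id>
```

`--action LoginWithAzureActiveDirectory` corresponds to the portal option that requires authentication for all access rather than allowing anonymous requests.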
437
[View Question](https://www.examtopics.com/discussions/databricks/view/82489-exam-az-500-topic-4-question-48-discussion/) You use Microsoft Defender for Cloud for the centralized policy management of three Azure subscriptions. You use several policy definitions to manage the security of the subscriptions. You need to deploy the policy definitions as a group to all three subscriptions. Solution: You create a policy initiative and an assignment that is scoped to the Tenant Root Group management group. Does this meet the goal? A. Yes B. No
A. Yes

This solution meets the goal. Creating a policy initiative and assigning it to the Tenant Root Group management group correctly deploys the policy definitions to all three subscriptions. A policy initiative allows grouping multiple policy definitions, and assigning it at the Tenant Root level ensures all subscriptions under that tenant inherit the policies.

Why other options are incorrect:

B. No - This is incorrect because the described solution effectively deploys the policy definitions across all three subscriptions.

Note: While one user commented that scoping the initiative to the Tenant Root Group is a "bad idea," the question only asks if the solution meets the goal, not whether it is best practice. The consensus among the responses is that the solution does achieve the stated objective.
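Grouping definitions into an initiative and assigning it at a management-group scope can be sketched with the Azure CLI. The initiative name, the `definitions.json` file, and the management group ID are all assumptions for illustration:

```shell
# Group existing policy definitions into an initiative (set definition).
# definitions.json is an assumed file listing the policyDefinitionId of
# each definition to include.
az policy set-definition create \
  --name security-baseline \
  --display-name "Security baseline initiative" \
  --definitions definitions.json

# Assign the initiative at a management-group scope so every subscription
# underneath it inherits the policies.
az policy assignment create \
  --name security-baseline-assignment \
  --policy-set-definition security-baseline \
  --scope /providers/Microsoft.Management/managementGroups/<tenant-root-group-id>
```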
438
[View Question](https://www.examtopics.com/discussions/databricks/view/82533-exam-az-500-topic-5-question-34-discussion/) You need to ensure that the rg1lod1234578n1 Azure Storage account is encrypted by using a key stored in the KeyVault12345678 Azure key vault. To complete this task, sign in to the Azure portal. What steps are required to achieve this encryption using customer-managed keys?
To encrypt the rg1lod1234578n1 Azure Storage account using a key from KeyVault12345678, follow these steps within the Azure portal:

1. **Navigate to the storage account:** Locate and select the storage account, rg1lod1234578n1.
2. **Access Encryption settings:** In the storage account settings, find and select the "Encryption" option.
3. **Choose Customer-Managed Keys:** Select the "Use your own key" option. This indicates you'll be using a key from an external key vault.
4. **Specify the Key Vault:** Choose "Select from Key Vault".
5. **Select the Key Vault and Key:** Select KeyVault12345678 as the key vault, and then choose the specific key within that vault that you wish to use for encryption.

The provided images show the "Use your own key" option and the subsequent selection of a key vault and key. Note that some users in the discussion comment that the images in the original question are outdated; however, the core steps remain consistent with current Azure functionality. One user suggests creating a user-managed identity and assigning it the Key Vault Encryption Officer role. While not strictly necessary in all cases (depending on permissions already assigned), this additional step might be required in certain Azure environments and is a valid approach to ensure successful encryption.
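The same configuration can be sketched with the Azure CLI using the account's system-assigned identity. The resource group `RG1` and the key name `key1` are assumptions (the question does not name the key):

```shell
# Let the storage account authenticate to Key Vault with a
# system-assigned identity (resource group "RG1" is an assumption).
az storage account update --resource-group RG1 --name rg1lod1234578n1 \
  --assign-identity
PRINCIPAL_ID=$(az storage account show --resource-group RG1 \
  --name rg1lod1234578n1 --query identity.principalId --output tsv)

# Grant that identity the key permissions needed for encryption.
az keyvault set-policy --name KeyVault12345678 --object-id "$PRINCIPAL_ID" \
  --key-permissions get wrapKey unwrapKey

# Point the storage account's encryption at the customer-managed key
# (the key name "key1" is an assumption).
az storage account update --resource-group RG1 --name rg1lod1234578n1 \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://keyvault12345678.vault.azure.net \
  --encryption-key-name key1
```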
439
[View Question](https://www.examtopics.com/discussions/databricks/view/82628-exam-az-500-topic-4-question-81-discussion/) You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive it to a storage account. What should you use to retrieve the diagnostics logs? A. Azure Cosmos DB explorer B. Azure Monitor C. Microsoft Defender for Cloud D. Azure Storage Explorer
D. Azure Storage Explorer

Azure Storage Explorer is a dedicated tool designed for managing and interacting with Azure Storage data. It provides a user-friendly interface to browse, download, and manage various Azure Storage resources, including logs. Since the logs are stored in an Azure Storage account, Azure Storage Explorer is the most appropriate tool for retrieving them.

Why other options are incorrect:

* **A. Azure Cosmos DB explorer:** This tool is for managing Azure Cosmos DB databases, not Azure Storage accounts.
* **B. Azure Monitor:** While Azure Monitor can collect and display logs from various Azure services, it is not the primary or most direct tool for accessing logs specifically stored within an Azure Storage account.
* **C. Microsoft Defender for Cloud:** This focuses on security alerts and threat detection, not directly on retrieving storage logs.

Note: The discussion shows a strong consensus that the correct answer is D, Azure Storage Explorer. However, some users also mention AzCopy as a possible alternative, suggesting a potential ambiguity or slight variation in acceptable answers depending on context.
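For the AzCopy alternative mentioned in the discussion: Storage Analytics writes its logs to the hidden `$logs` container, which can be downloaded in bulk. A sketch, where the account name and SAS token are placeholders:

```shell
# Storage Analytics logs live in the hidden "$logs" container.
# "<storage-account>" and "<sas-token>" are placeholders; the SAS needs
# read/list permission on the container.
azcopy copy \
  'https://<storage-account>.blob.core.windows.net/$logs?<sas-token>' \
  ./analytics-logs --recursive
```

Single quotes matter here: `$logs` must reach AzCopy literally rather than being expanded by the shell.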
440
[View Question](https://www.examtopics.com/discussions/databricks/view/82702-exam-az-500-topic-14-question-2-discussion/) From Microsoft Defender for Cloud, you need to deploy SecPol1. What should you do first? A. Enable Microsoft Defender for Cloud. B. Create an Azure Management group. C. Create an initiative. D. Configure continuous export.
A. Enable Microsoft Defender for Cloud.

The discussion indicates that before deploying SecPol1 using Microsoft Defender for Cloud, you must first ensure that Microsoft Defender for Cloud is enabled. Several users highlighted that the question doesn't explicitly state that Defender for Cloud is already enabled, making its enablement the necessary first step.

Why other options are incorrect:

* **B. Create an Azure Management group:** While management groups are important for managing Azure resources, they are not a prerequisite for enabling or using Microsoft Defender for Cloud. Creating a management group would be a separate, potentially earlier, step in overall Azure governance, but it is not directly related to deploying SecPol1 *from* Defender for Cloud.
* **C. Create an initiative:** Creating an initiative is a step *within* Microsoft Defender for Cloud to manage policies. However, you can't create an initiative unless Defender for Cloud is enabled and operational. Several users debated this point, demonstrating some ambiguity in the question.
* **D. Configure continuous export:** This refers to exporting data from Defender for Cloud, which is a separate function unrelated to deploying a security policy like SecPol1. This would be done *after* Defender for Cloud is enabled and the policy deployed.

Note: There is some disagreement among users regarding whether Defender for Cloud is assumed to be enabled. The most logical interpretation, given the phrasing "From Microsoft Defender for Cloud," is that its enablement is the implicit first step. The question's ambiguity is a source of the differing answers within the discussion.
441
[View Question](https://www.examtopics.com/discussions/databricks/view/82704-exam-az-500-topic-5-question-53-discussion/) You have an on-premises network and an Azure subscription. You have the Microsoft SQL Server instances shown in the following table.

| Server Name | Location | Version | Edition |
|---|---|---|---|
| sql1 | On-premises | 2019 | Enterprise |
| sql2 | On-premises | 2017 | Standard |
| sql3 | Azure | 2019 | Enterprise |
| sql4 | Azure | 2017 | Standard |

You plan to implement Microsoft Defender for SQL. Which SQL Server instances will be protected by Microsoft Defender for SQL?

A. sql1 and sql2 only
B. sql1, sql2, and sql3 only
C. sql1, sql2, and sql4 only
D. sql1, sql2, sql3, and sql4
D. sql1, sql2, sql3, and sql4

Microsoft Defender for SQL can protect both on-premises and Azure SQL Server instances, regardless of the version (2017 or 2019) or edition (Standard or Enterprise). The provided text and image show that all four SQL servers (sql1, sql2, sql3, and sql4) meet the criteria for protection by Microsoft Defender for SQL. Therefore, all four will be protected.

Why other options are incorrect: Options A, B, and C incorrectly exclude one or more of the SQL Server instances. There is no indication in the provided information that any of the listed servers would *not* be protected.

Note: The Microsoft Learn link in the discussion only refers to the Azure SQL instances which are supported. However, the question clearly states that there are both on-premises and Azure SQL servers. The accepted answer is based on the general understanding of Microsoft Defender for SQL's capabilities.
442
[View Question](https://www.examtopics.com/discussions/databricks/view/82841-exam-az-500-topic-5-question-3-discussion/) HOTSPOT - You have an Azure subscription that contains a web app named App1 and an Azure key vault named Vault1. You need to configure App1 to access the secrets in Vault1. How should you configure App1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0046100001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0046200001.png)
The correct configuration involves using Application Settings within the App1 web app configuration. Specifically, create a configuration setting that references the Key Vault secret using the syntax `@Microsoft.KeyVault(VaultName=yourkeyvaultname;SecretName=yoursecretname)`. This allows App1 to retrieve the secret from Vault1 without needing explicit credentials within the application code. Why other options are incorrect: The provided images and discussion do not offer alternative options to evaluate. The solution presented is the only one discussed and confirmed as correct by multiple users. Note: While the provided answer is supported by multiple users who reported it as correct on an exam, there is some discussion about the ongoing relevance of the question. The validity of this answer should be considered in the context of potential future updates or changes in Azure services.
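The Key Vault reference can also be set from the Azure CLI. A sketch, assuming a resource group `RG1` and a setting/secret name `DbPassword` (both assumptions; App1 and Vault1 come from the question):

```shell
# App1 needs an identity that is allowed to read secrets from Vault1.
az webapp identity assign --resource-group RG1 --name App1
PRINCIPAL_ID=$(az webapp identity show --resource-group RG1 --name App1 \
  --query principalId --output tsv)
az keyvault set-policy --name Vault1 --object-id "$PRINCIPAL_ID" \
  --secret-permissions get

# Add an app setting whose value is a Key Vault reference; App Service
# resolves it to the secret's value at runtime. "DbPassword" is an
# assumed setting/secret name.
az webapp config appsettings set --resource-group RG1 --name App1 \
  --settings 'DbPassword=@Microsoft.KeyVault(VaultName=Vault1;SecretName=DbPassword)'
```

The managed identity step matters: without a `get` secret permission (or the equivalent RBAC role) on Vault1, the reference will not resolve and the app sees the literal `@Microsoft.KeyVault(...)` string.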
443
**** [View Question](https://www.examtopics.com/discussions/databricks/view/82915-exam-az-500-topic-3-question-62-discussion/) You have an Azure subscription that is linked to an Azure Active Directory (Azure AD). The tenant contains the users shown in the following table. [Image of user table is missing, but the relevant information from the discussion is included below] You have an Azure key vault named Vault1 that has Purge protection set to Disable. Vault1 contains the access policies shown in the following table. [Image of access policies table is missing, but the relevant information from the discussion is included below] You create role assignments for Vault1 as shown in the following table. [Image of role assignments table is missing, but the relevant information from the discussion is included below] For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. **Statement 1:** User1 can disable purge protection on Vault1. **Statement 2:** User2 can change the network settings of Vault1. **Statement 3:** User3 can add an access policy to Vault1.
* **Statement 1: No.** User1 is a Security Administrator. While a Security Administrator has broad security-related permissions, they do *not* have the permissions to manage key vault properties, specifically enabling or disabling purge protection. The discussion highlights that even if User1 were in a group with seemingly relevant permissions, those permissions wouldn't extend to modifying purge protection settings. The ability to change purge protection is not granted through inherited permissions. The suggested answer incorrectly states that a Resource Policy Contributor or Security Administrator is required; this is not true for disabling purge protection, which is a key vault setting, not a resource policy setting.
* **Statement 2: No.** User2 is a Network Contributor with additional permissions (Select All Key, Secret & Certificate permissions, and Key Vault Reader). While a Network Contributor can manage network resources, this role does *not* grant permissions to change a Key Vault's network settings (such as firewall rules). The Key Vault Reader role similarly lacks the required permissions.
* **Statement 3: Yes.** User3 is a Key Vault Contributor and a User Access Administrator for Vault1. The Key Vault Contributor role allows management of key vaults, including adding access policies. The discussion clarifies that although the Key Vault Contributor role doesn't grant permission to assign *Azure RBAC roles*, it *does* allow adding access policies, which is the action in question. The suggested answer incorrectly claims Key Vault Contributor does not allow access to secrets, keys, or certificates; the access policy grants those permissions in this case.

There is some disagreement in the discussion regarding the exact permissions required for managing purge protection; some users believe it's impossible to change after vault creation, regardless of permissions. However, the consensus points towards the inability of the provided roles to change the setting.
444
[View Question](https://www.examtopics.com/discussions/databricks/view/83046-exam-az-500-topic-5-question-51-discussion/) DRAG DROP - You have an Azure subscription that contains an Azure SQL database named SQLDB1. SQLDB1 contains the columns shown in the following table.

| Column Name | Data Type |
|---|---|
| Email | varchar(255) |
| Birthday | date |

For the Email and Birthday columns, you implement dynamic data masking by using the default masking function. Which value will the users see in each column? To answer, drag the appropriate values to the correct columns. Each value may be used once, more than once, or not at all.

Select and Place:
* aXXXX@XXXX.com
* 1900-01-01
* XXXX
* **Email:** aXXXX@XXXX.com
* **Birthday:** 1900-01-01

The default dynamic data masking function in Azure SQL Database handles different data types differently. For `varchar` data types (like Email), the default masking replaces most characters with 'X', leaving only the first character visible in this case. The `@` symbol and the `.com` are often treated as constants. For `date` data types (like Birthday), the default masking replaces the entire date with '1900-01-01'.

There is some disagreement in the discussion regarding the masking of the email address. While the provided answer and the highly-voted answers in the discussion align with the behavior of replacing the bulk of the email address with 'X', zellck points out that 'XXXX' is used only if the string data type is fewer than 4 characters long. However, given that the table specifies `varchar(255)`, the first-character-visible behavior is considered correct in this case.

Why other options are incorrect:

* **XXXX (for Email):** While 'XXXX' is used for string data types shorter than 4 characters, the email field is longer. The default masking for longer strings doesn't simply replace everything with 'XXXX'. This is clarified in the discussion.
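A tiny shell illustration of the output shapes described above. This only simulates what a masked read would return; the real masking happens inside Azure SQL Database, and the function names here are made up for the demo:

```shell
# Simulate the default mask for an email-style varchar column:
# keep the first character, replace the rest with the fixed pattern.
mask_email() {
  first=$(printf '%s' "$1" | cut -c1)
  printf '%sXXXX@XXXX.com' "$first"
}

# Simulate the default mask for a date column: always 1900-01-01,
# regardless of the stored value.
mask_date() {
  printf '1900-01-01'
}

mask_email 'alice@contoso.com'   # -> aXXXX@XXXX.com
echo
mask_date '1985-06-14'           # -> 1900-01-01
```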
445
**** [View Question](https://www.examtopics.com/discussions/databricks/view/83159-exam-az-500-topic-5-question-8-discussion/) SIMULATION - You need to ensure that User2-1234578 has all the key permissions for KeyVault1234578. To complete this task, sign in to the Azure portal and modify the Azure resources. **
** The suggested answer of assigning the "Key Vault Secrets Officer" role is incorrect, according to the discussion. There is disagreement on the best approach. Several commenters suggest different roles, including "Key Vault Crypto Officer" (to manage keys) and "Key Vault Administrator" (to manage all permissions). A custom role may also be needed depending on specific requirements. The "Key Vault Secrets Officer" role only allows management of secrets, not keys, as pointed out by several users. The most comprehensive solution, according to some users, is to assign the "Key Vault Administrator" role, providing full access. However, this should be carefully considered due to the broad permissions it grants.

**Why other options are incorrect (or debated):**

* **Key Vault Secrets Officer:** Only manages secrets, not keys.
* **Key Vault Crypto Officer:** Manages keys, but might not cover all permissions.
* **Custom Role:** While potentially the most precise solution, it requires more effort to create and configure.
* The ambiguity around Access Policies vs. RBAC further complicates the situation.

**Note:** The discussion reveals significant disagreement among users on the correct approach. The best answer depends on the precise definition of "all key permissions" and the chosen access control mechanism (RBAC vs. Access Policies). The "Key Vault Administrator" role is a powerful option giving comprehensive control, but it might be considered overly permissive depending on security requirements.
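Both access-control mechanisms from the discussion can be sketched with the Azure CLI. The user's tenant suffix is a placeholder:

```shell
# Access-policy route: grant the user every key permission on the vault.
# The UPN's tenant suffix is a placeholder.
az keyvault set-policy --name KeyVault1234578 \
  --upn 'User2-1234578@<tenant>.onmicrosoft.com' \
  --key-permissions backup create decrypt delete encrypt get import list \
    purge recover restore sign unwrapKey update verify wrapKey

# RBAC route (if the vault uses Azure RBAC instead of access policies):
# "Key Vault Crypto Officer" covers key management; "Key Vault
# Administrator" is broader, per the debate above.
VAULT_ID=$(az keyvault show --name KeyVault1234578 --query id --output tsv)
az role assignment create --role "Key Vault Crypto Officer" \
  --assignee 'User2-1234578@<tenant>.onmicrosoft.com' --scope "$VAULT_ID"
```

Only one of the two routes applies to a given vault, depending on its permission model setting.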
446
[View Question](https://www.examtopics.com/discussions/databricks/view/83486-exam-az-500-topic-2-question-75-discussion/) SIMULATION - You need to ensure that a user named user2-12345678 can manage the properties of the virtual machines in the RG1lod12345678 resource group. The solution must use the principle of least privilege. To complete this task, sign in to the Azure portal.
The correct solution is to assign the user `user2-12345678` the "Virtual Machine Contributor" role within the `RG1lod12345678` resource group. This is achieved through the Azure portal by following these steps:

1. Sign in to the Azure portal.
2. Browse to Resource Groups.
3. Select the RG1lod12345678 resource group.
4. Select Access control (IAM).
5. Select Add > Add role assignment.
6. Select Virtual Machine Contributor (you can filter the list of available roles by typing 'virtual' in the search box), then click Next.
7. Select the +Select members option, select user2-12345678, then click the Select button.
8. Click the Review + assign button twice.

This approach adheres to the principle of least privilege because the "Virtual Machine Contributor" role grants only the necessary permissions to manage VM properties, avoiding the assignment of broader, potentially riskier roles. Other options are not provided in the original text, so their incorrectness cannot be evaluated. However, any role granting more extensive permissions than managing VM properties would violate the "least privilege" requirement. The discussion shows strong consensus that this is the correct answer, with multiple users reporting success in lab environments.
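The same assignment can be made with one Azure CLI command; scoping it to the resource group (rather than the subscription) is what keeps the grant least-privilege. The UPN's tenant suffix is a placeholder:

```shell
# Assign "Virtual Machine Contributor" scoped to the resource group only.
# The tenant suffix in the UPN is a placeholder.
az role assignment create \
  --assignee 'user2-12345678@<tenant>.onmicrosoft.com' \
  --role "Virtual Machine Contributor" \
  --resource-group RG1lod12345678
```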
447
[View Question](https://www.examtopics.com/discussions/databricks/view/83635-exam-az-500-topic-2-question-79-discussion/) You have an Azure subscription that contains an Azure Files share named share1 and a user named User1. Identity-based authentication is configured for share1. User1 attempts to access share1 from a Windows 10 device by using SMB. Which type of token will Azure Files use to authorize the request? A. OAuth 2.0 B. JSON Web Token (JWT) C. SAML D. Kerberos
D. Kerberos

Azure Files uses Kerberos for authorization when identity-based authentication is configured and access is attempted via SMB from a Windows 10 device. Kerberos is a network authentication protocol that provides strong authentication for client/server applications by using tickets to prove the identity of the client. This is the standard authentication method for SMB in Windows environments integrated with Active Directory.

Why other options are incorrect:

* **A. OAuth 2.0:** OAuth 2.0 is an authorization framework, not an authentication protocol. While it can be used with Azure services, it's not directly used for SMB authentication in this scenario.
* **B. JSON Web Token (JWT):** JWT is used for authorization in many web APIs, but it's not the primary authentication mechanism used by Azure Files with SMB and Kerberos.
* **C. SAML:** SAML (Security Assertion Markup Language) is typically used for web-based authentication and authorization, not for SMB access.

Note: The discussion section overwhelmingly supports Kerberos as the correct answer, with numerous users reporting it as the correct answer on their exams.
448
[View Question](https://www.examtopics.com/discussions/databricks/view/84092-exam-az-500-topic-4-question-49-discussion/) You have an Azure environment. You need to identify any Azure configurations and workloads that are non-compliant with ISO 27001:2013 standards. What should you use? A. Azure Sentinel B. Azure Active Directory (Azure AD) Identity Protection C. Microsoft Defender for Cloud D. Microsoft Defender for Identity
C. Microsoft Defender for Cloud

Microsoft Defender for Cloud offers a regulatory compliance dashboard that helps assess and manage compliance with various standards, including ISO 27001:2013. It provides insights into potential non-compliance issues within your Azure environment.

Why other options are incorrect:

* **A. Azure Sentinel:** Focuses primarily on threat detection and security information and event management (SIEM), not directly on regulatory compliance assessments.
* **B. Azure Active Directory (Azure AD) Identity Protection:** Concentrates on identifying and mitigating identity-related risks, not broader compliance across the entire Azure environment.
* **D. Microsoft Defender for Identity:** Focuses on identity security within on-premises and hybrid environments, not directly addressing overall Azure regulatory compliance.

Note: There is some disagreement in the discussion, with one user suggesting "Purview" as the answer. However, the overwhelming consensus among the users supports Microsoft Defender for Cloud as the most appropriate solution for identifying ISO 27001:2013 compliance issues within an Azure environment. The provided explanation reflects the majority viewpoint.
449
[View Question](https://www.examtopics.com/discussions/databricks/view/84097-exam-az-500-topic-2-question-29-discussion/) You are tasked with creating a web app named App12345678 and publishing it to `https://www.contoso.com`. You need to register App12345678 in Azure Active Directory (Azure AD) and generate a password for it. Using the Azure portal, what steps are required to complete these tasks? The image below shows part of the process. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0010300001.jpg)
The steps to register the application App12345678 in Azure AD and generate a password (client secret) are as follows:

1. **Register the Application:**
   * Sign in to the Azure portal.
   * Navigate to Azure Active Directory -> App registrations -> New registration.
   * Name the application "App12345678".
   * Select a supported account type (this determines who can use the application).
   * Under Redirect URI, select "Web" and enter the URI: `https://www.contoso.com`.
   * Click Register.
2. **Create a new application secret:**
   * Select Certificates & secrets.
   * Select Client secrets -> New client secret.
   * Provide a description and duration for the secret.
   * Select Add. The client secret value will be displayed; copy this value immediately, as it cannot be retrieved later.

**Why other options are incorrect:** There is some disagreement in the discussion regarding the manual addition of secrets. Some users suggest that the secret is automatically generated by Azure, while the provided solution suggests manual creation. The steps above reflect the solution provided, which involves manually creating the secret. The correct approach may depend on Azure AD configuration and the specific version.
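The registration and secret creation have Azure CLI equivalents. A sketch (the secret's display name and lifetime are assumptions):

```shell
# Register the app with its web redirect URI.
APP_ID=$(az ad app create --display-name App12345678 \
  --web-redirect-uris https://www.contoso.com \
  --query appId --output tsv)

# Generate a client secret (display name and lifetime are assumptions).
# The secret value is printed once in the command output; store it
# immediately, since it cannot be retrieved later.
az ad app credential reset --id "$APP_ID" \
  --display-name exam-secret --years 1
```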
450
[View Question](https://www.examtopics.com/discussions/databricks/view/84363-exam-az-500-topic-3-question-28-discussion/) You have an Azure virtual machine named VM1. From Microsoft Defender for Cloud, you get the following high-severity recommendation: `Install endpoint protection solutions on virtual machine`. You need to resolve the issue causing the high-severity recommendation. What should you do? A. Add the Microsoft Antimalware extension to VM1. B. Install Microsoft System Center Security Management Pack for Endpoint Protection on VM1. C. Add the Network Watcher Agent for Windows extension to VM1. D. Onboard VM1 to Microsoft Defender for Endpoint.
D. Onboard VM1 to Microsoft Defender for Endpoint. Microsoft Defender for Endpoint provides comprehensive endpoint protection, including advanced threat protection, which is likely what the high-severity recommendation from Defender for Cloud is requesting. Onboarding VM1 directly addresses the recommendation to install endpoint protection solutions. Option A is incorrect because while the Microsoft Antimalware extension offers some protection, it's a more basic solution than Microsoft Defender for Endpoint and may not meet the requirements for a high-severity recommendation. Option B is incorrect because Microsoft System Center is an on-premises solution and not designed for Azure VMs. Option C is incorrect because the Network Watcher Agent is for network monitoring and troubleshooting, not endpoint protection.
451
[View Question](https://www.examtopics.com/discussions/databricks/view/84535-exam-az-500-topic-2-question-76-discussion/) SIMULATION - You need to create a new Azure Active Directory (Azure AD) directory named 12345678.onmicrosoft.com. The new directory must contain a new user named [email protected] To complete this task, sign in to the Azure portal.
The provided steps in the suggested answer accurately describe how to create a new Azure AD directory and a user within it. The process involves first creating the Azure AD tenant using the Azure portal, specifying "12345678" for both the organization name and initial domain name. Then, a new user, "User1" (based on the provided solution), is created with an auto-generated password. The discussion highlights some uncertainty regarding the "Initial domain name" field. Some users suggest it should be "onmicrosoft.com," while the suggested answer uses "12345678." The provided solution explicitly uses "12345678" and multiple users have confirmed this approach works in a lab environment. Therefore, while there is some conflicting opinion in the discussion, the provided solution and its explanation are considered correct based on the available information and user testing results. The final domain name will be 12345678.onmicrosoft.com as per the question requirements. Why other options are incorrect: There aren't explicitly stated alternative options, but the discussion implies a possible incorrect option of using "onmicrosoft.com" as the initial domain name instead of "12345678". This would result in an incorrectly named directory.
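Tenant creation itself is a portal operation, but once signed in to the new directory the user can also be added from the Azure CLI. A sketch; `User1` and the password are placeholders, since the actual user name is redacted in the question text:

```shell
# Create the user inside the new 12345678.onmicrosoft.com directory.
# "User1" and the password value are placeholders.
az ad user create \
  --display-name User1 \
  --user-principal-name User1@12345678.onmicrosoft.com \
  --password '<auto-generated-or-chosen-password>'
```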
452
[View Question](https://www.examtopics.com/discussions/databricks/view/84722-exam-az-500-topic-4-question-33-discussion/) You use Microsoft Defender for Cloud for the centralized policy management of three Azure subscriptions. You use several policy definitions to manage the security of the subscriptions. You need to deploy the policy definitions as a group to all three subscriptions. Solution: You create a policy initiative and assignments that are scoped to resource groups. Does this meet the goal? A. Yes B. No
B. No. Scoping policy initiatives and assignments to resource groups only applies the policies to the resources *within* those resource groups. It does not apply the policies to the entire subscription. To deploy policy definitions as a group to all three subscriptions, the policy initiative and assignments should be scoped to a management group which encompasses all three subscriptions. The consensus in the discussion supports this answer.
453
[View Question](https://www.examtopics.com/discussions/databricks/view/84873-exam-az-500-topic-4-question-34-discussion/) You use Microsoft Defender for Cloud for the centralized policy management of three Azure subscriptions. You use several policy definitions to manage the security of the subscriptions. You need to deploy the policy definitions as a group to all three subscriptions. Solution: You create a policy definition and assignments that are scoped to resource groups. Does this meet the goal? A. Yes B. No
B. No. Creating policy definitions and assignments scoped to resource groups will *not* deploy the policy definitions to all three subscriptions. Resource groups are containers for resources within a subscription, not across subscriptions. To deploy the policy definitions to all three subscriptions, you would need to scope the policy assignments to the management group that contains all three subscriptions, or individually assign the policies to each subscription. The consensus in the discussion supports answer B. WHY OTHER OPTIONS ARE INCORRECT: A. Yes: This is incorrect because scoping policy definitions to resource groups only applies the policies within those specific resource groups, not across entire subscriptions, let alone multiple subscriptions. Therefore, the goal of deploying to all three subscriptions is not met.
454
[View Question](https://www.examtopics.com/discussions/databricks/view/84902-exam-az-500-topic-5-question-32-discussion/) You need to prevent HTTP connections to the rg1lod1234578n1 Azure Storage account. To complete this task, sign in to the Azure portal. What steps should you take? The image below shows part of the Azure portal interface. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0050700001.jpg)
To prevent HTTP connections to the rg1lod1234578n1 Azure Storage account, you should enable "Secure transfer required" in the Azure portal. 1. In the Azure portal, locate and select the Azure Storage account named `rg1lod1234578n1`. 2. Navigate to **Settings**, then **Configuration**. 3. Find the setting "Secure transfer required" and set it to **Enabled**. This setting forces all requests to the storage account to use HTTPS, preventing unencrypted HTTP connections. There is some disagreement in the discussion regarding the default state of "Secure transfer required". Some users indicate it's enabled by default upon storage account creation, while others imply it's disabled. The provided solution focuses on enabling the setting to ensure secure transfer regardless of its initial state.
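As a conceptual illustration only (not the storage service's actual implementation), the effect of "Secure transfer required" can be modeled as a predicate that rejects plain-HTTP requests; the account URL below is a placeholder:

```python
def request_allowed(url, secure_transfer_required=True):
    """Toy model of 'Secure transfer required': reject plain-HTTP requests.

    The real service also rejects other insecure channels (e.g. SMB without
    encryption); this sketch covers only the HTTP/HTTPS distinction.
    """
    if secure_transfer_required and url.lower().startswith("http://"):
        return False
    return True

# With the setting enabled, only HTTPS requests get through.
assert request_allowed("https://rg1lod1234578n1.blob.core.windows.net/c/blob")
assert not request_allowed("http://rg1lod1234578n1.blob.core.windows.net/c/blob")
# With the setting disabled, plain HTTP is accepted.
assert request_allowed("http://rg1lod1234578n1.blob.core.windows.net/c/blob",
                       secure_transfer_required=False)
```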
455
[View Question](https://www.examtopics.com/discussions/databricks/view/85209-exam-az-500-topic-4-question-32-discussion/) You use Microsoft Defender for Cloud for the centralized policy management of three Azure subscriptions. You use several policy definitions to manage the security of the subscriptions. You need to deploy the policy definitions as a group to all three subscriptions. Solution: You create an initiative and an assignment that is scoped to a management group. Does this meet the goal? A. Yes B. No
A. Yes. Creating an initiative and assigning it to a management group that encompasses the three subscriptions will successfully deploy the grouped policy definitions to all three subscriptions. An initiative allows for the grouping of multiple policy definitions, streamlining management. Assigning it at the management group level applies the policies to all child subscriptions (unless explicitly overridden at a lower level). WHY OTHER OPTIONS ARE INCORRECT: B. No: This is incorrect because assigning a policy initiative to a management group is a valid and effective method for applying policies to multiple subscriptions. NOTE: While the answer is definitively A, there's a slight nuance reflected in the discussion. ITFranz mentions that although a policy can be assigned at the management group level, only resources at the subscription or resource group level are evaluated. This doesn't negate the effectiveness of using a management group for assignment; it simply clarifies that the policy's effects are evaluated at the subscription/resource group level, not the management group level itself. The core functionality of applying policies to multiple subscriptions via a management group assignment remains correct.
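The scoping behavior that distinguishes this question from the two resource-group variants above can be sketched as a toy hierarchy (every name here is hypothetical, not from the question): an assignment reaches its own scope plus everything nested beneath it, which is why a management-group assignment covers all child subscriptions while a resource-group assignment covers none.

```python
# Toy model of Azure scope nesting; mg1, sub1..sub3, and rg1..rg3 are
# hypothetical names used only for illustration.
SCOPE_TREE = {
    "mg1":  ["sub1", "sub2", "sub3"],  # management group -> subscriptions
    "sub1": ["rg1"],                   # subscription -> resource groups
    "sub2": ["rg2"],
    "sub3": ["rg3"],
}

def scopes_covered(assignment_scope):
    """Return every scope an assignment reaches: itself plus all descendants."""
    covered = {assignment_scope}
    for child in SCOPE_TREE.get(assignment_scope, []):
        covered |= scopes_covered(child)
    return covered

# Assigned at the management group, an initiative reaches all three subscriptions.
assert {"sub1", "sub2", "sub3"} <= scopes_covered("mg1")
# Assigned at a resource group, it reaches no subscription at all.
assert not {"sub1", "sub2", "sub3"} & scopes_covered("rg1")
```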
456
[View Question](https://www.examtopics.com/discussions/databricks/view/85393-exam-az-500-topic-4-question-80-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains a user named User1. You plan to enable passwordless authentication for the tenant. You need to ensure that User1 can enable the combined registration experience. The solution must use the principle of least privilege. Which role should you assign to User1? A. Security administrator B. Privileged role administrator C. Authentication administrator D. Global administrator
The discussion reveals conflicting information and acknowledges the existence of both old and new methods for enabling passwordless authentication. The original suggested answer, D. Global administrator, is deemed correct based on the *old* method. However, the discussion highlights that the *new* method requires at least an Authentication Policy Administrator role. Therefore, using the principle of least privilege, **Authentication administrator (C)** would be the more appropriate answer for the *new* method. The question's ambiguity about which method is being referenced necessitates this nuanced response. Options A and B are incorrect because they do not directly manage authentication policies.
457
**** [View Question](https://www.examtopics.com/discussions/databricks/view/85522-exam-az-500-topic-4-question-84-discussion/) You have an Azure subscription that contains a blob container named cont1. Cont1 has the access policies shown in the following exhibit. *(Note: The original question included images showing access policies; this text is a placeholder as the images are not directly accessible)*. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Box 1: Maximum number of stored access policies per blob container: [Dropdown Menu] Box 2: Number of Legal Hold policies supported by a blob version: [Dropdown Menu] **
** Box 1: 5. Box 2: 1. **Explanation:** * **Box 1:** The documentation and user discussion consistently state that a container can have a maximum of 5 stored access policies. Attempting to set more than five results in a 400 error. * **Box 2:** A blob version supports only one legal hold policy at a time. While a time-based retention policy is also mentioned, the question specifically asks about *legal hold* policies. The documentation supports this, indicating that a blob version can have *one* legal hold and *one* immutability policy. There is some discussion about this aspect, and some users mention only one of these can be applied; however, the documentation and the provided answer support one legal hold and one immutability policy simultaneously. **Why other options are incorrect:** Any answer other than 5 for Box 1 and 1 for Box 2 contradicts the provided Microsoft documentation and the consensus among users in the discussion. The discussion highlights some confusion regarding time-based retention and legal holds, but the final answer clarifies the distinction, as per the provided official documentation.
458
[View Question](https://www.examtopics.com/discussions/databricks/view/85575-exam-az-500-topic-3-question-21-discussion/) You need to configure a Microsoft SQL server named Web1234578 to only accept connections from the Subnet0 subnet on the VNET01 virtual network. To complete this task, sign in to the Azure portal. How should you configure the SQL Server?
To configure the Microsoft SQL Server named Web1234578 to only accept connections from the Subnet0 subnet on the VNET01 virtual network, you should perform the following steps: 1. **Access the SQL Server:** In the Azure portal, search for and select the SQL Server named "web1234578". 2. **Navigate to Firewall and Virtual Networks:** In the SQL Server's properties, locate and click on "Firewalls and virtual networks". Note that a user mentioned this setting might now be located under "SQL SERVER > Networking". 3. **Add a Virtual Network Rule:** Click "Add existing" in the Virtual networks section. This opens a window to create a new rule. 4. **Configure the Rule:** Name the rule (e.g., "Allow_VNET01-Subnet0"). Select "VNET01" as the Virtual network and "Subnet0" as the Subnet name. 5. **Save the Rule:** Click "OK" to save the new rule. 6. **Enable Azure Services Access:** Ensure that "Allow access to Azure services" is set to "On". This configuration restricts access to the SQL server to only clients within the specified subnet, enhancing security. Note: There is some disagreement on the exact location of the necessary settings within the Azure portal. While the suggested answer points to "Firewalls and virtual networks," a user comment indicates it may now be under "SQL SERVER > Networking". The answer reflects both possibilities to account for this variation.
459
[View Question](https://www.examtopics.com/discussions/databricks/view/85925-exam-az-500-topic-5-question-52-discussion/) You have a hybrid Azure Active Directory (Azure AD) tenant named contoso.com that contains a user named User1 and the servers shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0053200001.png) The tenant is linked to an Azure subscription that contains a storage account named storage1. The storage1 account contains a file share named share1. User1 is assigned the Storage File Data SMB Share Contributor role for storage1. The Security protocol settings for the file shares of storage1 are configured as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0053300001.jpg) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04258/0053400001.jpg)
The correct answer is No, Yes, No.

* **Statement 1: User1 can access share1 using only their Azure AD credentials.** No. Although User1 holds the required role, the exhibited security settings disable NTLMv2 and leave only Kerberos enabled, so signing in with Azure AD credentials alone, without Kerberos authentication, is not possible.
* **Statement 2: User1 can access share1 using Kerberos authentication.** Yes. Kerberos authentication is enabled and User1 has the necessary permissions (the Storage File Data SMB Share Contributor role).
* **Statement 3: User1 can access share1 using only their storage account access keys.** No. The configuration disables NTLMv1 and NTLMv2, the mechanisms SMB uses for shared-key authentication, so key-based access is blocked regardless of User1's role assignment.

There is a consensus in the discussion that the answer is No, Yes, No, and the reasoning in the highly voted comments supports this conclusion.
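The gating logic can be sketched as a toy check (the exact exhibit values are assumed: Kerberos enabled, NTLMv1/NTLMv2 disabled; this is an illustration, not the SMB protocol itself). Access requires both an enabled authentication mechanism and an RBAC role on the share:

```python
# Assumed from the exhibit: only Kerberos remains enabled for SMB access.
ENABLED_AUTH = {"Kerberos"}

def can_connect(auth_mechanism, has_share_role):
    """Toy model: access needs an enabled auth mechanism AND a share role."""
    return has_share_role and auth_mechanism in ENABLED_AUTH

assert can_connect("Kerberos", has_share_role=True)    # statement 2: Yes
assert not can_connect("NTLMv2", has_share_role=True)  # key-based SMB auth: No
assert not can_connect("Kerberos", has_share_role=False)  # role is still required
```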
460
**** [View Question](https://www.examtopics.com/discussions/databricks/view/8683-exam-az-500-topic-2-question-16-discussion/) You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy Azure Active Directory Domain Services (Azure AD DS) to the Azure subscription. Does this meet the goal? A. Yes B. No **
** A. Yes **Explanation:** Deploying Azure Active Directory Domain Services (Azure AD DS) to the Azure subscription does meet the goal of allowing users to authenticate to the HDInsight cluster using their on-premises Active Directory credentials. Azure AD DS provides a managed domain service in Azure that synchronizes with your on-premises Active Directory. This allows HDInsight to authenticate users against the synchronized directory, effectively bridging the on-premises and cloud environments. The discussion highlights that this is the supported method for HDInsight cluster integration with Active Directory. **Why other options are incorrect:** B. No: This is incorrect based on the provided documentation and the consensus in the discussion. While there's an initial dissenting opinion, the majority of the discussion points to Azure AD DS as the correct and supported solution. **Note:** There is some initial disagreement in the discussion regarding the solution's validity. However, the subsequent comments and provided links strongly support the answer "Yes," highlighting that using Azure AD DS is the officially supported method for achieving the described authentication scenario.
461
[View Question](https://www.examtopics.com/discussions/databricks/view/87250-exam-az-500-topic-3-question-20-discussion/) SIMULATION - You need to ensure that a user named Danny1234578 can sign in to any SQL database on a Microsoft SQL server named web1234578 by using SQL Server Management Studio (SSMS) and Azure Active Directory (Azure AD) credentials. To complete this task, sign in to the Azure portal.
To allow Danny1234578 to sign in to any SQL database on the web1234578 server using Azure AD credentials, you must add Danny1234578 as an Azure AD administrator for that SQL server. This is accomplished through the Azure portal by navigating to the SQL server, accessing its Active Directory Admin settings, and adding Danny1234578 as an administrator. The provided steps (1-6) in the suggested answer outline this process accurately. Note that while some users have suggested additional steps (such as ensuring "Microsoft Entra authentication only" is unticked), the core requirement is granting admin access. There is also some debate on whether admin privileges are strictly necessary; the question wording "sign in to any SQL database" doesn't explicitly state the level of access required, only the method of authentication. Why other options are incorrect: There is no discussion of other options to achieve this specific outcome. The discussion focuses on clarifying the steps involved in adding the user as an Azure AD admin, addressing a user's concern about unnecessary admin privileges, and pointing out a potential checkbox setting to consider. The core solution remains the same: add Danny1234578 as an Azure AD administrator for the server.
462
**** [View Question](https://www.examtopics.com/discussions/databricks/view/87328-exam-az-500-topic-4-question-54-discussion/) SIMULATION - You need to ensure that web1234578 is protected from malware by using Microsoft Antimalware for Virtual Machines and is scanned every Friday at 01:00. To complete this task, sign in to the Azure portal. **
** To protect the virtual machine "web1234578" from malware using Microsoft Antimalware and schedule a scan for every Friday at 01:00, follow these steps: 1. **Access the Azure Portal:** Log in to the Azure portal. 2. **Locate the Virtual Machine:** Search for and select "Virtual Machines," then choose "web1234578". Note that the suggested answer references "web11597200" which is likely a typo. The question refers to "web1234578". 3. **Access Extensions/Extensions + Applications:** Navigate to the VM's settings. The location of extensions may vary, possibly under "Extensions" or "Extensions + Applications," as noted in the discussion. 4. **Add the Microsoft Antimalware Extension:** Add the "Microsoft Antimalware" extension. 5. **Configure the Scan Schedule:** In the extension's settings, set the "Scan day" to "Friday" and the "Scan time" to "60" (representing 01:00 AM). 6. **Save and Install:** Save the configuration and install the extension. **Why other options are incorrect (N/A):** There were no other options provided in the original prompt. The discussion highlights a potential discrepancy in the location of the extensions within the Azure portal, which has been addressed in the answer. **Note:** The discussion reveals some uncertainty regarding the exact location of the "Extensions" setting within the Azure portal. The answer reflects this uncertainty by mentioning both potential locations ("Extensions" and "Extensions + Applications").
463
[View Question](https://www.examtopics.com/discussions/databricks/view/9101-exam-az-500-topic-2-question-23-discussion/) You plan to use Azure Resource Manager templates to perform multiple deployments of identically configured Azure virtual machines. The password for the administrator account of each deployment is stored as a secret in different Azure key vaults. You need to identify a method to dynamically construct a resource ID that will designate the key vault containing the appropriate secret during each deployment. The name of the key vault and the name of the secret will be provided as inline parameters. What should you use to construct the resource ID? A. a key vault access policy B. a linked template C. a parameters file D. an automation account
The correct answer is B, a linked template. A linked template allows for the dynamic construction of a resource ID, incorporating inline parameters for the key vault name and secret name. This enables retrieving credentials from different key vaults for each deployment. The provided Microsoft documentation links support this answer. Why other options are incorrect: * **A. a key vault access policy:** Access policies control permissions, not the dynamic selection of the key vault itself. * **C. a parameters file:** While parameters files provide input values, they do not support template expressions necessary for dynamic resource ID construction. The discussion highlights this limitation. * **D. an automation account:** Automation accounts are used for automating tasks, not for dynamically constructing resource IDs within ARM templates. Note: The discussion shows strong consensus on answer B, although some explanations of why other options are incorrect may vary slightly in detail.
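The resource ID that the linked template constructs from its inline parameters follows the standard ARM resource ID format; a minimal sketch of that string construction, with placeholder subscription, resource group, and vault names:

```python
def key_vault_resource_id(subscription_id, resource_group, vault_name):
    """Build a Key Vault resource ID in the standard ARM format.

    This mirrors what the linked template's resourceId() expression produces;
    all argument values below are placeholders, not from the question.
    """
    return ("/subscriptions/{}/resourceGroups/{}"
            "/providers/Microsoft.KeyVault/vaults/{}").format(
                subscription_id, resource_group, vault_name)

# Each deployment passes a different vault name, selecting a different vault.
rid = key_vault_resource_id("00000000-0000-0000-0000-000000000000",
                            "rg-deploy1", "kv-deploy1")
assert rid.endswith("/providers/Microsoft.KeyVault/vaults/kv-deploy1")
```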
464
[View Question](https://www.examtopics.com/discussions/databricks/view/9223-exam-az-500-topic-5-question-19-discussion/) You have an Azure subscription that contains an Azure key vault named Vault1. In Vault1, you create a secret named Secret1. An application developer registers an application in Azure Active Directory (Azure AD). You need to ensure that the application can use Secret1. What should you do? A. In Azure AD, create a role. B. In Azure Key Vault, create a key. C. In Azure Key Vault, create an access policy. D. In Azure AD, enable Azure AD Application Proxy.
C. In Azure Key Vault, create an access policy. To allow an application to access a secret in Azure Key Vault, you must create an access policy within the Key Vault itself. This policy grants specific permissions (like get, set, delete, etc.) to the application's identity (its service principal in Azure AD). The application's identity is then linked to the access policy, giving it the necessary authorization. Why other options are incorrect: * **A. In Azure AD, create a role:** While Azure AD roles manage access to Azure resources, they don't directly grant access to secrets within a Key Vault. Key Vault access is managed through its own access policies. * **B. In Azure Key Vault, create a key:** Keys and secrets are different entities in Azure Key Vault. Creating a key doesn't grant access to a secret. * **D. In Azure AD, enable Azure AD Application Proxy:** Application Proxy is used to publish on-premises applications to the cloud. It's not relevant to accessing secrets within an Azure Key Vault. Note: The provided discussion only confirms the correct answer. There's no conflicting opinion presented.
465
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94606-exam-az-500-topic-4-question-85-discussion/) You have an Azure subscription that contains a resource group named RG1 and the network security groups (NSGs) shown in the following table. [Image showing a table with NSG names and Flow logs status. The text within the image is not visible in this context.] You create the Azure policy shown in the following exhibit. [Image showing an Azure policy. The text within the image is not visible in this context, but the discussion indicates it's an audit policy with an exception for NSG1 and the effect is "Audit".] You assign the policy to RG1. What will occur if you assign the policy to NSG1 and NSG2? A. Flow logs will be enabled for NSG2 only. B. Flow logs will be disabled for NSG1 and NSG2. C. Flow logs will be enabled for NSG1 and NSG2. D. Flow logs will be enabled for NSG1 only. **
** B. Flow logs will be disabled for NSG1 and NSG2. The Azure policy described is an audit policy. Audit policies only *report* on non-compliance; they do not automatically take action to remediate the issue (e.g., enable flow logs). Because the policy is in audit mode and the flow logs are already disabled on both NSG1 and NSG2, assigning the policy will have no effect on their state. The policy will simply flag the non-compliance, but won't change the status. The exception for NSG1 within the policy is irrelevant in this scenario since the policy itself doesn't trigger remediation. **Why other options are incorrect:** * **A:** This is incorrect because audit policies do not enable flow logs; they only audit the current state. A "DeployIfNotExists" policy would be needed to enable them. * **C:** This is incorrect for the same reason as A. Audit policies don't actively modify the resource. * **D:** This is incorrect because the policy, being an audit policy, won't change anything about the flow logs configuration, and the exception for NSG1 in the policy is immaterial because the policy doesn't trigger changes. **Note:** The discussion shows consensus on answer B, although some users suggest more precise wording in the options to avoid ambiguity.
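The difference between the Audit and DeployIfNotExists effects can be sketched as a toy evaluation step (a simplification for illustration; real Azure Policy evaluation is far richer than this):

```python
def apply_policy(effect, flow_logs_enabled):
    """Toy policy engine: 'audit' only reports compliance; 'deployIfNotExists'
    would also remediate. Returns (resulting_state, was_compliant)."""
    compliant = flow_logs_enabled
    if effect.lower() == "deployifnotexists" and not compliant:
        flow_logs_enabled = True  # remediation would enable flow logs
    return flow_logs_enabled, compliant

# Audit flags the non-compliance but leaves flow logs disabled.
state, compliant = apply_policy("Audit", flow_logs_enabled=False)
assert state is False and compliant is False
# Only a remediating effect would actually change the resource.
state, _ = apply_policy("DeployIfNotExists", flow_logs_enabled=False)
assert state is True
```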
466
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94607-exam-az-500-topic-4-question-87-discussion/) You have a management group named MG1 that contains an Azure subscription and a resource group named RG1. RG1 contains a virtual machine named VM1. You have the custom Azure roles shown in the following table. ![Image](https://img.examtopics.com/az-500/image572.png) *(Image shows a table defining Role1 and Role2)* The permissions for Role1 are shown in the following role definition file. ![Image](https://img.examtopics.com/az-500/image573.png) *(Image shows Role1 definition including "NotActions")* The permissions for Role2 are shown in the following role definition file. ![Image](https://img.examtopics.com/az-500/image574.png) *(Image shows Role2 definition)* You assign the roles to the users shown in the following table. ![Image](https://img.examtopics.com/az-500/image575.png) *(Image shows User1 assigned Role1 and User2 assigned Role1 and Role2)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image576.png) *(Image shows three statements: Statement 1: User1 can read VM1. Statement 2: User2 can read VM1. Statement 3: User2 can write VM1.)* **
** No, Yes, Yes * **Statement 1 (User1 can read VM1): No.** User1 only has Role1 assigned, which explicitly excludes the "read" action on virtual machines through the `NotActions` property. * **Statement 2 (User2 can read VM1): Yes.** User2 has both Role1 and Role2 assigned. While Role1 excludes the "read" action, Role2 includes it. The consensus in the discussion is that `NotActions` are not "deny" rules, but rather a way to specify allowed actions by exclusion. Therefore, the more permissive Role2 grants the "read" permission. * **Statement 3 (User2 can write VM1): Yes.** User2 has Role2 assigned, which explicitly grants the "write" action on virtual machines. **Why other options are incorrect:** The discussion shows some initial disagreement on the answer, but the majority opinion, supported by references to Microsoft documentation, concludes that `NotActions` does not function as a deny rule; instead, it refines the permissions granted by explicitly excluding certain actions. This is why a user assigned multiple roles (one including and one excluding an action) will have access if any role grants the action.
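The Actions/NotActions arithmetic behind these answers can be sketched as set operations. The role contents below are a hypothetical simplification of the exhibited definitions, but the evaluation rule matches the one the discussion cites: NotActions subtracts within its own role, and permissions are then unioned across a user's roles, so it is not a deny.

```python
READ = "Microsoft.Compute/virtualMachines/read"
WRITE = "Microsoft.Compute/virtualMachines/write"

# Hypothetical simplification: Role1 allows VM actions but lists read under
# NotActions; Role2 allows read and write with no exclusions.
ROLE1 = ({READ, WRITE}, {READ})   # (Actions, NotActions)
ROLE2 = ({READ, WRITE}, set())

def effective_actions(assigned_roles):
    """Union over roles of (Actions - NotActions)."""
    allowed = set()
    for actions, not_actions in assigned_roles:
        allowed |= actions - not_actions
    return allowed

assert READ not in effective_actions([ROLE1])     # User1: cannot read VM1
assert READ in effective_actions([ROLE1, ROLE2])  # User2: Role2 restores read
assert WRITE in effective_actions([ROLE1, ROLE2]) # User2: can write VM1
```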
467
[View Question](https://www.examtopics.com/discussions/databricks/view/94608-exam-az-500-topic-4-question-88-discussion/) You have an Azure Active Directory (Azure AD) tenant. You need to prevent nonprivileged Azure AD users from creating service principals in Azure AD. What should you do in the Azure Active Directory admin center of the tenant? A. From the User settings blade, set Users can register applications to No. B. From the Properties blade, set Access management for Azure resources to No. C. From the User settings blade, set Restrict access to Azure AD administration portal to Yes. D. From the Properties blade, set Enable Security defaults to Yes.
A. From the User settings blade, set Users can register applications to No. This setting directly controls whether users can register applications, which includes creating service principals. Setting this to "No" prevents non-privileged users from performing this action. Why other options are incorrect: * **B. From the Properties blade, set Access management for Azure resources to No.** This setting controls access to Azure resources, not the ability to register applications within Azure AD itself. * **C. From the User settings blade, set Restrict access to Azure AD administration portal to Yes.** This restricts access to the Azure AD admin portal, but doesn't prevent application registration if users have other means of accessing the necessary functionality. * **D. From the Properties blade, set Enable Security defaults to Yes.** While enabling security defaults enhances overall security, it doesn't specifically address the control over application registration by non-privileged users. It offers broader security controls but not the precise control needed in this scenario. Note: The provided discussion only supports option A as the correct answer. There is no dissenting opinion presented.
468
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94643-exam-az-500-topic-4-question-91-discussion/) You have an Azure subscription that contains a managed identity named Identity1 and the Azure key vaults shown in the following table. ![Image](https://img.examtopics.com/az-500/image587.png) KeyVault1 contains an access policy that grants Identity1 the following key permissions: • Get • List • Wrap • Unwrap You need to provide Identity1 with the same permissions for KeyVault2. The solution must use the principle of least privilege. Which role should you assign to Identity1? A. Key Vault Crypto Service Encryption User B. Key Vault Crypto User C. Key Vault Reader D. Key Vault Crypto Officer **
** A. Key Vault Crypto Service Encryption User The question specifies that the solution must adhere to the principle of least privilege. The `Key Vault Crypto Service Encryption User` role provides only the necessary permissions (Get, List, Wrap, Unwrap) to access and manipulate keys within the Key Vault, fulfilling the requirement. Other roles offer broader permissions that exceed the necessary actions, violating the least privilege principle. **Why other options are incorrect:** * **B. Key Vault Crypto User:** This role grants more permissions than required (e.g., sign, update, backup), exceeding the principle of least privilege. The discussion highlights this as a key reason why it is not the correct answer. * **C. Key Vault Reader:** While this role allows reading key properties, it doesn't grant the required "Wrap" and "Unwrap" permissions. * **D. Key Vault Crypto Officer:** This role provides extensive administrative control over Key Vault, far exceeding what's needed for the specified task. **Note:** The discussion shows some disagreement about the correct answer. Some argue for option B, while others strongly advocate for option A based on the principle of least privilege. The provided answer reflects the consensus and the most accurate interpretation of the principle of least privilege in this context.
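The least-privilege selection can be illustrated with simplified, hypothetical permission sets for the candidate roles (real built-in role definitions list full data-action strings and differ in detail): among the roles that cover the requirement, pick the one carrying the fewest extra permissions.

```python
# Hypothetical, simplified permission sets; illustrative only.
ROLES = {
    "Key Vault Crypto Service Encryption User":
        {"get", "list", "wrapKey", "unwrapKey"},
    "Key Vault Crypto User":
        {"get", "list", "wrapKey", "unwrapKey", "sign", "verify",
         "encrypt", "decrypt"},
    "Key Vault Crypto Officer":
        {"get", "list", "wrapKey", "unwrapKey", "sign", "verify",
         "encrypt", "decrypt", "create", "delete", "backup", "restore"},
}
REQUIRED = {"get", "list", "wrapKey", "unwrapKey"}

def least_privilege_role(roles, required):
    """Among roles covering the requirement, pick the one with fewest extras."""
    eligible = {name: perms for name, perms in roles.items() if required <= perms}
    return min(eligible, key=lambda name: len(eligible[name] - required))

assert least_privilege_role(ROLES, REQUIRED) == \
    "Key Vault Crypto Service Encryption User"
```

Key Vault Reader is omitted from the model because it does not cover "wrapKey"/"unwrapKey" at all and so is never eligible.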
469
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94659-exam-az-500-topic-5-question-68-discussion/) HOTSPOT You have a Microsoft Sentinel deployment. You need to connect a third-party security solution to the deployment. The third-party solution will send Common Event Format (CEF)-formatted messages. What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image619.png) *(Image content not provided, but the question indicates a multiple choice selection)* **
** A Linux server acting as a Syslog forwarder and the Azure Monitor agent. **Explanation:** The provided discussion and links clearly state that to ingest CEF logs into Microsoft Sentinel from devices where the Log Analytics agent cannot be directly installed, a Linux server configured as a Syslog forwarder is necessary. This server collects CEF logs and forwards them to the Microsoft Sentinel workspace via the Azure Monitor agent which is installed on the Linux server. The Azure Monitor agent is responsible for the connection to and ingestion by Sentinel. **Why other options are incorrect (if applicable):** The question is a multiple choice hotspot question, but only the correct answer and its explanation are provided in the source material. Without seeing the other choices offered in the original hotspot question, it is impossible to definitively explain why they are incorrect. However, any option lacking both a Linux server acting as a Syslog forwarder and the Azure Monitor agent would be incomplete and incorrect based on the provided information. **Note:** The provided solution is based solely on the information within the given context. There might be alternative approaches to connecting third-party security solutions that send CEF-formatted messages to Microsoft Sentinel, but these alternatives are not discussed in the provided source material.
470
[View Question](https://www.examtopics.com/discussions/databricks/view/94674-exam-az-500-topic-5-question-56-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains a group named Group1. You need to ensure that the members of Group1 sign in by using passwordless authentication. What should you do? A. Configure the sign-in risk policy. B. Create a Conditional Access policy. C. Configure the Microsoft Authenticator authentication method policy. D. Configure the certificate-based authentication (CBA) policy.
C. Configure the Microsoft Authenticator authentication method policy. The Microsoft Authenticator app allows for passwordless authentication using methods like push notifications or one-time passcodes. This directly addresses the requirement of enabling passwordless sign-in for members of Group1. Why other options are incorrect: * **A. Configure the sign-in risk policy:** Sign-in risk policies manage risk levels based on sign-in attempts, not authentication methods. They don't directly enable passwordless logins. * **B. Create a Conditional Access policy:** Conditional Access policies control access based on various conditions (location, device, etc.). While they can be *part* of a passwordless strategy (e.g., requiring MFA, which could include the Authenticator app), they don't directly *enable* passwordless authentication. * **D. Configure the certificate-based authentication (CBA) policy:** CBA is a different authentication method that uses digital certificates, not a passwordless approach using a mobile app like Microsoft Authenticator. Note: The provided discussion only offers the answer "C" without detailed explanation. There's no dissenting opinion presented.
471
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94675-exam-az-500-topic-5-question-57-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image593.png) You need to configure storage1 to regenerate keys automatically every 90 days. Which cmdlet should you run? A. Add-AzKeyVaultflanagedStorageAccount B. Set-AzStorageAccountManagementPolicy C. Set-AzStorageAccount D. Add-AzStorageAccountManagementPolicyAction **
** A. `Add-AzKeyVaultManagedStorageAccount` (note the corrected spelling).

The correct answer is A because this cmdlet registers a storage account with Azure Key Vault for managed key handling, which is where automatic key regeneration (e.g., every 90 days) is configured. The PowerShell example in the discussion further confirms this.

**Why other options are incorrect:**

* **B. `Set-AzStorageAccountManagementPolicy`:** Manages storage account lifecycle policies (such as tiering and deletion), not key rotation.
* **C. `Set-AzStorageAccount`:** Modifies general storage account settings, not key rotation.
* **D. `Add-AzStorageAccountManagementPolicyAction`:** Not a valid cmdlet.

**Note:** The discussion shows some disagreement among users regarding the correct answer. However, the most recent and upvoted responses, along with documentation linked by randy0077 (`https://learn.microsoft.com/en-us/azure/key-vault/secrets/overview-storage-keys-powershell#:~:text=Enable%20key%20regeneration,the%20active%20key.`), strongly support option A. The misspelling of option A in the question itself ("flanaged" for "Managed") is also noted in the answer.
472
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94677-exam-az-500-topic-2-question-80-discussion/) You have an Azure subscription. You plan to create two custom roles named Role1 and Role2. The custom roles will be used to perform the following tasks: • Members of Role1 will manage application security groups. • Members of Role2 will manage Azure Bastion. You need to add permissions to the custom roles. Which resource provider should you use for each role? To answer, drag the appropriate resource providers to the correct roles. Each resource provider may be used once, more than once, or not at all. **
** The correct resource provider for both Role1 (managing application security groups) and Role2 (managing Azure Bastion) is **Microsoft.Network**.

* **Explanation:** Application security groups and Azure Bastion are both network-related services within Azure. The Microsoft.Network resource provider encompasses the management of various networking components, including both of these. The links in the discussion confirm that the `Microsoft.Network` resource provider is responsible for managing both application security groups and Azure Bastion hosts.

**Why other options are incorrect:** No other options are explicitly presented in the question or suggested in the discussion; the question focuses solely on identifying the correct resource provider.

**Note:** The discussion shows a consensus among users that `Microsoft.Network` is the correct answer.
473
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94678-exam-az-500-topic-2-question-85-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains 500 users and an administrative unit named AU1. From the Azure Active Directory admin center, you plan to add the users to AU1 by using Bulk add members. You need to create and upload a file for the bulk add. What should you include in the file? A. only the display name of each user B. only the user principal name (UPN) of each user C. only the user principal name (UPN) and display name of each user D. only the user principal name (UPN) and object identifier of each user E. only the object identifier of each user **
** B. only the user principal name (UPN) of each user

**Explanation:** The most upvoted and best-supported answer in the discussion points to the user principal name (UPN) as the information to include in the bulk add file when adding existing users to an administrative unit. While some comments suggest that the object ID also works, the consensus and the linked Microsoft documentation primarily support the UPN as the sufficient and recommended method.

**Why other options are incorrect:**

* **A, C, D, E:** These options include information other than, or in addition to, the UPN. The extra fields are not necessarily invalid, but they are not *required*: the UPN alone uniquely identifies a user within the Azure AD tenant and is sufficient for the bulk add feature.

**Note:** There is some disagreement in the discussion about whether the object ID (option E, and partially D) can also be used. The most popular response and several cited Microsoft links point to the UPN as the primary and sufficient identifier for this task, so the answer prioritizes the most strongly supported and documented option.
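As a rough illustration of what the uploaded file contains, the sketch below builds a bulk-add CSV in memory. The version row and header text are assumptions recalled from the portal's downloadable template, and the UPNs are placeholders; always start from the template downloaded from the Azure AD admin center itself.

```python
import csv
import io

# Hypothetical sketch of the bulk-add-members CSV. The version row and the
# single UPN/object-ID column header mirror (from memory) the template that
# the "Bulk add members" page offers for download; verify against the real
# template before uploading.
users = [
    "user1@contoso.com",  # placeholder UPNs, not from the question
    "user2@contoso.com",
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["version:v1.0"])
writer.writerow(["Member object ID or user principal name [memberObjectIdOrUpn]"])
for upn in users:
    writer.writerow([upn])  # one user per row; the UPN alone is sufficient

content = buf.getvalue()
print(content)
```

The point the sketch makes is the one from the answer: each data row needs only the UPN (or object ID), not display names or extra columns.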
474
[View Question](https://www.examtopics.com/discussions/databricks/view/94679-exam-az-500-topic-2-question-87-discussion/) You have an Azure subscription that contains a user named User1. You need to ensure that User1 can create managed identities. The solution must use the principle of least privilege. What should you do? A. Create a management group and assign User1 the Hybrid Identity Administrator Azure Active Directory (Azure AD) role. B. Create a management group and assign User1 the Managed Identity Operator role. C. Create a resource group and assign User1 to the Managed Identity Contributor role. D. Create an organizational unit (OU) and assign User1 the User administrator Azure Active Directory (Azure AD) role.
C. Create a resource group and assign User1 to the Managed Identity Contributor role.

The Managed Identity Contributor role allows the creation, reading, updating, and deleting of user-assigned managed identities. This directly addresses the requirement and adheres to the principle of least privilege by granting only the necessary permissions.

Why other options are incorrect:

* **A. Create a management group and assign User1 the Hybrid Identity Administrator Azure Active Directory (Azure AD) role:** This role manages hybrid identity features (Azure AD Connect, etc.), not the creation of managed identities, and grants excessive permissions.
* **B. Create a management group and assign User1 the Managed Identity Operator role:** This role allows reading and assigning user-assigned identities, but *not* creating them, so it doesn't fulfill the requirement.
* **D. Create an organizational unit (OU) and assign User1 the User administrator Azure Active Directory (Azure AD) role:** This role provides comprehensive user and group management, far exceeding the requirement and violating the principle of least privilege.

Note: The discussion shows a consensus on the correct answer, with a clear explanation of why the other options are inappropriate.
475
[View Question](https://www.examtopics.com/discussions/databricks/view/94681-exam-az-500-topic-2-question-91-discussion/) You have an Azure AD tenant that contains the identities shown in the following table. ![Image](https://img.examtopics.com/az-500/image561.png) You plan to implement Azure AD Identity Protection. What is the maximum number of user risk policies you can configure? A. 1 B. 90 C. 200 D. 265 E. 1000
The correct answer is **C. 200** (or potentially **B. 90**, depending on the Azure AD license).

The discussion shows conflicting answers and reasoning. Some users state that only one user risk policy is allowed per tenant (A). Others state that the limit depends on the Azure AD Premium license tier: up to 90 policies with P1 and up to 200 with P2. Because the question does not specify the license, both 90 and 200 could be defensible; 200 is chosen as the more likely intended answer since it represents the higher (P2) limit. If the exam specified the license tier, the answer would be definitive.

Why other options are incorrect:

* **A. 1:** While some users believe only one policy is allowed, this is contradicted by information stating the limit depends on the license tier.
* **B. 90:** Potentially correct if the tenant has a Premium P1 license, but less likely intended than 200, which represents the higher limit.
* **D. 265:** No information supports this number.
* **E. 1000:** No information supports this number.

Note: There is disagreement among users due to the absence of license information in the question; the answer reflects the most likely interpretation based on the available information.
476
[View Question](https://www.examtopics.com/discussions/databricks/view/94687-exam-az-500-topic-4-question-90-discussion/) You have an Azure subscription that contains a user named User1 and a storage account named storage1. The storage1 account contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image583.png) In storage1, you create a shared access signature (SAS) named SAS1 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image584.png) To which resources can User1 write on July 1, 2022 by using SAS1 and key1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image585.png)
Based on the provided images and the discussion, User1 can write to **folder1** by using SAS1 and key1 on July 1, 2022.

The SAS1 permissions (image584.png) grant write access to `folder1`, and key1 (as implied by the question and the suggested answer) scopes that access. The discussion contains some debate over the interpretation of "container" across the different Azure storage services (blob, file, table, queue); in the context of this question, the images and the suggested answer indicate that "container" refers to a directory within a file share, specifically `folder1`.

The discussion highlights a common misunderstanding of the term "container" in Azure Storage: it can refer to different things depending on the service (blob container, file share, table, queue). There is some disagreement about the precise interpretation of the question, but the suggested answer (image586) appears to reflect the consensus.

Why other options are incorrect: The question does not provide enough information to establish write access to the other resources (container1, table1); the provided SAS and key explicitly grant write access only to folder1.
477
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94724-exam-az-500-topic-5-question-59-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains two administrative units named AU1 and AU2. Users are assigned to the administrative units as shown in the following table. (Image 1: Shows a table with users and their assigned administrative units AU1 and AU2. Specific user details are not provided in the prompt). Users are assigned the roles shown in the following table. (Image 2: Shows a table with users and their assigned roles: Helpdesk Administrator, User Administrator, Group Administrator, Password Administrator. Specific user-role assignments are not provided in the prompt). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image 3: Shows three statements to evaluate as Yes/No: 1. A Helpdesk Administrator can reset the password for non-administrator users. 2. A Group Administrator can reset the password for any user. 3. A Password Administrator can reset the password for a User Administrator.) **
**

1. **Yes:** A Helpdesk Administrator can reset the password for non-administrator users.
2. **No:** A Group Administrator cannot reset the password for any user.
3. **No:** A Password Administrator cannot reset the password for a User Administrator.

**Explanation:** The answer follows the suggested answer and the reasoning given by user 'marpengar1', which references Microsoft documentation. The logic is determined by the privileges of each Azure AD role: a Helpdesk Administrator can reset passwords for non-administrative accounts; a Group Administrator's password reset capabilities are limited and do not extend to all users; and a Password Administrator's abilities do not encompass resetting passwords for other administrators, such as User Administrators.

**Note:** The provided context omits the full details of the user-to-AU and user-to-role assignments, so the answer depends on standard Azure AD role privileges unless the unseen tables state otherwise. The discussion shows some disagreement; however, the highly upvoted response by 'marpengar1' offers a reasonable, well-supported answer consistent with Microsoft's documentation.
478
[View Question](https://www.examtopics.com/discussions/databricks/view/94725-exam-az-500-topic-5-question-60-discussion/) You have an Azure subscription that contains an Azure key vault named Vault1 and a virtual machine named VM1. VM1 has the Key Vault VM extension installed. For Vault1, you rotate the keys, secrets, and certificates. What will be updated automatically on VM1? A. the keys only B. the secrets only C. the certificates only D. the keys and secrets only E. the secrets and certificates only F. the keys, secrets, and certificates
C. the certificates only

The Key Vault VM extension automatically refreshes certificates stored in Azure Key Vault. It does *not* automatically update keys or secrets, so only the certificates will be updated on VM1 when items are rotated in Vault1.

Why other options are incorrect:

* **A, B, D, E, F:** These options all incorrectly suggest that keys and/or secrets are updated automatically by the Key Vault VM extension. The extension's primary function is certificate management; while the VM *can* be configured to use keys and secrets, the extension does not update them upon rotation within the Key Vault.

Note: The discussion only records the selected answer (C) without a detailed explanation; this answer is based on general understanding of Azure Key Vault and its VM extension.
479
[View Question](https://www.examtopics.com/discussions/databricks/view/94729-exam-az-500-topic-5-question-64-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-500/image612.png) Both VM1 and VM2 connect to VNET1 and are configured to use NSG1. You need to ensure that only VM1 and VM2 can access DB1. What should you do? A. For NSG1, configure a rule that has a service tag. B. Add the IP address range of VNET1 to the Firewall settings of DB1. C. Create an application security group. D. Configure DB1 to allow access from only VNET1.
D. Configure DB1 to allow access from only VNET1.

This is the best solution because it directly limits access to DB1 to resources within VNET1. By configuring DB1's firewall and virtual network settings to accept connections only from the VNET1 subnet, you ensure that only VM1 and VM2 (which reside in VNET1) can connect. This is more secure and more maintainable than relying on NSG rules or IP ranges, which become harder to manage as the network evolves.

Why other options are incorrect:

* **A. For NSG1, configure a rule that has a service tag:** Service tags represent groups of IP addresses for Azure services, not individual VMs, so a service-tag rule on NSG1 would not restrict access to just VM1 and VM2.
* **B. Add the IP address range of VNET1 to the Firewall settings of DB1:** This is less robust than option D; IP-range rules break if addressing within VNET1 changes, and a virtual network rule is the more direct control.
* **C. Create an application security group:** Application security groups help group VMs for NSG rules, but they do not control access to DB1 itself; configuring DB1 to allow access only from VNET1 is the more direct solution for this requirement.

Note: The discussion provided minimal information, and only one user voted for answer D, so there is not sufficient context to evaluate the other options thoroughly.
480
[View Question](https://www.examtopics.com/discussions/databricks/view/94730-exam-az-500-topic-5-question-65-discussion/) You have an Azure AD tenant that contains a user named User1. You purchase an app named App1. User1 needs to publish App1 by using Azure AD Application Proxy. Which role should you assign to User1? A. Cloud application administrator B. Application administrator C. Hybrid identity administrator D. Cloud App Security Administrator
B. Application administrator

The correct answer is B because, as confirmed by the Microsoft documentation and user insights cited in the discussion, the Application Administrator role has the permissions needed to create, manage, and publish Azure AD applications, including publishing applications through Azure AD Application Proxy. The discussion highlights that while the Cloud Application Administrator role has similar capabilities, it specifically lacks access to Application Proxy settings.

Why other options are incorrect:

* **A. Cloud application administrator:** Can manage many Azure AD applications but does *not* grant the permission to publish apps via Application Proxy, as explicitly noted in the discussion.
* **C. Hybrid identity administrator:** Focuses on managing hybrid identities, not on application publishing via Application Proxy.
* **D. Cloud App Security Administrator:** Concerns the security of cloud applications, not their publishing or management through Application Proxy.

Note: The discussion unanimously agrees on answer B.
481
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94751-exam-az-500-topic-5-question-55-discussion/) You have an Azure subscription that contains an Azure SQL Database logic server named SQL1 and an Azure virtual machine named VM1. VM1 uses a private IP address only. The Firewall and virtual networks settings for SQL1 are shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image592.png) You need to ensure that VM1 can connect to SQL1. The solution must use the principle of least privilege. What should you do? A. Set Connection Policy to Proxy. B. Set Allow Azure services and resources to access this server to Yes. C. Add an existing virtual network. D. Create a new firewall rule. **
** C. Add an existing virtual network.

The correct answer is C because VM1 has only a private IP address. A new firewall rule (D) would require a public IP address, which VM1 does not possess. Setting the connection policy to Proxy (A) changes how connections are routed, not whether access is granted, and is not a least-privilege control. Allowing all Azure services and resources (B) is far too broad and violates the principle of least privilege. Adding the virtual network that contains VM1 to SQL1's allowed networks restricts access to the specific subnet where VM1 resides, avoiding the management complexity and cost of static public IP addresses.

**Why other options are incorrect:**

* **A. Set Connection Policy to Proxy:** Doesn't address the core connectivity issue between the VM and the SQL server; it changes how the connection is managed but doesn't grant access.
* **B. Set Allow Azure services and resources to access this server to Yes:** A significant security risk that opens the SQL server to all Azure services, not just VM1, completely violating the principle of least privilege.
* **D. Create a new firewall rule:** Server firewall rules match public IP addresses; since VM1 has only a private IP, this option is not feasible.

**Note:** The discussion shows a strong consensus on answer C as the correct solution.
482
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94753-exam-az-500-topic-5-question-58-discussion/) You have an Azure subscription that contains the key vaults shown in the following table. ![Image](https://img.examtopics.com/az-500/image594.png) **(Table showing Key Vault names, Soft delete enabled (yes/no), and Purge protection enabled (yes/no))** The subscription contains the users shown in the following table. ![Image](https://img.examtopics.com/az-500/image595.png) **(Table showing User names and their roles on different Key Vaults)** On June 1, you perform the following actions: • Delete a key named key1 from KeyVault1. • Delete a secret named secret1 from KeyVault2. For each of the following statements, select Yes If the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image596.png) **(Table with statements: 1. On June 5, Admin1 can recover key1 from KeyVault1. 2. On June 12, User1 can purge secret1 from KeyVault2. 3. On June 17, Admin1 can recover key1 from KeyVault1.)** **
** The correct answer is Yes, No, No.

* **Statement 1 (June 5, Admin1 can recover key1 from KeyVault1): Yes.** KeyVault1 has soft delete enabled with a 10-day retention period. Admin1 has the "Key Vault Contributor" role on KeyVault1, which grants the permissions needed to recover a deleted key within the retention period. key1 was deleted on June 1, so it can be recovered through June 11; Admin1 can therefore recover it on June 5.
* **Statement 2 (June 12, User1 can purge secret1 from KeyVault2): No.** Although User1 may be able to manage KeyVault2, purging a deleted secret requires an explicit purge permission beyond basic management. As the discussion highlights, purge rights are separate from general access to the vault, and nothing states that User1 holds that permission for the secret deleted on June 1.
* **Statement 3 (June 17, Admin1 can recover key1 from KeyVault1): No.** The soft delete retention period for KeyVault1 is 10 days, so after June 11 key1 is permanently deleted and cannot be recovered.

**Why other options are incorrect:** The discussion shows some disagreement on statement 2: one user initially answered it differently, but another correctly pointed out that purge permission is separate from general RBAC access to the vault, supporting "No". There is no disagreement on the other answers.
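The retention arithmetic behind statements 1 and 3 can be checked with simple date math. The 10-day retention period comes from the scenario's table, and the year 2022 is an arbitrary assumption for illustration (the question only says "June 1"):

```python
from datetime import date, timedelta

# Soft-delete retention window check. The 10-day retention is taken from the
# scenario; Azure Key Vault supports configurable retention of 7-90 days.
deleted_on = date(2022, 6, 1)          # key1 deleted on June 1
retention = timedelta(days=10)
last_recoverable_day = deleted_on + retention  # June 11

def recoverable(check_date: date) -> bool:
    """A soft-deleted item can be recovered while the retention window is open."""
    return deleted_on <= check_date <= last_recoverable_day

print(recoverable(date(2022, 6, 5)))   # statement 1: True
print(recoverable(date(2022, 6, 17)))  # statement 3: False
```

This mirrors the reasoning above: June 5 falls inside the window, June 17 falls after it, so recovery succeeds in the first case and fails in the second.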
483
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94756-exam-az-500-topic-4-question-86-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com that has Azure Active Directory Premium Plan 1 licenses. You need to create a group named Group1 that will be assigned the Global reader role. Which portal should you use to create Group1, and which type of group should you create? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. (Image shows a multiple choice question with options for portal selection: Azure portal, Azure AD and M365 admin center; and group type selection: Security group, Microsoft 365 group) **
** **Which portal:** Azure AD and M365 admin center (now the Microsoft Entra admin center). Both portals allow the creation of groups that can be assigned roles within Azure AD.

**Group type:** Security group. Security groups are the appropriate choice for assigning roles such as Global Reader within Azure AD. Microsoft 365 groups are primarily for collaboration and don't map directly to Azure AD role assignments in the same way.

**Why other options are incorrect:**

* **Azure portal only:** While the Azure portal *can* indirectly manage Azure AD objects, it is not the primary or most direct interface for group creation and role assignment in Azure AD. The Azure AD and M365 admin center (or Microsoft Entra admin center) provides a more dedicated, streamlined experience for this task.
* **Microsoft 365 group:** While Microsoft 365 groups have some overlap with Azure AD, security groups are specifically designed for managing access control within Azure AD, so a Microsoft 365 group would not be the best practice for assigning Azure AD roles.

**Note:** The discussion highlights that the Azure AD admin center is now called the Microsoft Entra admin center. Both the old and new names are valid answers based on the provided information, acknowledging the change in naming over time.
484
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94761-exam-az-500-topic-2-question-81-discussion/) You have an Azure subscription linked to an Azure Active Directory Premium Plan 1 tenant. You plan to implement Azure Active Directory (Azure AD) Identity Protection. You need to ensure that you can configure a user risk policy and a sign-in risk policy. What should you do first? A. Purchase Azure Active Directory Premium Plan 2 licenses for all users. B. Register all users for Azure Multi-Factor Authentication (MFA). C. Enable security defaults for Azure Active Directory. D. Enable enhanced security features in Microsoft Defender for Cloud. **
** A. Purchase Azure Active Directory Premium Plan 2 licenses for all users.

**Explanation:** The discussion overwhelmingly supports option A. Azure AD Identity Protection, which is required to configure user risk and sign-in risk policies, is available only with Azure AD Premium P2 licenses; Premium P1 does not include this feature. Purchasing Premium P2 licenses is therefore the necessary first step.

**Why other options are incorrect:**

* **B. Register all users for Azure Multi-Factor Authentication (MFA):** While MFA is a good security practice, it is not a prerequisite for configuring risk policies within Identity Protection; Identity Protection's features are independent of MFA registration.
* **C. Enable security defaults for Azure Active Directory:** Security defaults provide basic security features, but they do not include the full functionality of Identity Protection, which is a more advanced feature requiring a Premium P2 license.
* **D. Enable enhanced security features in Microsoft Defender for Cloud:** Microsoft Defender for Cloud focuses on securing cloud resources and is not directly related to managing user and sign-in risks within Azure AD.

**Note:** The discussion shows unanimous agreement on option A.
485
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94765-exam-az-500-topic-2-question-86-discussion/) You have the role assignments shown in the following exhibit. (Image shows a table with users and their roles assigned to Resource Group 1 (RG1) and VM1). Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. (Image shows two dropdowns: "Can delete VM1:" and "Can create a new resource group:") **
** The provided text and discussion do not include the full image of the role assignments, so a definitive answer to which users can delete VM1 and which can create a new resource group cannot be given. The discussion reflects disagreement over the interpretation of the image and roles: AzureJobsTillRetire suggests the question is incomplete based on the visible information, while Nhadipour provides a possible answer that cannot be verified without the complete image data. The suggested answer image (image552.png) is also missing from the provided text, further hindering a definitive response.

**Why other options are incorrect (cannot be determined):** Because the information is incomplete, no specific incorrect options can be identified and refuted. The validity of any potential answer depends entirely on the missing image showing the complete role assignments; any answer given without it would be speculative.
486
[View Question](https://www.examtopics.com/discussions/databricks/view/94768-exam-az-500-topic-2-question-88-discussion/) You have an Azure subscription that contains a resource group named RG1. RG1 contains a virtual machine named VM1 that uses Azure Active Directory (Azure AD) authentication. You have two custom Azure roles named Role1 and Role2 that are scoped to RG1. The permissions for Role1 are shown in the following JSON code: [Image of JSON for Role1]. The permissions for Role2 are shown in the following JSON code: [Image of JSON for Role2]. You assign the roles to the users shown in the following table: [Image of user assignments]. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. [Image showing statements: "User1 can delete VM1", "User2 can delete VM1", "User3 can restart VM1"].
The correct answers are No, Yes, No.

* **User1 can delete VM1: No.** User1 has only Role1 assigned. The JSON for Role1 does not include the `Microsoft.Compute/virtualMachines/delete` action, so User1 lacks the permission to delete VM1.
* **User2 can delete VM1: Yes.** User2 has both Role1 and Role2 assigned. While Role1 doesn't grant delete permissions, Role2 *does* include `Microsoft.Compute/virtualMachines/delete`. Azure role assignments are additive, so the combination of the two roles grants User2 the ability to delete VM1.
* **User3 can restart VM1: No.** User3 has only Role1 assigned, which lacks the `Microsoft.Compute/virtualMachines/restart` permission needed to restart VM1.

The discussion shows some disagreement on the answers; one user's initial suggestion of No, Yes, No aligns with the final answer, and the discussion correctly identifies the key permissions within the roles and how they apply to each user.
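The additive evaluation described above can be modelled with a small sketch. The role contents below are illustrative assumptions standing in for the JSON in the question's missing images, and real Azure RBAC evaluation also honours `NotActions`, `DataActions`, and deny assignments:

```python
from fnmatch import fnmatch

# Hypothetical role definitions standing in for the JSON in the question's
# images; real Azure RBAC role definitions also carry NotActions and scopes.
role1_actions = ["Microsoft.Compute/virtualMachines/read"]
role2_actions = ["Microsoft.Compute/virtualMachines/*"]

def allowed(action: str, roles: list) -> bool:
    """Role assignments are additive: a matching Action in any assigned role wins."""
    return any(fnmatch(action, pattern) for role in roles for pattern in role)

# User1 holds Role1 only; User2 holds Role1 and Role2.
print(allowed("Microsoft.Compute/virtualMachines/delete", [role1_actions]))                 # False
print(allowed("Microsoft.Compute/virtualMachines/delete", [role1_actions, role2_actions]))  # True
```

The design point is the same one the answer relies on: permissions are evaluated across the union of a user's role assignments, so a deny-by-omission in one role cannot block an allow in another.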
487
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94770-exam-az-500-topic-2-question-89-discussion/) You have an Azure subscription that contains the resources shown in the following table.

| Resource | Type |
|----------|-----------------|
| VM1 | Virtual Machine |
| VM2 | Virtual Machine |
| App1 | Application |
| Vault1 | Key Vault |
| st1 | Storage Account |

You plan to perform the following actions:

* Deploy a new app named App1 that will require access to Vault1.
* Configure a shared identity for VM1 and VM2 to access st1.

You need to configure identities for each requirement. The solution must minimize administrative effort. Which type of identity should you configure for each requirement? **
** * **App1 access to Vault1:** System-assigned managed identity. * **VM1 and VM2 access to st1:** User-assigned managed identity. **Explanation:** The question emphasizes minimizing administrative effort. A system-assigned managed identity is automatically created and managed by Azure with the resource (App1), making it the most efficient choice for App1's access to Vault1. A user-assigned managed identity provides a reusable identity that can be assigned to multiple resources (VM1 and VM2), allowing them to share the same identity and access to st1 while minimizing the management overhead compared to creating separate identities for each VM. **Why other options are incorrect:** Using a user-assigned managed identity for App1 would require manual creation and management, thus increasing administrative overhead. Using system-assigned managed identities for VM1 and VM2 would necessitate creating a separate identity for each VM, again increasing administrative overhead. **Note:** The discussion shows some disagreement on the best approach for App1's identity. Some users advocate for system-assigned for simplicity, and there may be scenarios where a user-assigned identity for App1 could be appropriate depending on more complex management needs or lifecycle considerations not detailed in the question. However, based purely on the information given and the emphasis on minimizing administrative effort, the system-assigned identity for App1 is the most efficient solution.
488
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94775-exam-az-500-topic-2-question-93-discussion/) HOTSPOT You have an Azure subscription that contains a storage account named contoso2023. You need to perform the following tasks:

• Verify that identity-based authentication over SMB is enabled.
• Only grant users access to contoso2023 in the year 2023.

Which two settings should you use? To answer, select the appropriate settings in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image563.png) (Image shows a multiple choice question with options including "File Shares", "Shared access signature", and other irrelevant options) ![Image](https://img.examtopics.com/az-500/image564.png) (Image shows the suggested answer: "File Shares" and "Shared access signature" selected) **
** The correct answer is **File Shares** and **Shared access signature**. * **File Shares:** This setting is where you verify that identity-based authentication over SMB is enabled, fulfilling the first requirement. The discussion confirms this is the correct setting for configuring Active Directory authentication for Azure file shares. * **Shared access signature (SAS):** This setting lets you grant temporary, fine-grained access to the storage account. You can restrict access to contoso2023 to the year 2023 by generating SAS tokens whose start and expiry times span only that year. **Why other options are incorrect:** The question requires only two selections, and the remaining settings visible in the image are not described in the discussion, so they cannot be evaluated further. **Note:** The discussion reveals some disagreement. Andy_S points out that SMB file shares don't directly support SAS authentication, suggesting a potential flaw in the question's design or interpretation. However, the majority consensus, along with the suggested answer, supports using File Shares for SMB authentication and SAS for time-limited access.
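The time-limiting behaviour of a SAS token can be illustrated with a small Python sketch. Real SAS generation would use the Azure SDK; this only models the validity-window check, which is a plain date comparison:

```python
from datetime import datetime, timezone

def sas_is_valid(start, expiry, now=None):
    """A SAS token is only honoured between its start and expiry times."""
    now = now or datetime.now(timezone.utc)
    return start <= now <= expiry

# Restrict access to calendar year 2023, per the question's requirement
start = datetime(2023, 1, 1, tzinfo=timezone.utc)
expiry = datetime(2023, 12, 31, 23, 59, 59, tzinfo=timezone.utc)

print(sas_is_valid(start, expiry, datetime(2023, 6, 15, tzinfo=timezone.utc)))  # within 2023
print(sas_is_valid(start, expiry, datetime(2024, 1, 2, tzinfo=timezone.utc)))   # after expiry
```

Any request presenting the token outside the window is rejected regardless of the permissions the token carries.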
489
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94831-exam-az-500-topic-4-question-89-discussion/) You have an Azure subscription that contains the following Azure firewall: • Name: Fw1 • Azure region: UK West • Private IP address: 10.1.3.4 • Public IP address: 23.236.62.147 The subscription contains the virtual networks shown in the following table. (Image shows VNET1, VNET2, VNET3 with associated address spaces) The subscription contains the subnets shown in the following table. (Image shows Subnet1-1, Subnet1-2, Subnet2-1, Subnet3-1 with associated address spaces and VNET association) The subscription contains the routes shown in the following table. (Image shows Route Table 1, Route Table 2, Route Table 3 with associated Next Hop Type, Address Prefix, Next Hop IP address) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image shows three statements): 1. Traffic from Subnet1-1 to Subnet1-2 will be routed through Fw1. 2. Traffic from Subnet2-1 to Subnet1-1 will be routed through Fw1. 3. Traffic from Subnet3-1 to the internet will be routed through Fw1. **
** 1. **No.** Traffic from Subnet1-1 to Subnet1-2 will *not* be routed through Fw1. Subnet1-1 and Subnet1-2 are in the same VNET (VNET1). There's a more specific route within VNET1 itself (Next Hop Type: Virtual Network) that will be preferred over the default route to the firewall. Therefore, traffic will bypass the Firewall. 2. **Yes.** Traffic from Subnet2-1 to Subnet1-1 *will* be routed through Fw1. Route Table 3 directs traffic from Subnet2-1 to 10.1.1.0/24 (Subnet1-1's address space) with a Next Hop IP address of 10.1.3.4 (Fw1). 3. **No.** Traffic from Subnet3-1 to the internet will *not* be routed through Fw1. Subnet3-1 has no route defined that goes through the firewall. It will use the default internet gateway. **Explanation of why other options are incorrect:** There is disagreement in the discussion regarding statement 1. While one user believes the default route to the FW will be used, the other user correctly explains that the more specific route within VNET1 will override the default route, causing the traffic to bypass the firewall. The explanation given in the answer aligns with the more accurate explanation provided in the discussion. The other statements are clear-cut based on the route table configurations.
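The route-selection reasoning above — a more specific route beats the 0.0.0.0/0 default pointing at the firewall — can be sketched in Python. The route entries here are illustrative, since the full tables appear only in the images:

```python
import ipaddress

def next_hop(routes, destination):
    """Pick the most specific (longest-prefix) matching route, mirroring
    Azure's route-selection behaviour: a user-defined default route only
    applies when no more specific route matches the destination."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    if not matches:
        return None
    best = max(matches, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)
    return best["next_hop"]

# Illustrative route table for a subnet in VNET1
routes_vnet1 = [
    {"prefix": "0.0.0.0/0", "next_hop": "10.1.3.4"},          # UDR: default route to Fw1
    {"prefix": "10.1.0.0/16", "next_hop": "VirtualNetwork"},  # system route for VNET1
]

print(next_hop(routes_vnet1, "10.1.2.5"))  # intra-VNet: system route wins, bypasses Fw1
print(next_hop(routes_vnet1, "8.8.8.8"))   # internet-bound: falls to the 0.0.0.0/0 route
```

This is why Subnet1-1 to Subnet1-2 traffic bypasses Fw1: the /16 system route for VNET1 is more specific than the default route to the firewall.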
490
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94882-exam-az-500-topic-5-question-63-discussion/) Your on-premises network contains the servers shown in the following table. ![Image](https://img.examtopics.com/az-500/image609.png) You have an Azure subscription that contains multiple virtual machines that run either Windows Server 2019 or SLES. You plan to implement adaptive application controls in Microsoft Defender for Cloud. Which operating systems and platforms can you monitor? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image610.png) **
** The correct answer is: for operating systems, both Windows Server and SLES; for platforms, both Azure virtual machines and Azure Arc-enabled servers. Microsoft Defender for Cloud's adaptive application controls support both Windows and Linux operating systems and can monitor both Azure VMs and non-Azure machines (including those connected via Azure Arc). **Why other options are incorrect:** Any option limiting the operating systems to only Windows or only SLES, or limiting the platforms to only Azure VMs or only Azure Arc-enabled servers, would be incomplete and therefore incorrect. **Note:** The discussion shows some disagreement about the precise scope of "non-Azure machines" — some believe it includes only Azure Arc-enabled servers, others that it extends to all on-premises servers (for example, Hyper-V hosts). The answer reflects the broader support described in the official Microsoft documentation referenced by several commenters, which covers both Azure and non-Azure machines running Windows and Linux.
491
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94895-exam-az-500-topic-5-question-61-discussion/) HOTSPOT You have an Azure SQL database named DB1 that contains a table named Table1. You need to configure DB1 to meet the following requirements:

• Sensitive data in Table1 must be identified automatically.
• Only the first character and last character of the sensitive data must be displayed in query results.

Which two features should you configure? To answer, select the features in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image603.png) **
** The correct answer is **Data Discovery and Classification** and **Dynamic Data Masking**. * **Data Discovery and Classification:** This feature automatically identifies sensitive data within your database, fulfilling the first requirement. The image provided in the question shows this as a selectable option. * **Dynamic Data Masking:** This feature allows you to control the visibility of sensitive data in query results. You can configure it to only show the first and last characters, thus satisfying the second requirement. The image provided in the question shows this as a selectable option. **Why other options are incorrect:** The question doesn't provide other options, so there are no other options to analyze for correctness. The discussion section confirms the above two features as the correct solution. **Note:** The provided discussion shows a unanimous agreement among users on the correct answer.
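A rough Python model of what Dynamic Data Masking's partial masking does may help. In T-SQL the equivalent masking function is `partial(prefix, padding, suffix)`, applied via `MASKED WITH (FUNCTION = ...)` on the column; this sketch only imitates the output shape:

```python
def partial_mask(value, prefix=1, suffix=1, pad="X"):
    """Imitate SQL Dynamic Data Masking's partial() function: keep `prefix`
    leading and `suffix` trailing characters and mask everything between."""
    if len(value) <= prefix + suffix:
        return pad * len(value)  # too short to expose anything safely
    return value[:prefix] + pad * (len(value) - prefix - suffix) + value[-suffix:]

# Only the first and last characters survive in query results
print(partial_mask("555-123-4567"))
```

Note that masking is applied at query time for non-privileged users; the underlying data in Table1 is unchanged.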
492
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94978-exam-az-500-topic-3-question-63-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-500/image565.png) VNET1, VNET2, and VNET3 are peered with each other. You perform the following actions: • Create two application security groups named ASG1 and ASG2 in the West US region. • Add the network interface of VM1 to ASG1. The network interfaces of which virtual machines can you add to ASG1 and ASG2? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image566.png) **
** * **ASG1:** VM2 only. * **ASG2:** VM1, VM2, and VM4. **Explanation:** Application Security Groups (ASGs) in Azure have limitations regarding virtual machine (VM) membership. Crucially, once a network interface is added to an ASG, subsequent network interfaces added to that same ASG *must* reside in the same VNet as the first. After VM1 (in VNET1) is added, ASG1 can only accept VMs that are also in VNET1, so only VM2 can be added to ASG1. ASG2, however, is still empty, so it can accept a VM from any VNet: VM1, VM2, or VM4 can each be added to ASG2 (whichever is added first then pins ASG2 to that VM's VNet). The discussion shows some disagreement on the answer for ASG2, with some users suggesting only VM2 and VM4. While VNet peering allows communication between VNets, it does not remove the limitation that once an ASG has a network interface assigned to it, all subsequent interfaces must reside in the same VNet as the first. The suggested answer (VM1, VM2, and VM4 for ASG2) is correct because ASG2 is initially empty. **Why other options are incorrect:** Adding VMs from other VNets to ASG1 after VM1 has been added is not possible, because the initial assignment restricts ASG1 to VNET1. The claim that only VM2 and VM4 can be added to ASG2 is incorrect because an empty ASG carries no VNet restriction yet.
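The "first NIC pins the VNet" behaviour described above can be modelled with a short Python sketch (a simplification of the real Azure constraint; resource names are illustrative):

```python
class AppSecurityGroup:
    """Sketch of the ASG membership constraint: the first NIC added pins
    the ASG to that NIC's virtual network; later NICs must match it."""

    def __init__(self, name):
        self.name = name
        self.vnet = None      # unset until the first NIC is added
        self.members = []

    def add_nic(self, vm_name, vnet):
        if self.vnet is None:
            self.vnet = vnet  # first member pins the VNet
        elif self.vnet != vnet:
            raise ValueError(
                f"{vm_name} is in {vnet}; {self.name} is pinned to {self.vnet}")
        self.members.append(vm_name)

asg1 = AppSecurityGroup("ASG1")
asg1.add_nic("VM1", "VNET1")    # pins ASG1 to VNET1
asg1.add_nic("VM2", "VNET1")    # allowed: same VNet
# asg1.add_nic("VM4", "VNET3")  # would raise: different VNet
```

An empty ASG (like ASG2 in the question) has `vnet` unset, so any VM can be its first member.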
493
**** [View Question](https://www.examtopics.com/discussions/databricks/view/94987-exam-az-500-topic-5-question-62-discussion/) You have an Azure subscription that contains two users named User1 and User2 and the blob containers shown in the following table. ![Image](https://img.examtopics.com/az-500/image604.png) Policy1 is configured as shown in the following exhibit. ![Image](https://img.examtopics.com/az-500/image605.png) You assign the roles for storage1 as shown in the following table. ![Image](https://img.examtopics.com/az-500/image606.png) The storage1 account has the following shared access signature (SAS) named SAS1:

• Allowed services: Blob
• Allowed resource types: Container
• Allowed permissions: Read, Write, List, Add, Create
• Blob versioning permissions: enables deletion of versions
• Allowed blob index permissions: Read/Write
• Start and expiry date/time:
  o Start: 12/1/2021
  o End: 12/31/2021

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image607.png) **
** The correct answer is Y, Y, N. * **Statement 1 (User1 can write to container2 on 12/15/2021): Yes.** The SAS token SAS1 grants write access to containers within storage1, and it's valid on 12/15/2021. Container2 is in storage1. Therefore, User1 can write to container2. The access policy on Container2 is irrelevant because the SAS token overrides any less-permissive settings. * **Statement 2 (User2 can write to container1 on 12/15/2021): Yes.** Similar to Statement 1, SAS1 grants write access and is valid. Regardless of any settings on container1 (which are not explicitly detailed), the SAS token's permissions apply. * **Statement 3 (User1 can read from container2 on 1/10/2022): No.** The SAS token SAS1 expired on 12/31/2021. Therefore, any access attempts using this SAS token after that date will fail. **Why other options are incorrect:** The discussion shows some disagreement on the precise interaction between SAS tokens and access policies. However, the consensus seems to be that the most permissive setting between an Access Policy and a SAS token prevails. In this case, the SAS token grants broader access than any implied access policy from the images. The expiration date of the SAS token is a critical factor not to be overlooked.
494
**** [View Question](https://www.examtopics.com/discussions/databricks/view/95034-exam-az-500-topic-2-question-83-discussion/) You have an Azure Active Directory tenant that syncs with an Active Directory Domain Services (AD DS) domain. You plan to create an Azure file share that will contain folders and files. Which identity store can you use to assign permissions to the Azure file share and folders within the share? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-500/image548.png) (Image shows a multiple choice question with boxes for selecting answers. The specific choices are not visible in the provided text.) ![Image](https://img.examtopics.com/az-500/image549.png) (Image shows the suggested answers, which are also not visible.) **
** Based on the provided discussion, the correct identity stores for assigning permissions to an Azure file share are **On-premises Active Directory Domain Services (AD DS)** and **Azure Active Directory (Azure AD)**. **Explanation:** The discussion highlights that Azure Files supports identity-based authentication via Kerberos using on-premises AD DS, Azure AD DS, and Azure AD Kerberos for hybrid identities. While Azure AD DS is mentioned, the key point is that Azure Files leverages Azure AD for share-level permissions and Windows ACLs for directory and file-level permissions, particularly when there is synchronization between on-premises AD and Azure AD. Therefore, both on-premises AD DS (through synchronization) and Azure AD are valid options for managing permissions. **Why other options are incorrect (if applicable):** The provided text does not offer any other explicit options to choose from, so we can't assess why they would be incorrect. The missing image content showing the available options in the multiple choice question is crucial for a complete answer. **Note:** There is some disagreement in the discussion regarding the precise method and implications of synchronization between on-premises AD DS and Azure AD in relation to permission management. The answer above reflects the most consistent and widely supported interpretation from the given discussion.
495
**** [View Question](https://www.examtopics.com/discussions/databricks/view/95035-exam-az-500-topic-2-question-84-discussion/) You have an Azure subscription. You plan to deploy a new Conditional Access policy named CAPolicy1. You need to use the What if tool to evaluate how CAPolicy1 will affect users. The solution must minimize the impact of CAPolicy1 on the users. To what should you set the Enable policy setting for CAPolicy1? A. Off B. On C. Report only **
** C. Report only **Explanation:** Setting the "Enable policy" setting to "Report only" allows you to simulate the Conditional Access policy (CAPolicy1) without actually enforcing it. The What If tool will then generate reports showing how the policy *would* have affected users if it were enabled. This minimizes the impact on users while providing valuable information for assessment before full implementation. **Why other options are incorrect:** * **A. Off:** Setting the policy to "Off" would prevent any evaluation; the What If tool wouldn't provide any data on potential user impact. * **B. On:** Enabling the policy immediately enforces it, impacting users directly. This contradicts the requirement to minimize impact during evaluation. **Note:** The discussion shows unanimous agreement on the correct answer.