Practice Questions - Microsoft AZ-500 Flashcards
(495 cards)
HOTSPOT -
What is the membership of Group1 and Group2? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
(Image shows a hotspot question with two answer boxes: Box 1: Group1 (rule: user.displayName -contains "ON") and Box 2: Group2 (rule: user.displayName -match "*on"))
Scenario:
Contoso.com contains the users shown in the following table.
(Image shows a table of users with their display names: User1: Montreal, User2: MONTREAL, User3: London, User4: Ontario)
Contoso.com contains the security groups shown in the following table.
(Image shows a table of groups, Group1 and Group2 with their membership rules)
- Group 1: User1, User2, User3, and User4. The rule user.displayName -contains "ON" is case-insensitive and checks whether the substring "ON" appears anywhere in the displayName. All four users' display names contain this substring.
- Group 2: User3. The rule user.displayName -match "*on" is intended to find display names ending with "on". However, there is debate surrounding the correctness of this expression: some argue that *on is an incomplete regex and that a correct regex should be .*on. Only "London" (User3) ends with "on".
Explanation of other options and the disagreement:
The discussion highlights a significant disagreement regarding the validity of the regular expression "*on" in Group 2's membership rule. Many commenters correctly point out that the * quantifier repeats only the preceding token, zero or more times, so "*on" would not match any text ending in "on". A correctly functioning regular expression would be ".*on", using the . wildcard to match any character (zero or more times) before "on". Because the question uses "*on", and many commenters agree it is a flawed expression, the answer for Group 2 is based on the literal interpretation of that flawed expression. If the expression were corrected to ".*on", the result for Group 2 would be the same.
The discussion shows that the provided answer is based on interpreting the question as written, even if the regex is considered faulty. A more robust question would have used a correct regular expression.
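The two rule semantics can be illustrated with Python's re module as a stand-in for the Azure AD rule engine (the actual engine differs; the end anchor and case-insensitive flag below are assumptions made to express "ends with on" cleanly):

```python
import re

# Display names from the scenario table
names = ["Montreal", "MONTREAL", "London", "Ontario"]

# Group1: -contains "ON" is a case-insensitive substring check,
# so every name containing "on" in any casing qualifies
group1 = [n for n in names if "on" in n.lower()]
assert group1 == ["Montreal", "MONTREAL", "London", "Ontario"]

# Group2: "*on" is not a valid regular expression, because '*'
# repeats the preceding token and here there is nothing to repeat
try:
    re.compile("*on")
    is_valid = True
except re.error:
    is_valid = False
assert is_valid is False

# A working "ends with on" pattern, ignoring case, matches only London
group2 = [n for n in names if re.search("on$", n, re.IGNORECASE)]
assert group2 == ["London"]
```

This mirrors the stated answer: all four users land in Group1, and only User3 (London) lands in Group2.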
You need to ensure that the audit logs from the SQLdb1 Azure SQL database are stored in the WS12345678 Azure Log Analytics workspace. To complete this task, sign in to the Azure portal and modify the Azure resources.
To ensure audit logs from the SQLdb1 Azure SQL database are stored in the WS12345678 Azure Log Analytics workspace, follow these steps:
- Access the Azure portal and navigate to the SQLdb1 database, either by searching for "SQL databases" or by browsing to them in the left navigation pane.
- In SQLdb1’s properties, locate the “Security” section and select “Auditing.”
- Enable auditing if it’s not already enabled. Select the “Log Analytics” checkbox and click “Configure.”
- Choose the WS12345678 Azure Log Analytics workspace from the provided list.
- Save the changes.
While some users suggest alternative methods (auditing at the server level or using Diagnostic Settings), the question explicitly focuses on configuring auditing at the database level for SQLdb1 to send logs to the specified Log Analytics workspace. Therefore, using the database-level auditing is the most direct and appropriate method to achieve the goal as stated in the question. The provided steps directly address database-level auditing. There is a consensus among users that the database-level approach is correct for this scenario.
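The same database-level configuration can be scripted with the Azure CLI; the resource group, server, and subscription values below are placeholders, so treat this as a sketch rather than a definitive command:

```shell
# Enable database-level auditing on SQLdb1 and send the audit logs
# to the WS12345678 Log Analytics workspace (placeholder names)
az sql db audit-policy update \
  --resource-group <resource-group> \
  --server <server-name> \
  --name SQLdb1 \
  --state Enabled \
  --log-analytics-target-state Enabled \
  --log-analytics-workspace-resource-id \
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/WS12345678"
```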
You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You generate new SASs. Does this meet the goal?
A. Yes
B. No
B. No. Generating new SASs does not revoke access granted by existing SASs. The existing SAS URLs will continue to function, allowing unauthorized access. To revoke access, you need to delete the existing SASs or regenerate the storage account keys used to create them. This will invalidate all SASs created with the old keys.
The overwhelming consensus in the discussion supports answer B. While the question presents a solution of generating new SASs, the discussion clearly points out that this alone is insufficient to revoke existing access.
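Why regenerating the account keys (rather than issuing new SASs) revokes access can be sketched with a simplified model: a SAS signature is an HMAC-SHA256 over a canonical string-to-sign, keyed with an account key, so once the key changes the service can no longer validate old signatures. The string-to-sign below is a simplified stand-in, not Azure's exact format:

```python
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    # The service recomputes this HMAC from the SAS parameters and
    # compares it with the sig= value carried in the token
    key = base64.b64decode(account_key_b64)
    mac = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode()

old_key = base64.b64encode(b"storage-account-key-1").decode()
new_key = base64.b64encode(b"storage-account-key-2").decode()

# Simplified stand-in for the canonical string-to-sign
sts = "r\n2025-01-01\n2025-12-31\n/blob/sa1/container1/file.txt"

old_sig = sign(sts, old_key)

# The token still verifies while the old key is active...
assert sign(sts, old_key) == old_sig
# ...but after key regeneration, verification fails, which is what
# actually invalidates every SAS created from that key
assert sign(sts, new_key) != old_sig
```

Issuing additional SASs never touches the old signatures, which is exactly why answer B is correct.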
You have an Azure subscription that contains a resource group named RG1 and the network security groups (NSGs) shown in the following table.
You create and assign the Azure policy shown in the following exhibit.
What is the flow log status of NSG1 and NSG2 after the Azure policy is assigned?
A. Flow logs will be enabled for NSG1 only.
B. Flow logs will be enabled for NSG2 only.
C. Flow logs will be enabled for NSG1 and NSG2.
D. Flow logs will be disabled for NSG1 and NSG2.
D. Flow logs will be disabled for NSG1 and NSG2.
The Azure policy shown in the image has an effect of “Audit”. An audit effect only logs a warning if a resource is non-compliant; it does not automatically change the resource’s configuration. Since the policy is in audit mode and there’s no remediation task to enable flow logs, the flow log status for both NSG1 and NSG2 remains unchanged. The discussion confirms this understanding.
WHY OTHER OPTIONS ARE INCORRECT:
- A. Flow logs will be enabled for NSG1 only: Incorrect because the policy targets NSG2 and excludes NSG1. Even if the effect were “DeployIfNotExists”, NSG1 would remain unaffected.
- B. Flow logs will be enabled for NSG2 only: Incorrect because the policy’s effect is “Audit,” meaning it only monitors compliance and doesn’t enforce changes.
- C. Flow logs will be enabled for NSG1 and NSG2: Incorrect for the same reason as option B; the audit effect does not automatically enable flow logs.
NOTE: The consensus in the discussion points to answer D. There is no dissenting opinion presented.
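The difference between the two effects can be expressed as a toy model; this is an illustration of policy semantics, not Azure's evaluation engine:

```python
def apply_policy(effect: str, flow_logs_enabled: bool) -> dict:
    """Toy model of how an Azure Policy effect treats an NSG."""
    if effect == "Audit":
        # Audit only records compliance state; the resource is untouched
        return {"compliant": flow_logs_enabled,
                "flow_logs_enabled": flow_logs_enabled}
    if effect == "DeployIfNotExists":
        # DeployIfNotExists can remediate (via a remediation task
        # running under a managed identity) and enable the setting
        return {"compliant": True, "flow_logs_enabled": True}
    raise ValueError(f"unmodeled effect: {effect}")

# With Audit, an NSG without flow logs is flagged but never changed
result = apply_policy("Audit", flow_logs_enabled=False)
assert result == {"compliant": False, "flow_logs_enabled": False}

# Only DeployIfNotExists would actually turn flow logs on
result = apply_policy("DeployIfNotExists", flow_logs_enabled=False)
assert result["flow_logs_enabled"] is True
```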
Your on-premises network contains a Hyper-V virtual machine named VM1. You need to use Azure Arc to onboard VM1 to Microsoft Defender for Cloud. What should you install first?
A. the guest configuration agent
B. the Azure Monitor agent
C. the Log Analytics agent
D. the Azure Connected Machine agent
D. the Azure Connected Machine agent
Explanation: The Azure Connected Machine agent is the correct answer because it’s the prerequisite for onboarding an on-premises machine to Azure Arc. Azure Arc enables the management and governance of on-premises machines from Azure. Only after the Azure Connected Machine agent is installed can the machine connect to Azure Arc and subsequently be integrated with Microsoft Defender for Cloud. The other agents are not directly involved in the initial onboarding process to Azure Arc.
Why other options are incorrect:
- A. the guest configuration agent: While used for managing configurations on VMs, it’s not the primary agent for connecting to Azure Arc.
- B. the Azure Monitor agent: This agent is for collecting monitoring data, not for initially connecting the machine to Azure Arc.
- C. the Log Analytics agent: While Log Analytics is used for data collection and is relevant to Defender for Cloud, the machine must first be connected via the Azure Connected Machine agent before Log Analytics data can be sent. One user suggested this as the answer, but the consensus from the discussion strongly favors D.
Note: There is a disagreement among users regarding option C, with one user suggesting that Log Analytics agent is the correct answer. However, the majority of the discussion and the suggested answer strongly point towards option D as the correct choice.
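For context, once the Azure Connected Machine agent package is installed on VM1, onboarding is completed with its connect command; the values below are placeholders:

```shell
# Run on VM1 after installing the Azure Connected Machine agent
azcmagent connect \
  --resource-group "<resource-group>" \
  --tenant-id "<tenant-id>" \
  --location "<azure-region>" \
  --subscription-id "<subscription-id>"
```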
You have an Azure subscription that contains a Microsoft Defender External Attack Surface Management (Defender EASM) resource named EASM1. EASM1 has discovery enabled and contains several inventory assets. You need to identify which inventory assets are vulnerable to the most critical web app security risks. Which Defender EASM dashboard should you use?
A. Security Posture
B. OWASP Top 10
C. Attack Surface Summary
D. GDPR Compliance
The correct answer is B. OWASP Top 10. The OWASP Top 10 dashboard in Defender EASM specifically focuses on the most critical web application security risks as defined by the Open Web Application Security Project (OWASP). This aligns directly with the question’s requirement to identify assets vulnerable to these risks.
Why other options are incorrect:
- A. Security Posture: This dashboard provides a general overview of the security posture of your assets, not specifically focusing on web application vulnerabilities.
- C. Attack Surface Summary: This gives a broad summary of your attack surface, but doesn’t prioritize risks based on the OWASP Top 10.
- D. GDPR Compliance: This dashboard is unrelated to web application security risks.
Note: The discussion shows a strong consensus that the answer is B, with multiple users selecting and explaining this choice.
You have an Azure subscription that uses Microsoft Defender for Cloud. You need to use Defender for Cloud to review regulatory compliance with the Azure CIS 1.4.0 standard. The solution must minimize administrative effort. What should you do first?
A. Assign an Azure policy.
B. Disable one of the Out of the box standards.
C. Manually add the Azure CIS 1.4.0 standard.
D. Add a custom initiative.
C. Manually add the Azure CIS 1.4.0 standard.
The Azure CIS 1.4.0 standard is not included by default in Microsoft Defender for Cloud’s regulatory compliance offerings. Before you can assign a policy or take any other action related to this standard, you must first add it. The other options are incorrect because they presume the standard is already present and configured. Assigning a policy (A) or disabling an existing standard (B) are actions that come after adding the desired standard. Creating a custom initiative (D) is unnecessary given that the Azure CIS 1.4.0 standard is readily available.
The discussion shows some disagreement on the exact terminology (“security policy” vs. “Azure policy”) but the core consensus points to the need to add the standard before any assignment or configuration.
You have an Azure subscription that contains an Azure key vault named Vault1 and a virtual machine named VM1. VM1 is connected to a virtual network named VNet1. You need to allow access to Vault1 only from VM1. What should you do in the Networking settings of Vault1?
A. From the Firewalls and virtual networks tab, add the IP address of VM1.
B. From the Private endpoint connections tab, create a private endpoint for VM1.
C. From the Firewalls and virtual networks tab, add VNet1.
D. From the Firewalls and virtual networks tab, set Allow trusted Microsoft services to bypass this firewall to Yes for Vault1.
The correct answer is A. From the Firewalls and virtual networks tab, add the IP address of VM1.
This is the most precise and secure way to limit access to Vault1 only from VM1. By adding the specific IP address of VM1 to the firewall rules, only that machine will be able to connect.
Option B is incorrect because while private endpoints offer enhanced security, the question specifically asks what to do within Vault1’s networking settings. Creating a private endpoint involves configuring the virtual network and not directly within the Key Vault’s settings. Furthermore, the discussion highlights that there isn’t a VM option directly available during private endpoint creation.
Option C is incorrect because adding VNet1 would grant access to all resources within that virtual network, not just VM1, violating the requirement of only allowing VM1 access.
Option D is incorrect because enabling “Allow trusted Microsoft services to bypass this firewall” is a broad permission that would not restrict access to only VM1 and potentially exposes the Key Vault unnecessarily.
There is some disagreement in the discussion regarding the best approach. Some users suggest using a Virtual Network (option C) if the VM isn’t publicly accessible, but the consensus and the suggested answer favor option A for its specificity and security.
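The answer's approach maps to two Azure CLI calls; the IP value is a placeholder, and this sketch assumes VM1 reaches the vault over that address:

```shell
# Deny vault traffic by default, then allow only VM1's IP address
az keyvault update --name Vault1 --default-action Deny
az keyvault network-rule add --name Vault1 --ip-address "<vm1-ip-address>"
```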
You have an Azure subscription. You create a new virtual network named VNet1. You plan to deploy an Azure web app named App1 that will use VNet1 and will be reachable by using private IP addresses. The solution must support inbound and outbound network traffic. What should you do?
A. Create an Azure App Service Hybrid Connection.
B. Create an Azure application gateway.
C. Create an App Service Environment.
D. Configure regional virtual network integration.
C. Create an App Service Environment.
An App Service Environment (ASE) provides a fully isolated and scalable environment within a customer’s virtual network. This allows web apps deployed within the ASE to use private IP addresses and have both inbound and outbound network traffic supported. This directly addresses the requirement of using VNet1 and being reachable via private IP addresses while maintaining network connectivity.
Why other options are incorrect:
- A. Create an Azure App Service Hybrid Connection: Hybrid connections are used to connect to on-premises resources, not to integrate with a VNet for private IP address access.
- B. Create an Azure application gateway: Application gateways manage traffic to multiple web apps, but they don’t inherently provide the private IP address access within a VNet required by the question.
- D. Configure regional virtual network integration: Regional VNet integration only handles outbound traffic from the app into the VNet; it does not make the app reachable inbound on a private IP address, so it cannot satisfy the requirement on its own.
Note: The discussion shows some disagreement among users regarding the best answer, with some initially suggesting option D. However, the consensus and provided explanations ultimately favor option C as the most appropriate solution for the described scenario.
You have an Azure subscription that contains a user named User1. You need to ensure that User1 can perform the following tasks: • Create groups. • Create access reviews for role-assignable groups. • Assign Azure AD roles to groups. The solution must use the principle of least privilege. Which role should you assign to User1?
A. Groups administrator
B. Authentication administrator
C. Identity Governance Administrator
D. Privileged role administrator
The correct answer is D. Privileged role administrator.
The Privileged Role Administrator role allows the user to perform all three tasks: creating groups, creating access reviews for role-assignable groups, and assigning Azure AD roles to groups. This aligns with the principle of least privilege as it grants only the necessary permissions.
Other options are incorrect because:
- A. Groups administrator: This role primarily manages groups but does not inherently grant the ability to create access reviews or assign Azure AD roles.
- B. Authentication administrator: This role focuses on managing authentication-related settings and does not provide the necessary permissions for group management or access reviews.
- C. Identity Governance Administrator: While this role allows management of access reviews, it might not include the permission to create groups or assign all Azure AD roles. The discussion shows conflicting views on this option’s capability to handle all required tasks.
Note: There is some disagreement in the discussion regarding the capabilities of the Identity Governance Administrator role. While some users believe it can handle all the tasks, others argue it lacks some necessary permissions. The consensus leans towards Privileged Role Administrator as the most reliable and comprehensive option for fulfilling all the requirements while adhering to the principle of least privilege.
You have an Azure subscription that contains a storage account named storage1 and a virtual machine named VM1. VM1 is connected to a virtual network named VNet1 that contains one subnet and uses Azure DNS. You need to ensure that VM1 connects to storage1 by using a private IP address. The solution must minimize administrative effort. What should you do?
A. For storage1, disable public network access.
B. On VNet1, create a new subnet.
C. For storage1, create a new private endpoint.
D. Create an Azure Private DNS zone.
C. For storage1, create a new private endpoint.
A private endpoint provides a private IP address within the VNet1, allowing VM1 to access storage1 without traversing the public internet. This minimizes administrative effort compared to other options.
Why other options are incorrect:
- A. For storage1, disable public network access. While disabling public access is a good security practice, it doesn’t guarantee that VM1 will use a private IP address to connect to storage1. VM1 might still use a public IP if other configurations are not properly set.
- B. On VNet1, create a new subnet. Creating a new subnet is not directly related to enabling private connectivity to storage1. It might be necessary in other network design scenarios, but it’s not the most direct solution in this case.
- D. Create an Azure Private DNS zone. While a private DNS zone is often used in conjunction with a private endpoint to resolve the private IP address of storage1, it’s not the primary solution. The private endpoint itself is the crucial component that provides the private connection. The discussion even highlights that a private endpoint needs private DNS integration for it to work properly.
Note: The discussion mentions that a private endpoint requires private DNS integration to function correctly. This is an important consideration, although the question itself doesn’t explicitly state that requirement. The optimal answer, therefore, is creating the private endpoint, and then configuring private DNS accordingly.
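Creating the private endpoint can be done with a single Azure CLI call; the subnet name and storage resource ID below are placeholders, and private DNS configuration would follow as the note above describes:

```shell
# Create a private endpoint for storage1's blob service in VNet1
az network private-endpoint create \
  --name storage1-pe \
  --resource-group RG1 \
  --vnet-name VNet1 \
  --subnet "<subnet-name>" \
  --private-connection-resource-id "<storage1-resource-id>" \
  --group-id blob \
  --connection-name storage1-connection
```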
You have an Azure subscription that contains a web app named App1. App1 provides users with product images and videos. Users access App1 by using a URL of HTTPS://app1.contoso.com. You deploy two server pools named Pool1 and Pool2. Pool1 hosts product images. Pool2 hosts product videos. You need to optimize the performance of App1. The solution must meet the following requirements: • Minimize the performance impact of TLS connections on Pool1 and Pool2. • Route user requests to the server pools based on the requested URL path. What should you include in the solution?
A. Azure Bastion
B. Azure Front Door
C. Azure Traffic Manager
D. Azure Application Gateway
The correct answer is D. Azure Application Gateway.
Azure Application Gateway offers URL-based routing, allowing you to direct requests to different backend pools (Pool1 for images, Pool2 for videos) based on the URL path. Furthermore, Application Gateway handles TLS termination at the gateway level, minimizing the performance impact of TLS handshakes on the backend servers. This offloads the TLS processing from the backend servers, improving performance.
Why other options are incorrect:
- A. Azure Bastion: Azure Bastion provides secure access to virtual machines, but it’s not relevant to optimizing web application performance or routing requests based on URL paths.
- B. Azure Front Door: Front Door also terminates TLS and supports path-based routing, but it is a global edge service aimed at multi-region distribution; for routing to regional backend pools behind a single app, Application Gateway is the better fit, which is why the discussion favors it here.
- C. Azure Traffic Manager: Azure Traffic Manager primarily handles traffic distribution based on health probes and geographic location. It doesn’t offer the fine-grained URL-path-based routing needed to separate image and video requests.
Note: The discussion highlights a preference for Azure Application Gateway over Azure Front Door for this specific scenario due to its superior TLS offloading capabilities and URL-based routing features. However, both services can handle TLS termination; the discussion emphasizes the better suitability of Application Gateway for this use case.
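The gateway's behavior can be sketched as a toy path-based router, with TLS already terminated before a pool is chosen (an illustration of the routing rule concept, not the Application Gateway implementation; the path prefixes are assumptions):

```python
def route_request(path: str) -> str:
    """Toy model of URL-path-based routing to backend pools.

    TLS is terminated at the gateway, so Pool1 and Pool2 receive
    plain HTTP and never pay the handshake cost themselves.
    """
    if path.startswith("/images/"):
        return "Pool1"
    if path.startswith("/videos/"):
        return "Pool2"
    return "Pool1"  # assumed default pool for anything else

assert route_request("/images/product1.png") == "Pool1"
assert route_request("/videos/demo.mp4") == "Pool2"
assert route_request("/index.html") == "Pool1"
```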
You have an Azure subscription named Sub1. In Microsoft Defender for Cloud, you have a workflow automation named WF1. WF1 is configured to send an email message to a user named User1. You need to modify WF1 to send email messages to a distribution group named Alerts. What should you use to modify WF1?
A. Azure Logic Apps Designer
B. Azure Application Insights
C. Azure DevOps
D. Azure Monitor
A. Azure Logic Apps Designer
Explanation: The question describes a workflow automation (WF1) within Microsoft Defender for Cloud that needs modification. Workflow automations, by their nature, are best managed and modified through a workflow designer. Azure Logic Apps Designer is a visual tool specifically designed for creating and managing logic apps, which are essentially automated workflows. Therefore, it’s the appropriate tool to modify WF1 to send emails to a different recipient (the distribution group “Alerts”).
Why other options are incorrect:
- B. Azure Application Insights: This service is for monitoring and analyzing application performance, not for managing workflows.
- C. Azure DevOps: This is a platform for collaborating on software development projects, not directly related to modifying security-related workflows in Defender for Cloud.
- D. Azure Monitor: This provides monitoring and logging capabilities across Azure resources, but it does not offer a workflow designer to modify automations.
The discussion shows unanimous agreement on the correct answer.
SIMULATION
The developers at your company plan to create a web app named App28681041 and to publish the app to https://www.contoso.com.
You need to perform the following tasks:
• Ensure that App28681041 is registered to Azure AD.
• Generate a password for App28681041.
To complete this task, sign in to the Azure portal.
To complete the tasks, you must register the web application (App28681041) in Azure Active Directory (Azure AD) and then generate a client secret (password) for it. This involves using the Azure portal to register the application, providing necessary details like the application name and redirect URI (https://www.contoso.com in this case), and then creating a new client secret within the application’s settings. The image in the original post shows the Azure portal interface where these steps are performed.
The provided links in the discussion (https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app#register-an-application and https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app#add-a-client-secret) corroborate the correct procedure. While another link (https://learn.microsoft.com/en-us/azure/healthcare-apis/register-application) is mentioned, it’s not directly relevant to the core task of registering a generic web application in Azure AD and generating a client secret; it focuses on a more specific Healthcare APIs scenario. Multiple users in the discussion confirm the answer’s correctness.
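The two portal tasks have CLI equivalents; the application ID below is a placeholder returned by the first command, so this is a sketch rather than the required method:

```shell
# Register the app with its redirect URI
az ad app create \
  --display-name "App28681041" \
  --web-redirect-uris "https://www.contoso.com"

# Generate a client secret (password) for the registration;
# the command prints the secret value, which is shown only once
az ad app credential reset --id "<application-id>"
```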
You are troubleshooting a security issue for an Azure Storage account. You enable Azure Storage Analytics logs and archive them to a storage account. What should you use to retrieve the diagnostics logs?
A. Azure Cosmos DB explorer
B. Azure Monitor
C. AzCopy
D. Microsoft Defender for Cloud
C. AzCopy
AzCopy is a command-line utility provided by Azure Storage that allows you to copy blobs (including log files) to and from a storage account. Since the Azure Storage Analytics logs are archived to a storage account, AzCopy is the appropriate tool to retrieve them.
Why other options are incorrect:
- A. Azure Cosmos DB explorer: This tool is used to manage Azure Cosmos DB databases, not Azure Storage.
- B. Azure Monitor: Azure Monitor is a monitoring service that collects and analyzes telemetry data from various Azure resources. While it can integrate with Azure Storage, it doesn’t directly retrieve the log files themselves.
- D. Microsoft Defender for Cloud: This is a security service providing threat protection and detection, not directly involved in retrieving storage logs.
Note: The discussion indicates that Azure Storage Explorer could also be used to retrieve the logs, although AzCopy is explicitly mentioned as a method in Microsoft documentation. There appears to be some disagreement on the best tool, but AzCopy is presented as a valid and widely accepted solution.
You have an Azure subscription that contains a web app named App1. Users must be able to select between a Google identity or a Microsoft identity when authenticating to App1. You need to add Google as an identity provider in Azure AD. Which two pieces of information should you configure? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. a client ID
B. a tenant name
C. the endpoint URL of an application
D. a tenant ID
E. a client secret
The correct answer is A and E: a client ID and a client secret.
To add Google as an identity provider in Azure AD, you need to provide Azure AD with credentials that Google provides when you register your application with Google’s identity provider. These credentials allow Azure AD to verify the authenticity of requests coming from your application. The client ID uniquely identifies your application, while the client secret is a security key that must be kept confidential.
Options B, C, and D are incorrect. A tenant name (B) and a tenant ID (D) identify your Azure AD tenant rather than the Google registration, and neither is entered when adding Google as an identity provider. The endpoint URL (C) is relevant to the application itself, not to the identity provider integration.
Note: The discussion shows general agreement on the correct answer (A and E), although some users provide slightly different explanations of how to locate and use these credentials within the Azure portal.
Your company has an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. The company develops an application named App1. App1 is registered in Azure AD. You need to ensure that App1 can access secrets in Azure Key Vault on behalf of the application users. What should you configure?
A. an application permission without admin consent
B. a delegated permission without admin consent
C. a delegated permission that requires admin consent
D. an application permission that requires admin consent
The correct answer is C. a delegated permission that requires admin consent.
To allow App1 to access Azure Key Vault secrets on behalf of users, a delegated permission is required. This means the application will act on behalf of a user who has already authenticated. The “on behalf of” phrasing in the question implies that the application cannot grant this access itself; an administrator needs to explicitly grant consent. Therefore, the permission requires admin consent.
Why other options are incorrect:
- A. an application permission without admin consent: Application permissions grant the application access directly, without the need for a user to be signed in. This doesn’t fit the scenario where the application needs to access secrets on behalf of users.
- B. a delegated permission without admin consent: A delegated permission is needed, but without admin consent, the application won’t be authorized to access secrets on behalf of users. Individual users would have to consent, which isn’t what the question asks for.
- D. an application permission that requires admin consent: While admin consent is required, an application permission is not suitable here. The application needs to act on behalf of a specific user.
Note: The discussion shows a unanimous agreement on answer C.
You have an Azure AD tenant that contains the users shown in the following table.
You enable passwordless authentication for the tenant. Which authentication method can each user use for passwordless authentication? To answer, drag the appropriate authentication methods to the correct users. Each authentication method may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
The provided images show a drag-and-drop question. The correct answer, according to the "Suggested Answer" image (https://img.examtopics.com/az-500/image624.png), maps the users to the authentication methods as follows:
- User A (Assigned Windows 10 device): Windows Hello for Business and/or FIDO2 security key. This is consistent with Microsoft documentation that states users with assigned Windows 10 devices can utilize these passwordless methods.
- User B (No assigned Windows 10 device, registered mobile authenticator): Microsoft Authenticator app. This is because the mobile authenticator app is specifically designed for passwordless authentication on devices without Windows Hello for Business capabilities.
- User C (No assigned Windows 10 device, no registered mobile authenticator): None. This user lacks the prerequisites for any of the listed passwordless options.
Why other options are incorrect: The question is a drag-and-drop, and the provided suggested answer represents the only correct mapping of users to available passwordless authentication methods based on the given information. Any other mapping would be incorrect based on the conditions for each authentication method.
Note: There is no explicit disagreement or conflicting opinions visible within the provided discussion. The discussion primarily points to relevant Microsoft documentation supporting the suggested answer.
You have an Azure AD tenant and an application named App1. You need to ensure that App1 can use Microsoft Entra Verified ID to verify credentials. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. (Image shows a drag-and-drop interface with options including: Create an Azure Key Vault instance; Configure the Verified ID service using the manual setup; Register an application in Microsoft Entra ID; Other options are not fully visible in the provided image)
The correct sequence of actions is:
- Register an application in Microsoft Entra ID: This is the foundational step. Before you can use Microsoft Entra Verified ID, the application needs to be registered within Azure AD.
- Create an Azure Key Vault instance: A Key Vault is needed to securely store the cryptographic keys used by Verified ID.
- Configure the Verified ID service using the manual setup: This final step activates and configures the Verified ID service within your tenant, linking it to the registered application and Key Vault.
The order is crucial. You cannot configure the Verified ID service without a registered application and a Key Vault to store the keys.
Why other options are incorrect (if applicable): The provided discussion doesn’t list other specific incorrect options, but any sequence differing from the above would be incorrect because it would violate the dependency order required for successful Microsoft Entra Verified ID configuration. For instance, attempting to configure the Verified ID service before registering the application or creating a Key Vault would fail.
Note: The discussion indicates that a similar question appeared on an exam with different answer choices. While the provided answer is considered correct based on the given information and user consensus, there might be slight variations depending on the specific context of the question’s options.
You have an Azure subscription that contains an Azure web app named App1. You plan to configure a Conditional Access policy for App1. The solution must meet the following requirements:
• Only allow access to App1 from Windows devices.
• Only allow devices that are marked as compliant to access App1.
Which Conditional Access policy settings should you configure? To answer, drag the appropriate settings to the correct requirements. Each setting may be used once, more than once, or not at all. (Images depicting drag-and-drop options are omitted as they are not directly included in the provided text, but were present in the original post).
To meet the requirements, you should configure the following Conditional Access policy settings:
- Only allow access to App1 from Windows devices: Under Conditions, select Device platforms, and then choose Windows. This ensures that only access requests originating from Windows devices are permitted.
- Only allow devices that are marked as compliant to access App1: Under Access controls > Grant, select Require device to be marked as compliant. This restricts access to devices that have been assessed as compliant by your device management system (for example, Intune). Combined with the Windows device platform condition above, access is granted only to compliant Windows devices.
The overwhelming consensus in the discussion supports this solution. Multiple users reported success with this approach on their exams.
Why other options are incorrect: The provided discussion doesn’t offer alternative options, but any configuration that omits either the Windows selection under Device platforms or the compliant-device requirement would fail one of the two requirements. Failing to specify Windows under device platforms would allow access from non-Windows devices, while omitting the compliant-device requirement would allow access from non-compliant devices.
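As a sketch, the equivalent policy expressed in the Microsoft Graph conditional access policy schema would look roughly like this (the application ID, user scope, and display name are placeholders, not part of the original question):

```json
{
  "displayName": "Require compliant Windows devices for App1",
  "state": "enabled",
  "conditions": {
    "applications": { "includeApplications": ["<App1-application-id>"] },
    "users": { "includeUsers": ["All"] },
    "platforms": { "includePlatforms": ["windows"] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["compliantDevice"]
  }
}
```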
You have an Azure subscription that contains a resource group named RG1 and an Azure policy named Policy1. You need to assign Policy1 to RG1. How should you complete the script? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
(Image shows a drag-and-drop question with blanks for: 1. Get-AzPolicyDefinition 2. New-AzPolicyAssignment and placeholders for “Policy definition” and “Scope” )
The correct solution uses `Get-AzPolicyDefinition` to retrieve the policy definition and `New-AzPolicyAssignment` to assign it to the resource group. `Get-AzPolicyDefinition` returns the policy definition object, which is then passed to the `-PolicyDefinition` parameter of `New-AzPolicyAssignment`. The `-Scope` parameter of `New-AzPolicyAssignment` is set to the resource group’s ID, correctly targeting the assignment to RG1. The provided image of the suggested answer shows this solution.
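A minimal PowerShell sketch of the completed script (the policy and resource group names come from the scenario; the assignment name is an assumption, and it requires the Az module plus an authenticated session via `Connect-AzAccount`):

```powershell
# Retrieve the resource group to obtain its resource ID (the scope)
$rg = Get-AzResourceGroup -Name "RG1"

# Retrieve the policy definition by display name
# (the property path may vary by Az.Resources module version)
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Policy1" }

# Assign the policy definition at the resource group scope
New-AzPolicyAssignment -Name "Policy1-Assignment" `
    -PolicyDefinition $definition `
    -Scope $rg.ResourceId
```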
The discussion shows general agreement on the correct answer using these two commands. Several users confirm that the suggested answer is correct and cite Microsoft documentation supporting this approach. There is no significant disagreement within the discussion.
You have an Azure subscription that contains the virtual machines shown in the following table.
Which computers will support file integrity monitoring?
A. Computer2 only
B. Computer1 and Computer2 only
C. Computer2 and Computer3 only
D. Computer1, Computer2, and Computer3
D. Computer1, Computer2, and Computer3
The provided image (missing from this text-based response, but present in the original URL) shows a table of virtual machines with their respective operating systems. File integrity monitoring is a security feature that can generally be implemented on various operating systems, including Windows Server and Linux distributions (like the ones likely represented in the missing image). Therefore, all three computers (Computer1, Computer2, and Computer3) would likely support file integrity monitoring, assuming appropriate agents or software are installed. The question does not provide details ruling out support for any specific machine.
Why other options are incorrect: Options A, B, and C incorrectly limit the number of computers capable of supporting file integrity monitoring. The question implies that all machines would be capable, subject to proper configuration.
Note: The provided discussion only shows the suggested answer and a user selecting answer D. No conflicting opinions are presented within this limited discussion. A full analysis would require access to the image to verify OS types and thus definitively confirm answer D.
You have an Azure subscription that contains the virtual machines shown in the following table.
Subnet1 and Subnet2 have a network security group (NSG). The NSG has an outbound rule that has the following configurations:
• Port: Any
• Source: Any
• Priority: 100
• Action: Deny
• Protocol: Any
• Destination: Storage
The subscription contains a storage account named storage1.
You create a private endpoint named Private1 that has the following settings:
• Resource type: Microsoft.Storage/storageAccounts
• Resource: storage1
• Target sub-resource: blob
• Virtual network: VNet1
• Subnet: Subnet1
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
The correct answer is Yes, Yes, No.
- Statement 1 (From VM1, you can create a container in storage1?): Yes. VM1 is in Subnet1, where the private endpoint Private1 is deployed, so VM1 can reach storage1 over the private connection despite the NSG’s outbound deny rule.
- Statement 2 (From VM1 you can upload data to the blob storage of storage1?): Yes. VM1 is in Subnet1, where the private endpoint Private1 is configured. The private endpoint creates a private connection to storage1, bypassing the NSG’s blocking rule. Therefore, VM1 can access and upload data to storage1.
- Statement 3 (From VM2, you can upload data to the blob storage of storage1?): No. As explained above, the NSG blocks VM2’s access to storage1, and the private endpoint is not accessible from Subnet2.
Why other options are incorrect: The discussion shows disagreement on this question. The suggested answer treats the private endpoint as benefiting only Subnet1, where it is deployed, so the NSG’s outbound deny rule still blocks VM2 and an answer of “Yes, Yes, Yes” is rejected. Note, however, that some commenters dispute this reasoning: a private endpoint is normally reachable from any subnet in the virtual network, and the NSG rule’s Storage destination is a service tag that matches the storage account’s public endpoints rather than the private endpoint’s private IP. An answer of “No, Yes, No” is rejected for not accounting for the private endpoint granting VM1 access.
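For reference, an outbound rule matching the scenario’s configuration could be created with the Azure CLI like this (the resource group, NSG, and rule names are assumptions):

```shell
# Requires an authenticated Azure CLI session (az login)
az network nsg rule create \
  --resource-group rg-demo \
  --nsg-name nsg-subnet12 \
  --name DenyStorageOutbound \
  --direction Outbound \
  --priority 100 \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes Storage \
  --destination-port-ranges '*'
```

The Storage service tag resolves to the public IP ranges of the Azure Storage service, which is central to the debate about whether traffic to a private endpoint (a private IP) is affected by this rule.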
On Monday, you configure an email notification in Microsoft Defender for Cloud to notify a designated email address (obscured in the source) about alerts that have a severity level of Low, Medium, or High. On Tuesday, Microsoft Defender for Cloud generates the security alerts shown in the following table.
(Image shows a table of alerts with timestamps, severity (High, Medium, Low), and descriptions. The exact content isn’t provided but is crucial to answering the question.)
How many email notifications will the designated address receive on Tuesday? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
(Image shows a multiple choice answer area. The exact options aren’t provided but are implied by the suggested answer.)
(Image shows the suggested answer: 3 and 7. It indicates 3 emails for medium-severity and 7 for high-severity.)
The suggested answer, 3 and 7, is taken from the provided images, but its correctness depends on the alert table that is missing from this text. The discussion notes that Defender for Cloud throttles notifications to approximately 4 high-severity, 2 medium-severity, and 1 low-severity email per day. If the table contained enough alerts of each severity to reach those caps, the counts would be limited to 4 for high and 2 for medium rather than the suggested 3 and 7. The disagreement in the discussion reflects this uncertainty, since the alert data itself is unavailable.
Why other options are incorrect (speculative): Without the full table data shown in the missing image, it’s impossible to say definitively why other options would be incorrect. However, any answer exceeding the daily limits (4 high, 2 medium, 1 low) for a given severity would be incorrect due to the throttling mechanism described in the discussion, while answers well below the limits might indicate a miscount of alerts at each severity level.
Note: The provided text lacks the crucial information of the alert table. The answer relies on the interpretation of the discussion and the suggested answer, acknowledging the uncertainty introduced by the missing image data.