Practice Questions - Microsoft AZ-104 Flashcards

(494 cards)

1
Q

You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure?
A. a health probe
B. Floating IP (direct server return) to Enabled
C. Session persistence to Client IP and protocol
D. Protocol to UDP

A

The correct answer is C. Session persistence to Client IP and protocol.

Session persistence, also known as sticky sessions, ensures that subsequent requests from the same client are directed to the same server. By configuring session persistence to Client IP and protocol, the load balancer uses the client’s IP address and the transport protocol (TCP or UDP) to identify the client and direct all subsequent requests from that client to the same server. This maintains the user’s session state across multiple requests.

Why other options are incorrect:

  • A. a health probe: Health probes are used to monitor the health of the backend servers. They do not ensure session persistence.
  • B. Floating IP (direct server return) to Enabled: Floating IP addresses are used for high availability, allowing a single IP address to be associated with multiple servers. While this enhances availability, it doesn’t guarantee session persistence. A client may be directed to a different server on each request.
  • D. Protocol to UDP: UDP is a connectionless protocol. Using UDP would not provide the necessary statefulness required for session persistence. Web servers generally use TCP.

Note: The discussion section shows a high degree of repetition of this question in various Azure certification exam preparation materials. This suggests the question’s importance and frequency in the exam itself.
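As a hedged illustration, this setting corresponds to the load-balancing rule’s distribution mode in Azure CLI. A minimal sketch, assuming a resource group named RG1 and a rule named rule1 (neither is given in the question):

```bash
# Set the rule's distribution mode to "Client IP and protocol"
# (SourceIPProtocol) so requests from the same client IP using the
# same transport protocol always reach the same backend VM.
az network lb rule update \
  --resource-group RG1 \
  --lb-name LB1 \
  --name rule1 \
  --load-distribution SourceIPProtocol
```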

2
Q

You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure?
A. Session persistence to Client IP and protocol
B. Protocol to UDP
C. Session persistence to None
D. Floating IP (direct server return) to Disabled

A

A. Session persistence to Client IP and protocol

Explanation: To ensure that each visitor is serviced by the same web server for every request, you need to configure session persistence. Session persistence tracks client requests and directs subsequent requests from the same client to the same server. Using “Client IP and protocol” as the persistence method ensures that requests originating from the same IP address over the same transport protocol (for example, TCP) are always routed to the same server.

Why other options are incorrect:

  • B. Protocol to UDP: This option is irrelevant to session persistence. UDP is a connectionless protocol; using UDP would not ensure that requests from the same client are routed to the same server.
  • C. Session persistence to None: This would disable session persistence, resulting in requests being load balanced across all servers, defeating the requirement.
  • D. Floating IP (direct server return) to Disabled: Floating IPs are typically used for high availability, allowing the IP address to remain the same even if the underlying server changes. Disabling this feature is not relevant to ensuring a client always connects to the same server.

Note: The discussion section shows a significant number of comments expressing frustration and humor regarding this question, possibly indicating it’s a frequently encountered or challenging question in the context of the AZ-104 exam. This suggests there may be some ambiguity or difficulty in understanding the question or its implications.

3
Q

You plan to create an Azure Storage account named storage1 that will contain a file share named share1. You need to ensure that share1 can support SMB Multichannel. The solution must minimize costs. How should you configure storage?
A. Premium performance with locally-redundant storage (LRS)
B. Standard performance with zone-redundant storage (ZRS)
C. Premium performance with geo-redundant storage (GRS)
D. Standard performance with locally-redundant storage (LRS)


A

A. Premium performance with locally-redundant storage (LRS)

Explanation: The question specifies that the solution must minimize costs and support SMB Multichannel. According to the discussion and the linked Microsoft documentation, SMB Multichannel is supported only on premium file shares. Between LRS and ZRS (both available for premium file shares), LRS is the more cost-effective option because redundancy stays within a single data center, whereas ZRS replicates data across availability zones within a region at extra cost. Options B, C, and D are incorrect because they either use standard performance (which does not support SMB Multichannel) or a more expensive redundancy option (GRS) than necessary.

Why other options are incorrect:

  • B. Standard performance with zone-redundant storage (ZRS): Standard performance storage does not support SMB Multichannel.
  • C. Premium performance with geo-redundant storage (GRS): While this offers Premium performance and therefore supports SMB Multichannel, GRS is more expensive than LRS because it replicates data across geographical regions.
  • D. Standard performance with locally-redundant storage (LRS): This does not support SMB Multichannel because it uses standard, not premium, storage.

Note: The discussion shows some disagreement on whether ZRS would be acceptable, but the consensus and the provided Microsoft documentation support the selection of LRS for cost minimization while ensuring SMB Multichannel functionality.
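A minimal Azure CLI sketch of this configuration, assuming a resource group named RG1 (not given in the question):

```bash
# Premium file shares require the FileStorage account kind; Premium_LRS
# is the cheaper of the two premium redundancy options.
az storage account create \
  --resource-group RG1 \
  --name storage1 \
  --kind FileStorage \
  --sku Premium_LRS

# Enable SMB Multichannel on the account's file service.
az storage account file-service-properties update \
  --resource-group RG1 \
  --account-name storage1 \
  --enable-smb-multichannel true

# Create the file share.
az storage share-rm create \
  --resource-group RG1 \
  --storage-account storage1 \
  --name share1
```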

4
Q

You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. Subscription1 has a user named User1. User1 has the following roles:
• Reader
• Security Admin
• Security Reader
You need to ensure that User1 can assign the Reader role for VNet1 to other users.
What should you do?
A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1.
B. Assign User1 the User Access Administrator role for VNet1.
C. Remove User1 from the Security Reader and Reader roles for Subscription1.
D. Assign User1 the Contributor role for VNet1.

A

The correct answer is B. Assign User1 the User Access Administrator role for VNet1.

The User Access Administrator role allows a user to manage access to resources within a specific scope (in this case, VNet1). This includes assigning roles to other users. The other options do not grant this level of permission.

Why other options are incorrect:

  • A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1. The Contributor role for the resource group gives User1 broad permissions within the resource group, but not specifically the ability to assign roles to other users on VNet1. Removing the Security Reader role is irrelevant to the problem.
  • C. Remove User1 from the Security Reader and Reader roles for Subscription1. Removing these roles doesn’t grant User1 the necessary permissions to assign roles to others on VNet1.
  • D. Assign User1 the Contributor role for VNet1. While the Contributor role grants extensive permissions on VNet1, it does not explicitly include the ability to assign roles to other users. Only the User Access Administrator or Owner roles can do that.

Note: The discussion thread shows overwhelming agreement that option B is the correct answer.
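A minimal Azure CLI sketch of the assignment; the user principal name and subscription ID are placeholders, not values from the question:

```bash
# Grant User1 the User Access Administrator role scoped to VNet1 only,
# allowing role assignments (Microsoft.Authorization/roleAssignments/write)
# at that scope and nowhere else.
az role assignment create \
  --assignee "user1@contoso.com" \
  --role "User Access Administrator" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNet1"
```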

5
Q

You have a Windows 11 device named Device1 and an Azure subscription that contains the resources shown in the following table.

| Resource Group | Resource Name | Resource Type | Location | SKU |
|---|---|---|---|---|
| RG1 | VM1 | Virtual Machine | East US | Standard_B2s |
| RG1 | Bastion1 | Azure Bastion | East US | Basic |

Device1 has Azure PowerShell and Azure Command-Line Interface (CLI) installed. From Device1, you need to establish a Remote Desktop connection to VM1. Which three actions should you perform in sequence? Move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. The image showing the list of actions is omitted here; the suggested answer indicates that the correct actions relate to Azure Bastion configuration and connection.

A

The provided text describes a solution leveraging Azure Bastion to connect to VM1. The correct sequence of actions would be:

  1. Upgrade Azure Bastion SKU: The initial Azure Bastion (Bastion1) is using a Basic SKU, which doesn’t support the native client connection feature needed for direct RDP connection. It must be upgraded to a Standard SKU.
  2. Enable Native Client Support: After upgrading the SKU, native client support must be explicitly enabled within the Bastion1 configuration settings in the Azure portal.
  3. Connect via Azure CLI: Once the Bastion is correctly configured, the Azure CLI can be used to initiate the RDP connection to VM1. This would involve commands like az login, az account set --subscription "<subscription ID>" (to select the correct subscription), and then a command to establish the RDP connection through Bastion.

The exact RDP connection command via Azure CLI isn’t explicitly provided, but it’s implied as the final step in the process.

Why other options are incorrect: The discussion highlights that a basic Azure Bastion setup won’t allow a direct RDP connection using the native client. Other approaches, if presented as options, which don’t involve upgrading the Bastion SKU and enabling native client support, would be incorrect.

Note: The provided discussion and suggested answer imply a solution using Azure Bastion. Other methods to connect to a VM (e.g., using a public IP address and configuring inbound rules in the network security group) are not considered in this context. There is no explicit disagreement within the discussion itself, but the implication is that using the basic SKU of Azure Bastion is insufficient for this task.
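A hedged Azure CLI sketch of the three steps; the `az network bastion` commands come from the bastion CLI extension, and `--enable-tunneling` is the flag that corresponds to native client support:

```bash
# 1. Upgrade Bastion1 from the Basic to the Standard SKU.
az network bastion update \
  --resource-group RG1 \
  --name Bastion1 \
  --sku Standard

# 2. Enable native client (tunneling) support on the host.
az network bastion update \
  --resource-group RG1 \
  --name Bastion1 \
  --enable-tunneling true

# 3. From Device1, open an RDP session to VM1 through Bastion1.
az network bastion rdp \
  --resource-group RG1 \
  --name Bastion1 \
  --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1"
```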

6
Q

You have an Azure subscription that contains the resources shown in the following table. You need to assign User1 the Storage File Data SMB Share Contributor role for share1. What should you do first?
A. Enable identity-based data access for the file shares in storage1.
B. Modify the security profile for the file shares in storage1.
C. Select Default to Azure Active Directory authorization in the Azure portal for storage1.
D. Configure Access control (IAM) for share1.


A

D. Configure Access control (IAM) for share1.

To assign User1 the Storage File Data SMB Share Contributor role for share1, you must first configure the Access Control (IAM) for that specific share. IAM allows you to manage access to Azure resources, including file shares. Assigning the role directly through IAM is the primary method for granting granular permissions to specific users on a file share.

WHY OTHER OPTIONS ARE INCORRECT:

  • A. Enable identity-based data access for the file shares in storage1: While identity-based data access is important for certain scenarios, it’s not the first step. Enabling identity-based authentication is a broader account-level setting that may be necessary before assigning permissions via IAM but configuring IAM is the direct action required. The discussion highlights disagreement on this point, with some arguing for Option A.
  • B. Modify the security profile for the file shares in storage1: Modifying the security profile is a broader, less precise method than directly managing roles and permissions using IAM. IAM provides granular control over individual user access.
  • C. Select Default to Azure Active Directory authorization in the Azure portal for storage1: This sets a default authorization at the storage account level, not for specific file shares within it. It’s a global setting, not the necessary first step for assigning specific roles to a user on a specific share.

NOTE: The discussion section shows some disagreement on the correct answer, with a strong argument made for option A. The answer provided above reflects the suggested answer and a more common approach to managing access control in Azure. The provided context does not definitively resolve this discrepancy.
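A hedged sketch of the IAM step in Azure CLI; the share-level scope follows the documented `fileServices/default/fileshares` pattern, and the user, resource group, and subscription values are placeholders:

```bash
# Assign the role directly at the scope of share1 within storage1.
az role assignment create \
  --assignee "user1@contoso.com" \
  --role "Storage File Data SMB Share Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/storage1/fileServices/default/fileshares/share1"
```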

7
Q

You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data?
A. an Azure Cosmos DB database
B. Azure Blob Storage
C. Azure SQL Database
D. the Azure File Sync Storage Sync Service

A

The correct answer is B. Azure Blob Storage. Azure Import/Export service is designed to transfer large amounts of data to Azure Blob storage. The service uses physical disks shipped to and from Microsoft to facilitate this transfer. Other options are not suitable because they don’t directly support importing data via the Azure Import/Export service.

A, C, and D are incorrect because Azure Import/Export jobs specifically use Azure Blob storage as a destination. While data might eventually end up in Cosmos DB, SQL Database, or File Sync, it wouldn’t be the direct destination of the Import/Export job. The data would first need to be imported into Blob storage and then subsequently moved to these other services.

8
Q

You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data?
A. Azure Data Lake Store
B. Azure File Storage
C. Azure SQL Database
D. the Azure File Sync Storage Sync Service


A

B. Azure File Storage

Azure Import/Export service is designed to transfer large amounts of data to Azure Blob storage or Azure Files. Azure Files is a file-sharing service in the cloud that provides fully managed file shares accessible via the standard SMB protocol. Therefore, Azure File Storage is the correct destination for data imported via the Azure Import/Export job.

Why other options are incorrect:

  • A. Azure Data Lake Store: While Azure Data Lake Store can handle large datasets, it’s not a direct destination for Azure Import/Export jobs. Import/Export primarily targets blob storage and file shares.
  • C. Azure SQL Database: Azure SQL Database is a relational database service; it’s not designed for directly importing large datasets via the Import/Export service. Data would need to be processed and loaded differently.
  • D. the Azure File Sync Storage Sync Service: This service synchronizes files between on-premises servers and Azure File Storage. It’s not the direct destination for an Azure Import/Export job.

Note: The discussion highlights that this is a frequently repeated question on the exam. The high number of comments and upvotes indicate that many test-takers find this question important and challenging.

9
Q

You have an Azure subscription that contains a storage account named storage1. You plan to use conditions when assigning role-based access control (RBAC) roles to storage1. Which storage1 services support conditions when assigning roles?
A. containers only
B. file shares only
C. tables only
D. queues only
E. containers and queues only
F. file shares and tables only

A

E. containers and queues only

Explanation: Based on the provided text, containers (blob storage) and queues are the storage services that support conditions when assigning RBAC roles in Azure. The discussion highlights this as the correct answer.

Why other options are incorrect: Options A, B, C, D, and F are incorrect because they either only list one service that doesn’t support conditions or list a combination that includes a service that doesn’t. The discussion specifically points to containers and queues as the only services supporting conditions.

Note: The discussion shows some users found the AZ-104 exam challenging, but this doesn’t impact the technical answer to the question about RBAC conditions and Azure storage.

10
Q

You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use?
A. the Publish-AzVMDscConfiguration cmdlet
B. Azure Application Insights
C. a Desired State Configuration (DSC) extension
D. Azure AD Application Proxy

A

C. a Desired State Configuration (DSC) extension

A Desired State Configuration (DSC) extension is the best approach to ensure NGINX is consistently installed and configured across all VMs within an Azure virtual machine scale set. DSC allows you to define the desired state (in this case, NGINX being installed and running) and the extension will automatically manage the VMs to achieve that state.

Why other options are incorrect:

  • A. the Publish-AzVMDscConfiguration cmdlet: This cmdlet is used to generate a DSC configuration file, but it doesn’t automatically deploy or manage the configuration on the VMs. It’s a step in the process, but not the solution itself.
  • B. Azure Application Insights: This is a monitoring service, not a configuration management tool. It would help you monitor the health of NGINX after deployment, but it won’t install or configure it.
  • D. Azure AD Application Proxy: This service enables external access to on-premises applications, completely unrelated to installing software on VMs.

Note: The discussion mentions that Azure Custom Script Extension is also a viable option. While true, given the options provided in the question, the DSC extension is the more accurate and directly applicable answer. The discussion highlights a potential discrepancy or ambiguity in the original question’s context.
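As a hedged illustration of the same idea applied with Azure CLI rather than inside the ARM template, the DSC extension can be attached to an existing scale set; the scale set name, package URL, script, and function names below are assumptions:

```bash
# Attach the PowerShell DSC extension to the scale set. The settings
# point at a published DSC configuration package that installs NGINX.
az vmss extension set \
  --resource-group RG1 \
  --vmss-name vmss1 \
  --name DSC \
  --publisher Microsoft.Powershell \
  --settings '{
    "configuration": {
      "url": "https://example.blob.core.windows.net/dsc/InstallNginx.zip",
      "script": "InstallNginx.ps1",
      "function": "InstallNginx"
    }
  }'
```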

11
Q

You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. Subscription1 has a user named User1. User1 has the following roles:
• Reader
• Security Admin
• Security Reader
You need to ensure that User1 can assign the Reader role for VNet1 to other users.
What should you do?
A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1.
B. Assign User1 the Access Administrator role for VNet1.
C. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1.
D. Assign User1 the Network Contributor role for RG1.

A

The correct answer is B. Assign User1 the Access Administrator role for VNet1.

The Access Administrator role provides the necessary permissions to assign roles at the resource level. Specifically, it grants the “Microsoft.Authorization/roleAssignments/write” permission, allowing User1 to add and remove role assignments for VNet1, including assigning the Reader role to other users. This is a least privilege approach, granting only the required permissions.

Why other options are incorrect:

  • A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1: Removing the Security Reader role doesn’t grant the ability to assign roles. The Contributor role at the Resource Group level allows management of resources within RG1, but not the assignment of roles at the VNet1 level.
  • C. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1: Removing the existing roles is unnecessary and reduces User1’s access. Assigning the Contributor role at the subscription level grants excessive permissions beyond what’s needed.
  • D. Assign User1 the Network Contributor role for RG1: The Network Contributor role allows management of network resources (subnets, load balancers, etc.), but it doesn’t grant permissions to assign roles.

Note: While the discussion suggests option B is the correct answer, there is a minor disagreement on whether it should be “User” Access Administrator or simply Access Administrator. The provided answer reflects the most widely accepted interpretation from the discussion.

12
Q

You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure?
A. Floating IP (direct server return) to Enabled
B. Session persistence to Client IP
C. Protocol to UDP
D. Idle Time-out (minutes) to 20

A

The correct answer is B. Session persistence to Client IP.

Session persistence, specifically using the client IP address, ensures that requests from the same client are always directed to the same server. This is crucial for maintaining session state, such as shopping carts or user logins, across multiple requests. Other load balancing methods might distribute requests across different servers, leading to session inconsistencies.

Why other options are incorrect:

  • A. Floating IP (direct server return) to Enabled: While floating IPs allow for server failover, they don’t inherently guarantee that a client will always be directed to the same server.
  • C. Protocol to UDP: Changing the protocol to UDP is irrelevant to maintaining session consistency. UDP is a connectionless protocol, unsuitable for applications requiring persistent connections and state management.
  • D. Idle Time-out (minutes) to 20: This setting manages the duration a connection remains active before being terminated by the load balancer. It doesn’t address the core issue of consistently routing requests from the same client to the same server.

Note: The discussion section reveals that this question appears frequently in practice exams, leading to many comments expressing frustration and familiarity with it. There is no disagreement on the correct answer within the provided text.

13
Q

You have three Azure subscriptions named Sub1, Sub2, and Sub3 that are linked to an Azure AD tenant. The tenant contains a user named User1, a security group named Group1, and a management group named MG1. User1 is a member of Group1. Sub1 and Sub2 are members of MG1. Sub1 contains a resource group named RG1. RG1 contains five Azure functions. You create the following role assignments for MG1:
• Group1: Reader
• User1: User Access Administrator
You assign User1 the Virtual Machine Contributor role for Sub1 and Sub2. Can User1 perform the following actions? (Yes/No/Not Applicable)

  • Read data from Azure functions in RG1 in Sub1.
  • Create a new Azure function in RG1 in Sub1.
  • Create a new virtual machine in Sub3.
A

YNN

  • Read data from Azure functions in RG1 in Sub1 (YES): User1 is a member of Group1, which has the Reader role assigned at the MG1 level. Sub1 is within MG1, and RG1 (containing the Azure functions) is within Sub1. The Reader role allows read access, so User1 can read data from the Azure functions. (The Virtual Machine Contributor role User1 holds on Sub1 is scoped to virtual machine management and does not add read access to Functions.)
  • Create a new Azure function in RG1 in Sub1 (NO): While User1 has the Virtual Machine Contributor role for Sub1, this role does not grant permission to create Azure Functions. The Reader role assigned at the MG1 level for Group1 only allows read access. Creating a new resource requires at least Contributor role permissions at the resource group or subscription level.
  • Create a new virtual machine in Sub3 (NO): User1 only has Virtual Machine Contributor role assigned to Sub1 and Sub2. Sub3 is not included in this assignment, so User1 does not have the necessary permissions to create a virtual machine in Sub3. The User Access Administrator role assigned at the management group level is not sufficient for this action.

Note: The provided discussion shows a user suggesting a different answer (YYN). This highlights a potential area of ambiguity or differing interpretations of the role-based access control (RBAC) model in Azure. The above answer provides a reasoned explanation based on the standard understanding of Azure RBAC roles.

14
Q


You have an Azure subscription that contains the resource groups shown in the following table.
(Image shows a table of Resource Groups, Locations, and Subscriptions)

The subscription contains the virtual networks shown in the following table.
(Image shows a table of Virtual Networks, Addresses, Locations, and Resource Groups)

You plan to deploy the Azure Kubernetes Service (AKS) clusters shown in the following table.
(Image shows a table of AKS Clusters, Locations, Resource Groups, and Network Plugins)

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
(Image shows three statements:
1. You can deploy AKS cluster1.
2. You can deploy AKS cluster2.
3. You can deploy AKS cluster3.)


A

N Y Y

  • Statement 1 (AKS cluster1): No. AKS cluster1 is planned for the East US location, but the only available VNet in that location (VNet1) is in the RG1 resource group, which is a different resource group than the planned one for AKS cluster1. AKS requires the VNet and the AKS cluster to be in the same Resource Group, unless a different network configuration is used that is not stated in the question.
  • Statement 2 (AKS cluster2): Yes. AKS cluster2 is planned to use Azure CNI and is in the same location (West US) as VNet2, and its associated resource group (RG2). The provided information does not indicate any pod number limitations.
  • Statement 3 (AKS cluster3): Yes. AKS cluster3 is planned to use kubenet and an existing VNet (VNet3). The documentation referenced in the discussion confirms that AKS supports using an existing VNet and route table with kubenet.

Why other options are incorrect: The answer derives directly from the constraints of AKS deployments as documented in Microsoft’s learning materials. Incorrect selections fail to satisfy the location and resource group matching requirements for AKS deployments with the specified network plugins (kubenet and Azure CNI). The discussion itself highlights the importance of location and resource group matching for successful AKS deployments. Note that there is some discussion on user-assigned vs. system-assigned control plane identities, but this is not relevant to answering the provided yes/no questions.
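A hedged Azure CLI sketch of deploying cluster2 under the constraints described (Azure CNI into the existing VNet2); the subnet name and subscription ID are assumptions:

```bash
# Deploy an AKS cluster with the Azure CNI plugin into an existing subnet.
az aks create \
  --resource-group RG2 \
  --name cluster2 \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/RG2/providers/Microsoft.Network/virtualNetworks/VNet2/subnets/default" \
  --generate-ssh-keys
```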

15
Q


HOTSPOT
You have an Azure subscription. You plan to deploy a new storage account. You need to configure encryption for the account. The solution must meet the following requirements:
• Use a customer-managed key stored in a key vault.
• Use the maximum supported bit length.
Which type of key and which bit length should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
(Image shows a selection box for “Key type” and “Bit Length”)
(Image shows the suggested answer: Key type = RSA, Bit Length = 4096)


A

The correct answer is Key type: RSA, Bit length: 4096. The provided Microsoft documentation and the majority of the discussion participants support this answer. RSA 4096-bit is the maximum supported bit length for customer-managed keys used for Azure Storage encryption.

Why other options are incorrect: The discussion shows some conflicting answers (e.g., 4069, 4098), but these are outliers and contradicted by the majority opinion and the suggested answer. Other key types might be supported for other encryption scenarios, but the question specifically states to use a customer-managed key in a key vault, for which RSA 4096 is the accepted maximum bit length.

Note: There is some disagreement amongst users regarding the exact bit length, with some suggesting 4069 and 4098. However, the consensus, supported by the suggested answer and several highly voted comments, points to 4096 as the correct bit length.
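A minimal Azure CLI sketch of creating such a key, assuming a key vault named kv1 (not given in the question):

```bash
# Create an RSA key at the 4096-bit maximum supported for storage
# customer-managed keys; the vault should have purge protection enabled.
az keyvault key create \
  --vault-name kv1 \
  --name storage-cmk \
  --kty RSA \
  --size 4096
```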

16
Q


You have an Azure subscription. You need to deploy a virtual machine by using an Azure Resource Manager (ARM) template. How should you complete the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

(The image shows a partially completed ARM template JSON snippet with blanks for “dependsOn” and “storageProfile” properties within the “virtualMachines” section.)

(The image shows the suggested answer which fills in the blanks with "dependsOn": "[resourceId('Microsoft.Network/networkInterfaces', 'myNic')]", and "imageReference": { "publisher": "MicrosoftWindowsServer", "offer": "WindowsServer", "sku": "2019-datacenter", "version": "latest" }. Note the typo in the first image’s suggested answer: “resoureceID” should be “resourceId”)


A

The correct answer requires completing the ARM template’s dependsOn and storageProfile properties for the virtual machine.

  • dependsOn: This property specifies that the virtual machine’s creation depends on the successful creation of a Network Interface, in this case, myNic. The value should be "[resourceId('Microsoft.Network/networkInterfaces', 'myNic')]". This ensures the network is ready before the VM is deployed, preventing deployment failures.
  • storageProfile: This property defines the storage configuration of the VM. It uses imageReference to specify the OS image. The provided example uses a Windows Server 2019 image from the Microsoft gallery: "publisher": "MicrosoftWindowsServer", "offer": "WindowsServer", "sku": "2019-datacenter", "version": "latest".

Therefore, the complete virtualMachines section should match the suggested answer shown in the second image.

Why other options are incorrect: There are no other options explicitly provided in the question. The discussion shows a consensus among users that the provided suggested answer is correct. However, any other configuration that did not define the dependencies correctly or did not specify a valid image would be incorrect. Note that the dependsOn value must correctly reference the resource ID of the network interface and not a placeholder, such as “resoureceID” as shown in the initial image.

Note: While there’s a general agreement among the discussion participants, the lack of alternative options makes definitive assessment of incorrect options challenging. The answer focuses on the key concepts based on the provided information.
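Assembled from the values quoted above, the relevant fragment of the virtual machine resource would look roughly like this (surrounding required properties omitted for brevity):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "dependsOn": [
    "[resourceId('Microsoft.Network/networkInterfaces', 'myNic')]"
  ],
  "properties": {
    "storageProfile": {
      "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2019-datacenter",
        "version": "latest"
      }
    }
  }
}
```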

17
Q


Based on the provided images showing Azure alerts, use the drop-down menus to select the answer choice that completes each statement. NOTE: Each correct selection is worth one point. The images show a hotspot question with two dropdowns. The first dropdown shows an alert with a status of “Closed”. The second dropdown shows an alert with a status of “New”.


A


  • Box 1 (Closed Alert): The status can be changed to either “New” or “Acknowledged”. Changing the user response from “Closed” to either “New” or “Acknowledged” reopens the alert.
  • Box 2 (New Alert): The status can be changed to either “Acknowledged” or “Closed”. A “New” alert can be acknowledged or closed depending on whether immediate action is required or if the issue is resolved.

The consensus among the discussion participants is that these are the correct answers. The provided Microsoft Learn link further supports this.

Why other options are incorrect: There are no other options explicitly provided in the question or discussion to assess as incorrect. The question focuses on the possible transitions from “Closed” and “New” alert states. Any other state transitions are not covered by this particular question.

18
Q

HOTSPOT - You have an Azure AD tenant named adatum.com that contains the groups shown in the following table. Adatum.com contains the users shown in the following table. You assign an Azure Active Directory Premium P2 license to Group1 as shown in the following exhibit. Group2 is NOT directly assigned a license. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

A

The correct answer is YNN.

  1. Y: It is possible to assign additional licenses to a user individually, even if they already receive a license through group assignment. The licenses will be cumulative. This is supported by the discussion, with multiple users reporting successful testing of this scenario.
  2. N: You cannot remove the P2 license from User1 without removing them from Group1 or removing the license assignment from Group1. The license is inherited through group membership; removing the license at the individual level is prevented when inherited. This is corroborated by user testing in the discussion.
  3. N: Nested group licensing is not supported. A license assigned to a parent group does not automatically extend to child groups. This is confirmed by multiple users in the discussion.

There is some disagreement in the discussion regarding the precise behavior of license assignment, particularly around testing scenarios and specific license types. However, the consensus among the users who have performed testing supports the YNN answer.

19
Q

You have an Azure subscription that has the public IP addresses shown in the following table.

You plan to deploy an Azure Bastion Basic SKU host named Bastion1. Which IP addresses can you use?
A. IP1 only
B. IP1 and IP2 only
C. IP3, IP4, and IP5 only
D. IP1, IP2, IP4, and IP5 only
E. IP1, IP2, IP3, IP4, and IP5

A

A. IP1 only

Explanation: Based on the provided discussion, only public IP addresses that are IPv4, Static, Standard SKU, and Regional can be used with an Azure Bastion host; Microsoft's documentation likewise requires a Standard SKU static public IP for Bastion. The user “eliasalg” performed tests and found that only this configuration worked; all other configurations resulted in errors. Therefore, only IP1 meets these criteria.

Why other options are incorrect:

  • B, C, D, E: These options include IP addresses that are Basic SKU, Global, Dynamic, or IPv6, all of which were shown to be incompatible with Azure Bastion based on the testing results in the discussion.

Note: The discussion shows some conflicting information and potential ambiguity around error messages. However, the testing results provided by “eliasalg” form the basis of this answer. There might be additional factors that could influence the selection based on Azure documentation which isn’t included.

20
Q

You have two Azure subscriptions named Sub1 and Sub2. Sub1 contains a virtual machine named VM1 and a storage account named storage1. VM1 is associated to the resources shown in the following table.

| Resource | Resource Type |
|---|---|
| Disk1 | OS Disk |
| NetInt1 | Network Interface |
| VNet1 | Virtual Network |
| storage1 | Storage Account |

You need to move VM1 to Sub2. Which resources should you move to Sub2?
A. VM1, Disk1, and NetInt1 only
B. VM1, Disk1, and VNet1 only
C. VM1, Disk1, and storage1 only
D. VM1, Disk1, NetInt1, and VNet1

A

D. VM1, Disk1, NetInt1, and VNet1

To successfully move a virtual machine (VM) to another Azure subscription, all its dependent resources must be moved as well. In this scenario, VM1 depends on its OS disk (Disk1), its network interface (NetInt1), and the virtual network it’s connected to (VNet1). The storage account (storage1) is not directly dependent on or associated with VM1; therefore, it does not need to be moved. Moving only some of the dependent resources will result in the VM not functioning correctly in the new subscription.

Why other options are incorrect:

  • A. VM1, Disk1, and NetInt1 only: This is incorrect because it omits VNet1, which is crucial for the VM’s network connectivity.
  • B. VM1, Disk1, and VNet1 only: This is incorrect because it omits NetInt1, which is the specific network interface attached to the VM.
  • C. VM1, Disk1, and storage1 only: This is incorrect because it omits both NetInt1 and VNet1, which are essential for network functionality, and storage1 is not a dependent resource.

Note: The provided discussion shows consensus on the correct answer.
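A hedged Azure CLI sketch of the move; the resource group name in Sub1, the destination group, and the subscription IDs are placeholders:

```bash
# Move VM1 and all of its dependent resources to Sub2 in one operation.
az resource move \
  --destination-subscription-id "<sub2-id>" \
  --destination-group RG1 \
  --ids \
    "/subscriptions/<sub1-id>/resourceGroups/RG1/providers/Microsoft.Compute/virtualMachines/VM1" \
    "/subscriptions/<sub1-id>/resourceGroups/RG1/providers/Microsoft.Compute/disks/Disk1" \
    "/subscriptions/<sub1-id>/resourceGroups/RG1/providers/Microsoft.Network/networkInterfaces/NetInt1" \
    "/subscriptions/<sub1-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNet1"
```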

21
Q

You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure?
A. Floating IP (direct server return) to Disabled
B. Floating IP (direct server return) to Enabled
C. a health probe
D. Session persistence to Client IP

A

D. Session persistence to Client IP

Explanation: To ensure that visitors are serviced by the same web server for each request, you need to configure session persistence. Session persistence (specifically, Client IP persistence in this context) on the Azure Load Balancer (LB1) will maintain the connection to the same backend server for a given client’s IP address throughout the session. This ensures that a client always connects to the same web server.

Why other options are incorrect:

  • A. Floating IP (direct server return) to Disabled: Disabling direct server return doesn’t guarantee session persistence. Direct server return is a different feature related to source IP address preservation, not session stickiness.
  • B. Floating IP (direct server return) to Enabled: Enabling direct server return also doesn’t directly ensure that a client always goes to the same server. While it preserves the source IP, it doesn’t inherently tie a client’s session to a specific server.
  • C. a health probe: Health probes ensure that only healthy virtual machines receive traffic. It doesn’t influence session persistence or which server a client connects to.

Note: The discussion section shows a high degree of familiarity and even humorous commentary from multiple users regarding this question, suggesting it may be a frequently seen or particularly memorable question from a training resource or practice exam. There is no disagreement on the correct answer, however.

22
Q

You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure?
A. Floating IP (direct server return) to Enabled
B. Idle Time-out (minutes) to 20
C. a health probe
D. Session persistence to Client IP

A

The correct answer is D. Session persistence to Client IP.

Session persistence, specifically using the client IP address, ensures that requests from the same client IP address are always directed to the same backend server (web server in this case). This maintains the user’s session across multiple requests, preventing the user from being redirected to a different server each time they make a request. This is crucial for maintaining stateful applications where the server needs to remember the user’s context.

Why other options are incorrect:

  • A. Floating IP (direct server return) to Enabled: Floating IPs provide a single IP address for multiple servers, but it doesn’t inherently guarantee that a client will always connect to the same server.
  • B. Idle Time-out (minutes) to 20: Idle timeout manages the duration a connection remains open before being closed. It doesn’t address the need to route requests from the same client to the same server.
  • C. a health probe: Health probes check the availability of backend servers, ensuring that the load balancer only directs traffic to healthy servers. It is important for high availability but does not address session persistence.

Note: The discussion section shows a strong consensus among users that the correct answer is D. However, there is some initial uncertainty and humorous commentary in the discussion.

23
Q

HOTSPOT -
You have an Azure subscription that has offices in the East US and West US Azure regions.
You plan to create the storage account shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.

A

To minimize network cost, select “Default routing tier to Internet Routing”. After creation, you can modify the “Encryption type”.

The suggested answer image supports this: it shows Internet routing as the optimal choice for minimizing network costs, and Encryption type as modifiable after account creation.

There is a disagreement in the discussion regarding the modifiability of the routing tier after account creation. MichaelMelb states it can be modified, while ozansenturk (citing ChatGPT) contends it cannot be changed after creating a Premium File Share. The provided suggested answer aligns with MichaelMelb’s assertion. Further research may be needed to definitively resolve this conflicting information.

24
Q

You have an Azure subscription that contains a Recovery Services vault named Vault1. You need to enable multi-user authorization (MUA) for Vault1. Which resource should you create first?
A. an administrative unit
B. a managed identity
C. a resource guard
D. a custom Azure role

A

C. a resource guard

Explanation: Based on the provided discussion and link to Microsoft Learn documentation, enabling multi-user authorization (MUA) for a Recovery Services vault requires creating a Resource Guard first. The Microsoft Learn documentation explicitly lists “Create a Resource Guard” as a step before enabling MUA. A Resource Guard provides an additional layer of authorization to protect critical resources like Recovery Services Vaults from unauthorized modifications. Therefore, it is the prerequisite resource to be created before enabling MUA.

Why other options are incorrect:

  • A. an administrative unit: Administrative units are used for organizational management within Azure Active Directory, not directly related to enabling MUA on a Recovery Services vault.
  • B. a managed identity: Managed identities are used for service principals to access other Azure resources, not directly involved in the initial setup of MUA.
  • D. a custom Azure role: While custom Azure roles can be used after MUA is enabled to fine-tune permissions, they are not the initial resource required to enable the feature itself.

Note: The discussion shows a consensus among users that the correct answer is C. There is no conflicting opinion presented.

25
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113069-exam-az-104-topic-5-question-133-discussion/)
Your on-premises network contains a VPN gateway. You have an Azure subscription that contains the resources shown in the following table.

| Resource | Type | Location |
|---|---|---|
| VM1 | Virtual machine | East US |
| Storage1 | Storage account | East US |
| VNet1 | Virtual Network | East US |
| Subnet1 | Subnet | East US |

You need to ensure that all the traffic from VM1 to storage1 travels across the Microsoft backbone network. What should you configure?
A. Azure Application Gateway
B. service endpoints
C. Azure AD Application Proxy
D. Azure Virtual WAN
A

B. service endpoints

Service endpoints provide secure and direct connectivity to Azure services over the Azure backbone network. By configuring a service endpoint for Storage1 within VNet1, traffic from VM1 (within VNet1) to Storage1 will use the VNet's private address space as its source and traverse the Microsoft backbone network, avoiding the public internet. This aligns directly with the requirement that all traffic between VM1 and Storage1 use the Microsoft backbone network.

Why other options are incorrect:

  • A. Azure Application Gateway: Application Gateways are primarily used for load balancing and web application security. They do not control the underlying network path between a VM and a storage account.
  • C. Azure AD Application Proxy: This service provides secure access to on-premises applications, not routing between Azure resources within the same region.
  • D. Azure Virtual WAN: While Virtual WAN provides connectivity between on-premises networks and Azure, it is overkill for this scenario. The question concerns traffic within Azure, between VM1 and Storage1 in the same region; a Virtual WAN adds unnecessary complexity.

Note: The discussion shows a consensus favoring option B as the correct answer.
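A minimal Azure CLI sketch of the fix, assuming Subnet1 lives in a resource group named RG1 (the resource group is not named in the question):

```bash
# Enable the Microsoft.Storage service endpoint on Subnet1 so traffic
# from VMs in the subnet to Azure Storage stays on the Microsoft
# backbone network instead of crossing the public internet.
az network vnet subnet update \
  --resource-group RG1 \
  --vnet-name VNet1 \
  --name Subnet1 \
  --service-endpoints Microsoft.Storage
```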
26
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113100-exam-az-104-topic-5-question-132-discussion/)
You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443.
Solution: You create an inbound security rule that allows any traffic from the AzureLoadBalancer source and has a priority of 150. Does this meet the goal?
A. Yes
B. No
A

B. No

The provided solution will not meet the goal. The discussion highlights that there is already a rule at priority 100 matching traffic from 131.107.100.50 on port 443, and a new rule at priority 150 will not override it: in an NSG, rules are evaluated in ascending priority order, so the lower number (100) takes precedence over 150. Furthermore, a key point raised in the discussion concerns the rule's source: some users argue the rule should reference the Azure Load Balancer rather than the client IP (131.107.100.50), while the existing rule uses the client IP as the source. Creating a rule with the correct source and a priority number low enough to override any conflicting rules is necessary.

The suggested answer A is incorrect because it assumes the proposed rule would take effect. There is disagreement amongst the users in the discussion about the correct solution and the reasons why: some believe the VM might be off, while others focus on the source IP address and the priority of the rules.
27
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113103-exam-az-104-topic-5-question-134-discussion/)
You create an Azure VM named VM1 that runs Windows Server 2019. VM1 is configured as shown in the exhibit. (The exhibit shows a screenshot of an Azure VM named VM1 whose status is "Stopped".) You need to enable Desired State Configuration for VM1. What should you do first?
A. Connect to VM1.
B. Start VM1.
C. Capture a snapshot of VM1.
D. Configure a DNS name for VM1.
A

The correct answer is B. Start VM1.

Desired State Configuration (DSC) requires the target virtual machine to be running and able to communicate with Azure to download the necessary configuration packages. The exhibit clearly shows VM1 in a stopped state, so starting the VM is the crucial first step before any other action related to enabling DSC can be performed.

Why other options are incorrect:

  • A. Connect to VM1: You cannot connect to a stopped virtual machine. The connection must be established after the VM is started.
  • C. Capture a snapshot of VM1: Taking a snapshot is unnecessary and does not address the core requirement of enabling DSC. It is a separate operation that can be done later, if needed.
  • D. Configure a DNS name for VM1: While DNS is important for network communication, it is not a prerequisite for enabling DSC on a VM. The VM can communicate with Azure to download configuration packages even without a DNS name configured.

Note: The discussion thread unanimously agrees on option B as the correct answer.
28
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113106-exam-az-104-topic-2-question-82-discussion/)
You have an Azure Active Directory (Azure AD) tenant named contoso.com. You have a CSV file that contains the names and email addresses of 500 external users. You need to create a guest user account in contoso.com for each of the 500 external users.
Solution: You create a PowerShell script that runs the New-MgUser cmdlet for each external user. Does this meet the goal?
A. Yes
B. No
A

B. No

The New-MgUser cmdlet creates new users within Azure AD, but it is not the appropriate method for adding external guest (B2B) users. To add external guest users, you should use the New-AzureADMSInvitation cmdlet (from the AzureAD module) or its Microsoft Graph equivalent, New-MgInvitation, which sends invitations. The proposed solution using New-MgUser would create standard users rather than guest users capable of accessing resources, potentially leading to access issues. The consensus among the discussion participants supports this answer. One user suggested a workaround, but it is not consistent with best practices for managing guest access.

Why other options are incorrect:

  • A. Yes: Incorrect. New-MgUser does not handle the invitation process required for guest users. It only creates standard users within the Azure AD tenant, so it does not fulfill the requirement of creating guest accounts.
29
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113107-exam-az-104-topic-2-question-81-discussion/)
You have a hybrid deployment of Azure Active Directory (Azure AD) that contains the users shown in the following table.
![Image](https://img.examtopics.com/az-104/image683.png)
You need to modify the JobTitle and UsageLocation attributes for the users. For which users can you modify the attributes from Azure AD? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
![Image](https://img.examtopics.com/az-104/image684.png)
A

Only User 1 (cloud only) can have their JobTitle modified in Azure AD. UsageLocation can be modified for all users (User 1, User 2, and User 3) in Azure AD.

Explanation: The discussion highlights a key distinction: users synced from an on-premises Active Directory (AD) to Azure AD (User 2 in this case) cannot have their JobTitle modified directly in Azure AD. This is because Azure AD Connect typically synchronizes the JobTitle attribute, making changes in the on-premises AD necessary. For User 3 (a guest user), there is some disagreement in the discussion: some users state that a guest user's JobTitle can be modified in Azure AD, but the most concise and upvoted answer states that only User 1's JobTitle can be modified from Azure AD. The UsageLocation attribute, however, can be modified for all users from Azure AD.

Why other options are incorrect: The discussion shows a consensus that User 2's JobTitle cannot be modified in Azure AD due to on-premises synchronization. There is disagreement about User 3 (the guest user); however, the most upvoted and arguably most accurate response states this user cannot have their JobTitle changed in Azure AD. Therefore, only User 1 meets the criteria for JobTitle modification directly within Azure AD.

Note: There is some conflicting information in the discussion regarding the modifiability of the JobTitle attribute for guest users (User 3). The answer provided reflects the most upvoted and logically consistent interpretation from the discussion.
30
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113129-exam-az-104-topic-3-question-74-discussion/)
You have an Azure Storage account named storage1. For storage1, you create an encryption scope named Scope1. Which storage types can you encrypt by using Scope1?
A. file shares only
B. containers only
C. file shares and containers only
D. containers and tables only
E. file shares, containers, and tables only
F. file shares, containers, tables, and queues
A

B. containers only

Explanation: Based on the discussion, encryption scopes in Azure Storage are used to manage encryption with keys scoped to a container or an individual blob. While the question mentions several storage types, the discussion strongly supports that only containers (and blobs within them) are currently supported. The Microsoft Learn documentation links further reinforce this.

Why other options are incorrect:

  • A, C, D, E, F: These options include file shares, tables, and/or queues. The discussion explicitly states that encryption scopes, at the time of the discussion, apply only to containers (and blobs) within Azure Blob Storage. There is no mention of support for file shares, tables, or queues.

Note: Although “C” is mentioned as a selected answer in the discussion, the overwhelming consensus and supporting documentation point to “B” as the correct answer. The disagreement highlights a potential ambiguity or outdated information in the “C” selection.
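A minimal Azure CLI sketch, assuming a resource group named RG1 and a container named container1 (neither is given in the question):

```bash
# Create an encryption scope backed by Microsoft-managed keys.
az storage account encryption-scope create \
  --resource-group RG1 \
  --account-name storage1 \
  --name Scope1

# Create a container whose blobs use Scope1 for encryption by default.
az storage container create \
  --account-name storage1 \
  --name container1 \
  --default-encryption-scope Scope1 \
  --prevent-encryption-scope-override true \
  --auth-mode login
```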
31
Q

[View Question](https://www.examtopics.com/discussions/databricks/view/113130-exam-az-104-topic-3-question-73-discussion/)
You have an Azure Storage account that contains 5,000 blobs accessed by multiple users. You need to ensure that the users can view only specific blobs based on blob index tags. What should you include in the solution?
A. a role assignment condition
B. a stored access policy
C. just-in-time (JIT) VM access
D. a shared access signature (SAS)
A

A. a role assignment condition

A role assignment condition allows fine-grained access control based on specific criteria, such as tags applied to Azure resources. In this scenario, a condition can be set to allow access only to blobs with specific index tags, which directly addresses the requirement of controlling blob access based on tags.

Why other options are incorrect:

  • B. a stored access policy: Stored access policies grant broad access to containers or blobs, not specific control based on tags.
  • C. just-in-time (JIT) VM access: JIT VM access is relevant to virtual machine access control, not to controlling access to blobs within a storage account.
  • D. a shared access signature (SAS): A SAS provides time-limited access to specific resources. While flexible, it does not directly support access control based on blob index tags.

Note: The provided discussion shows disagreement on the correct answer, with one user suggesting “A” and another suggesting “D”. The explanation above supports answer “A” as the more appropriate solution based on the described functionality of Azure role assignment conditions and the problem statement.
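A hedged Azure CLI sketch of such an assignment; the user, scope, and the Project=Cascade tag are illustrative only, and the condition string follows the documented ABAC format for blob index tags:

```bash
# Allow blob reads only when the blob carries the index tag
# Project=Cascade. The $key_case_sensitive$ markers are part of the
# documented condition syntax and are escaped for the shell.
az role assignment create \
  --assignee "user1@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Storage/storageAccounts/storage1" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))" \
  --condition-version "2.0"
```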
32
[View Question](https://www.examtopics.com/discussions/databricks/view/113169-exam-az-104-topic-4-question-92-discussion/) HOTSPOT You need to configure a new Azure App Service app named WebApp1. The solution must meet the following requirements: • WebApp1 must be able to verify a custom domain name of app.contoso.com. • WebApp1 must be able to automatically scale up to eight instances. • Costs and administrative effort must be minimized. Which pricing plan should you choose, and which type of record should you use to verify the domain? To answer, select the appropriate options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image686.png) *(Image shows a hotspot question with options for Pricing Tier and Record Type)* ![Image](https://img.examtopics.com/az-104/image687.png) *(Image shows the suggested answer: Pricing Tier - Standard; Record Type - CNAME or TXT)*
The correct pricing plan is **Standard**, and the correct record type for domain verification is **CNAME or TXT**. * **Pricing Plan:** The Standard tier offers auto-scaling (allowing scaling up to eight instances as required) and supports custom domains, fulfilling all requirements while minimizing costs compared to the Premium or Isolated tiers. The Basic tier doesn't support auto-scaling, and the Free/Shared tiers lack custom domain support. * **Record Type:** Azure verifies a subdomain such as app.contoso.com with either a CNAME (Canonical Name) record or a TXT (Text) record. An A record is not used to verify a subdomain; A records apply to apex domains, and even then Azure pairs them with a TXT verification record. **Why other options are incorrect:** * **Basic Tier:** Doesn't support auto-scaling to the required eight instances. * **Premium/Isolated Tiers:** While they meet the requirements, they are more expensive than the Standard tier, violating the cost-minimization requirement. * **A Record:** Not used to verify a subdomain such as app.contoso.com. There is no explicit disagreement in the provided discussion; the user insight clearly supports the suggested answer.
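As a rough sketch of the setup (names are placeholders), the DNS records live at the DNS provider and the hostname is then bound to the app; the default *.azurewebsites.net name should remain in the hostname list:

```powershell
# Minimal sketch. DNS records created at the DNS provider:
#   app.contoso.com         CNAME  webapp1.azurewebsites.net
#   asuid.app.contoso.com   TXT    <custom domain verification ID from the app>
Set-AzWebApp -ResourceGroupName "rg1" -Name "WebApp1" `
    -HostNames @("app.contoso.com", "webapp1.azurewebsites.net")
```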
33
[View Question](https://www.examtopics.com/discussions/databricks/view/113182-exam-az-104-topic-2-question-83-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com. You have a CSV file that contains the names and email addresses of 500 external users. You need to create a guest user account in contoso.com for each of the 500 external users. Solution: You create a PowerShell script that runs the New-MgInvitation cmdlet for each external user. Does this meet the goal? A. Yes B. No
A. Yes The `New-MgInvitation` cmdlet, part of the Microsoft Graph PowerShell module, sends invitations to external users. Upon acceptance of the invitation, a guest user account is created in the Azure AD tenant. A PowerShell script iterating through the CSV and executing this cmdlet for each entry will, therefore, create guest accounts for all 500 external users. **Why other options are incorrect:** B. No is incorrect because the described solution *will* achieve the goal, provided the invitations are accepted by the external users. While the process isn't instantaneous (it requires user action to redeem the invitation), the script itself successfully sends the invitations that lead to account creation. **Note:** One discussion point is that `New-MgInvitation` requires a redirect URL parameter. While technically true, its absence from the problem statement doesn't invalidate the approach: the question asks whether the *approach* meets the goal, not whether every parameter is spelled out. Omitting the redirect URL would cause an error or an incomplete redemption experience, but the approach of sending one invitation per CSV entry is what creates the guest accounts.
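A minimal sketch of such a script, assuming the CSV has Name and Email columns (the file name and redirect URL are placeholders):

```powershell
# Minimal sketch: send one invitation per row of the CSV file.
Connect-MgGraph -Scopes "User.Invite.All"
$externalUsers = Import-Csv -Path ".\externalusers.csv"
foreach ($user in $externalUsers) {
    New-MgInvitation -InvitedUserDisplayName $user.Name `
        -InvitedUserEmailAddress $user.Email `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage:$true
}
```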
34
[View Question](https://www.examtopics.com/discussions/databricks/view/113292-exam-az-104-topic-5-question-126-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Session persistence to Client IP and protocol B. Idle Time-out (minutes) to 20 C. Session persistence to None D. Floating IP (direct server return) to Enabled
A. Session persistence to Client IP and protocol Explanation: To ensure that each visitor is serviced by the same web server for every request, you need to configure session persistence. Session persistence on the load balancer ensures that incoming requests from the same client IP address and using the same protocol are always directed to the same backend server. Option A directly addresses this requirement. Why other options are incorrect: * **B. Idle Time-out (minutes) to 20:** This setting only defines how long a session remains active before being terminated. It doesn't guarantee that subsequent requests from the same client will go to the same server. * **C. Session persistence to None:** This would disable session persistence, resulting in requests being distributed randomly across the web servers. * **D. Floating IP (direct server return) to Enabled:** Floating IP addresses are associated with individual servers and allow for high availability. However, they don't inherently guarantee session persistence across multiple requests from the same client. Note: The discussion shows a consensus on option A as the correct answer. However, there are some comments that indicate the question's complexity or that it might be similar to other questions. This does not affect the validity of answer A.
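For reference, this setting corresponds to the SourceIPProtocol distribution mode on a load-balancing rule. A minimal sketch (the rule and resource names are placeholders):

```powershell
# Minimal sketch: switch an existing rule to "Client IP and protocol" affinity.
$lb = Get-AzLoadBalancer -ResourceGroupName "rg1" -Name "LB1"
Set-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name "webRule" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Protocol Tcp -FrontendPort 80 -BackendPort 80 `
    -LoadDistribution SourceIPProtocol
Set-AzLoadBalancer -LoadBalancer $lb   # commit the change
```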
35
[View Question](https://www.examtopics.com/discussions/databricks/view/113293-exam-az-104-topic-5-question-129-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. the Publish-AzVMDscConfiguration cmdlet B. a Microsoft Endpoint Manager device configuration profile C. Azure Application Insights D. a Desired State Configuration (DSC) extension
The correct answer is **D. a Desired State Configuration (DSC) extension**. A DSC extension allows you to configure the desired state of your VMs within the scale set, ensuring NGINX is installed and configured correctly on all VMs after deployment. This is a standard and robust method for managing configurations across multiple virtual machines. Why other options are incorrect: * **A. the Publish-AzVMDscConfiguration cmdlet:** While related to DSC, this cmdlet is used to *publish* a DSC configuration, not to deploy it to VMs within a scale set. The DSC extension itself is the mechanism for deployment and management. * **B. a Microsoft Endpoint Manager device configuration profile:** Endpoint Manager is a broader management tool, suitable for diverse configurations and devices, but not the most direct or efficient solution for ensuring NGINX is installed on all VMs within a VM scale set during deployment. * **C. Azure Application Insights:** This service is for application performance monitoring and not directly related to installing and managing software on VMs. **Note:** The discussion shows conflicting answers (A and D), indicating disagreement amongst users about the most appropriate solution. While a DSC extension (D) is generally considered the best approach for this scenario, the context of the specific exam question might provide additional nuances that could influence the choice. The provided link to Microsoft documentation supports the use of the DSC extension as well.
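A minimal sketch of attaching the DSC extension to a scale set model (the names, version, and packaged configuration URL are placeholder assumptions; the settings layout follows the v2 DSC extension schema):

```powershell
# Minimal sketch: add the DSC extension so each instance applies the NGINX configuration.
$vmss = Get-AzVmss -ResourceGroupName "rg1" -VMScaleSetName "vmss1"
$dscSettings = @{
    configuration = @{
        url      = "https://example.blob.core.windows.net/dsc/nginx.ps1.zip"
        script   = "nginx.ps1"
        function = "InstallNginx"
    }
}
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "DSC" `
    -Publisher "Microsoft.Powershell" -Type "DSC" `
    -TypeHandlerVersion "2.77" -Setting $dscSettings
Update-AzVmss -ResourceGroupName "rg1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss
```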
36
[View Question](https://www.examtopics.com/discussions/databricks/view/113375-exam-az-104-topic-5-question-128-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Session persistence to Client IP B. Idle Time-out (minutes) to 20 C. Session persistence to None D. Protocol to UDP
A. Session persistence to Client IP **Explanation:** To ensure that visitors are serviced by the same web server for each request, you need to configure session persistence. Session persistence tracks the client's IP address and directs subsequent requests from that IP address to the same server. Option A, "Session persistence to Client IP," directly addresses this requirement. **Why other options are incorrect:** * **B. Idle Time-out (minutes) to 20:** This setting manages how long a connection remains idle before being terminated. It doesn't guarantee that subsequent requests from the same client will go to the same server. * **C. Session persistence to None:** This explicitly disables session persistence, meaning requests will be distributed across the servers without any attempt to maintain client-server affinity. * **D. Protocol to UDP:** This changes the load balancer's protocol. While UDP might be used for some applications, it's not relevant to ensuring that a specific client always connects to the same server. Session persistence is a feature independent of the transport protocol (TCP or UDP). **Note:** The discussion section reveals that this question appears frequently in practice exams, leading to user comments ranging from amusement to frustration. This does not affect the correctness of the answer but highlights the question's prominence in exam preparation materials.
37
[View Question](https://www.examtopics.com/discussions/databricks/view/113376-exam-az-104-topic-5-question-130-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. Azure Custom Script Extension B. Deployment Center in Azure App Service C. the New-AzConfigurationAssignment cmdlet D. a Microsoft Endpoint Manager device configuration profile
A. Azure Custom Script Extension Explanation: Azure Custom Script Extension allows you to run custom scripts during the VM provisioning process. This is ideal for installing software like NGINX on all VMs in a scale set after deployment. The script would download and install NGINX. Why other options are incorrect: * **B. Deployment Center in Azure App Service:** This is used for deploying web applications, not for installing software onto VMs in a VM scale set. * **C. the New-AzConfigurationAssignment cmdlet:** This cmdlet is used for managing Azure Automation Desired State Configuration (DSC), which is a different approach to managing VM configurations. While it could *potentially* be used, it's not the most straightforward or efficient solution for this specific scenario. * **D. a Microsoft Endpoint Manager device configuration profile:** This is for managing configurations on devices, including Windows VMs, but it's a more general-purpose management tool and not the most direct way to install specific software like NGINX during deployment. It is better suited for ongoing management, rather than initial setup during deployment. Note: The discussion section contains mostly informal comments and does not offer alternative solutions or challenge the suggested answer.
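A minimal sketch of wiring the extension onto a scale set (the script URL and resource names are placeholders):

```powershell
# Minimal sketch: run an install script on every instance via the
# Custom Script Extension for Windows.
$vmss = Get-AzVmss -ResourceGroupName "rg1" -VMScaleSetName "vmss1"
$settings = @{
    fileUris         = @("https://example.blob.core.windows.net/scripts/install-nginx.ps1")
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -File install-nginx.ps1"
}
Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "InstallNginx" `
    -Publisher "Microsoft.Compute" -Type "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" -Setting $settings
Update-AzVmss -ResourceGroupName "rg1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss
```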
38
[View Question](https://www.examtopics.com/discussions/databricks/view/113682-exam-az-104-topic-5-question-127-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Floating IP (direct server return) to Disabled B. Idle Time-out (minutes) to 20 C. a health probe D. Session persistence to Client IP
D. Session persistence to Client IP Explanation: Session persistence, specifically using Client IP, ensures that requests from the same client IP address are always directed to the same backend server (web server in this case). This maintains session state and ensures a consistent experience for the user. Why other options are incorrect: * **A. Floating IP (direct server return) to Disabled:** Floating IPs are not directly related to maintaining session persistence. Disabling it wouldn't guarantee the same server for each request. * **B. Idle Time-out (minutes) to 20:** Idle timeout manages the connection time between the load balancer and a server. It doesn't guarantee that subsequent requests from the same client will hit the same server. * **C. a health probe:** Health probes check the health of backend servers. While important for load balancing, they don't address the requirement of maintaining session consistency. Note: The discussion section contains comments expressing frustration about duplicate questions on the examtopics website. This does not affect the technical correctness of the answer provided.
39
[View Question](https://www.examtopics.com/discussions/databricks/view/119532-exam-az-104-topic-3-question-77-discussion/) You have an Azure subscription that contains a virtual machine named VM1. To VM1, you plan to add a 1-TB data disk that meets the following requirements: • Provides data resiliency in the event of a datacenter outage. • Provides the lowest latency and the highest performance. • Ensures that no data loss occurs if a host fails. You need to recommend which type of storage and host caching to configure for the new data disk. What storage type and host caching should be used?
The correct answer is Premium SSD with Zone-Redundant Storage (ZRS) and read-only host caching. * **Storage Type:** Premium SSDs offer the lowest latency and highest performance among the disk types available with ZRS. ZRS provides data resiliency against datacenter outages by replicating data across availability zones within a region, satisfying the resiliency requirement. * **Host Caching:** Read-only caching improves read performance while sending every write directly to durable storage. Read/write caching, by contrast, holds writes in the host's cache, so a host failure could lose data that has not yet been flushed; read-only caching therefore meets the requirement that no data loss occurs if a host fails. The provided discussion indicates a highly-voted answer supporting this solution; however, there may be other valid approaches depending on specific performance and availability needs that aren't explicitly presented in the limited context.
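A minimal sketch of attaching such a disk (resource names are placeholders; Premium_ZRS is available only in regions with availability zones):

```powershell
# Minimal sketch: add a 1-TB Premium SSD ZRS data disk with read-only host caching.
$vm = Get-AzVM -ResourceGroupName "rg1" -Name "VM1"
$vm = Add-AzVMDataDisk -VM $vm -Name "VM1-data1" -Lun 0 `
    -CreateOption Empty -DiskSizeInGB 1024 `
    -StorageAccountType Premium_ZRS -Caching ReadOnly
Update-AzVM -ResourceGroupName "rg1" -VM $vm
```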
40
[View Question](https://www.examtopics.com/discussions/databricks/view/119534-exam-az-104-topic-3-question-78-discussion/) You have an Azure virtual machine named VM1 and an Azure key vault named Vault1. On VM1, you plan to configure Azure Disk Encryption to use a key encryption key (KEK). You need to prepare Vault1 for Azure Disk Encryption. Which two actions should you perform on Vault1? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Select Azure Virtual machines for deployment. B. Create a new key. C. Create a new secret. D. Configure a key rotation policy. E. Select Azure Disk Encryption for volume encryption.
The correct answer is B and E. To prepare Vault1 for Azure Disk Encryption using a KEK, you must: * **B. Create a new key:** This key will serve as the KEK. Azure Disk Encryption relies on BitLocker (for Windows VMs), which requires a key to encrypt the data disk. The KEK encrypts the BEK (BitLocker Encryption Key). * **E. Select Azure Disk Encryption for volume encryption:** This access setting allows the Azure platform to retrieve the secrets and keys it needs from the vault when encrypting VM disks. **Why other options are incorrect:** * **A. Select Azure Virtual machines for deployment:** This setting lets VMs retrieve certificates stored as secrets during deployment; it is not required for Azure Disk Encryption itself. * **C. Create a new secret:** Secrets are different from keys in Azure Key Vault. A KEK is a cryptographic key, not a secret that might hold other kinds of information. * **D. Configure a key rotation policy:** Although key rotation is a security best practice, it is not strictly *required* for Azure Disk Encryption to function initially. While the user `rikininetysix` argues for this option, the user `iamchoy` presents the more widely accepted answer focusing on the two core setup steps, B and E. **Note:** There is a disagreement in the provided discussion regarding the necessity of option D. The suggested answer and the more highly upvoted response favor B and E.
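The two actions map to two cmdlets; a minimal sketch (the key name is a placeholder):

```powershell
# Minimal sketch: create the KEK (action B) and enable the vault for
# Azure Disk Encryption (action E).
Add-AzKeyVaultKey -VaultName "Vault1" -Name "ADE-KEK" -Destination "Software"
Set-AzKeyVaultAccessPolicy -VaultName "Vault1" -EnabledForDiskEncryption
```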
41
[View Question](https://www.examtopics.com/discussions/databricks/view/119547-exam-az-104-topic-6-question-51-discussion/) You have an Azure subscription. The subscription contains virtual machines that connect to a virtual network named VNet1. You plan to configure Azure Monitor for VM Insights. You need to ensure that all the virtual machines only communicate with Azure Monitor through VNet1. What should you create first? A. a data collection rule (DCR) B. a Log Analytics workspace C. an Azure Monitor Private Link Scope (AMPLS) D. a private endpoint
C. an Azure Monitor Private Link Scope (AMPLS) The correct answer is C because an Azure Monitor Private Link Scope (AMPLS) must be created before a private endpoint. The AMPLS defines the scope of Azure Monitor resources accessible via private endpoint. Creating the scope is the foundational step to securing communication. A private endpoint then connects VNet1 to the specified Azure Monitor resources within that scope, ensuring all communication remains within VNet1 and avoids the public internet. Why other options are incorrect: * **A. a data collection rule (DCR):** DCRs define *what* data is collected, not *how* it's collected securely. Security via Private Link is addressed prior to configuring data collection. * **B. a Log Analytics workspace:** While necessary for storing the collected data, setting up the secure communication channel (via Private Link) must precede creating the data destination. * **D. a private endpoint:** A private endpoint requires a pre-existing AMPLS to define which resources it connects to. You cannot create a private endpoint without first specifying its target using an AMPLS. Note: The discussion shows consensus on the correct answer.
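A rough sketch of the ordering, assuming the `New-AzInsightsPrivateLinkScope` cmdlet from Az.Monitor and standard Az.Network cmdlets (all names are placeholders):

```powershell
# Minimal sketch: the AMPLS must exist before a private endpoint can target it.
$ampls = New-AzInsightsPrivateLinkScope -ResourceGroupName "rg1" -Name "ampls1" -Location "global"
$vnet  = Get-AzVirtualNetwork -ResourceGroupName "rg1" -Name "VNet1"
$conn  = New-AzPrivateLinkServiceConnection -Name "ampls-conn" `
    -PrivateLinkServiceId $ampls.Id -GroupId "azuremonitor"
New-AzPrivateEndpoint -ResourceGroupName "rg1" -Name "pe-monitor" -Location "eastus" `
    -Subnet $vnet.Subnets[0] -PrivateLinkServiceConnection $conn
```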
42
[View Question](https://www.examtopics.com/discussions/databricks/view/119556-exam-az-104-topic-6-question-52-discussion/) You have an Azure subscription that contains the vaults shown in the following table. ![Image](https://img.examtopics.com/az-104/image760.png) *(Image shows a table with Vault Name, Resource Type, and Region. Backup1 is listed as Azure Backup, Recovery1 is listed as Recovery Services)* You create a storage account that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image761.png) *(Image shows a table with Resource Name and Resource Type. cont1 is listed as Blob Container, share1 is listed as Azure Files)* To which vault can you back up cont1 and share1? To answer, select the appropriate options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image762.png) *(Image shows a multiple choice question with options to select Backup1, Recovery1, and other options)*
cont1 (Blob container) can be backed up to Backup1 (Azure Backup vault), and share1 (Azure Files) can be backed up to Recovery1 (Recovery Services vault). Therefore, the correct answer is both Backup1 and Recovery1. **Explanation:** The discussion clearly indicates that blob containers (like cont1) are backed up to Azure Backup vaults, and Azure Files (like share1) are backed up to Azure Recovery Services vaults. Multiple users and links in the discussion support this conclusion. **Why other options are incorrect:** The question requires selecting *both* vaults because different resource types require different vault types for backup. Selecting only one would be incomplete. There is no information provided in the question or discussion to suggest other vaults are appropriate. **Note on Disagreement:** While the discussion shows multiple users arriving at the same conclusion, there is no evidence of significant disagreement regarding the correct answer. All highly voted answers point to the same solution.
43
[View Question](https://www.examtopics.com/discussions/databricks/view/119575-exam-az-104-topic-3-question-80-discussion/) You have an Azure subscription that contains a storage account named storage1. You need to configure a shared access signature (SAS) to ensure that users can only download blobs securely by name. Which two settings should you configure? To answer, select the appropriate settings in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image711.png) (Image shows a Hotspot question with options for "Allowed services", "Allowed resource types", "Allowed permissions", "Protocol", "IP address or range", "Start time", "Expiry time", "Shared access policy")
The two settings required are: * **Allowed resource types:** Object * **Allowed permissions:** Read **Explanation:** To allow users to download blobs *by name*, you must specify the "Object" type under "Allowed resource types". This limits access to individual blobs, rather than the entire container ("Service" option). The "Read" permission under "Allowed permissions" grants the necessary access to download the blob. **Why other options are incorrect:** The other options are not directly relevant to restricting access to only downloading blobs by name. While things like setting specific start and expiry times and potentially IP restrictions improve security, they are not the core requirements for the question's described scenario. "Allowed services" to "Blob" is implied to be pre-selected in the scenario, as it would be nonsensical to create a SAS that allowed actions on anything other than blobs in this context. **Note:** The discussion shows some minor variations in how the problem is approached, but the core solution remains consistent amongst the highly-voted and accepted answers.
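A minimal sketch of generating such an account SAS (resource names are placeholders; an account SAS is signed with the account key):

```powershell
# Minimal sketch: account SAS limited to reading individual blobs over HTTPS.
$key = (Get-AzStorageAccountKey -ResourceGroupName "rg1" -Name "storage1")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "storage1" -StorageAccountKey $key
New-AzStorageAccountSASToken -Service Blob -ResourceType Object `
    -Permission "r" -Protocol HttpsOnly `
    -ExpiryTime (Get-Date).AddHours(1) -Context $ctx
```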
44
[View Question](https://www.examtopics.com/discussions/databricks/view/119584-exam-az-104-topic-2-question-90-discussion/) You have an Azure AD tenant that contains the groups shown in the following table.

| Group Name | Group Type | Security Enabled |
|---|---|---|
| Group1 | Security group | Yes |
| Group2 | Mail-enabled security group | Yes |
| Group3 | Microsoft 365 group | Yes |
| Group4 | Microsoft 365 group | No |

You purchase Azure Active Directory Premium P2 licenses. To which groups can you assign a license? A. Group1 only B. Group1 and Group3 only C. Group3 and Group4 only D. Group1, Group2, and Group3 only E. Group1, Group2, Group3, and Group4
B. Group1 and Group3 only. Azure AD licenses can be assigned to groups to assign licenses to the members of those groups. Licenses can be assigned to security groups and Microsoft 365 groups, but not to mail-enabled security groups or groups that are not security-enabled. * **Group1:** This is a security group and is security-enabled; therefore, licenses can be assigned. * **Group2:** This is a mail-enabled security group. While the initial discussion suggests licenses cannot be assigned to these, a later comment contradicts this, stating that licenses *can* be assigned. There is disagreement on this point in the discussion. * **Group3:** This is a Microsoft 365 group and is security-enabled; therefore, licenses can be assigned. * **Group4:** This is a Microsoft 365 group but is *not* security-enabled; therefore, licenses cannot be assigned. Therefore, only Group1 and Group3 meet the criteria for license assignment. WHY OTHER OPTIONS ARE INCORRECT: * **A:** Omits Group3, which is assignable. * **C:** Includes Group4, which is not assignable. * **D:** Includes Group2, which has conflicting information regarding assignability in the discussion. * **E:** Includes both Group2 and Group4, both of which have issues regarding assignability.
45
[View Question](https://www.examtopics.com/discussions/databricks/view/119592-exam-az-104-topic-2-question-88-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image695.png) You need to assign Workspace1 a role to allow read, write, and delete operations for the data stored in the containers of storage1. Which role should you assign? A. Storage Account Contributor B. Contributor C. Storage Blob Data Contributor D. Reader and Data Access
The correct answer is **C. Storage Blob Data Contributor**. This role specifically grants the permissions to read, write, and delete data within Azure Storage containers and blobs. The question requires permissions limited to the data within the containers, not the management of the storage account itself. Why other options are incorrect: * **A. Storage Account Contributor:** This role provides overly broad permissions, allowing management of the entire storage account, not just the data within containers. It's not a precise fit for the requirement. * **B. Contributor:** This is a general role granting extensive permissions across many Azure resources. It's too broad and doesn't target the specific need for container data manipulation. * **D. Reader and Data Access:** This role lacks the necessary "delete" permission. Note: The discussion shows a strong consensus among users that option C is the correct answer.
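A minimal sketch of the assignment ($workspacePrincipalId and the scope identifiers are placeholders):

```powershell
# Minimal sketch: scope the data-plane role to the storage account.
New-AzRoleAssignment -ObjectId $workspacePrincipalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/storage1"
```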
46
[View Question](https://www.examtopics.com/discussions/databricks/view/119595-exam-az-104-topic-2-question-92-discussion/) HOTSPOT You have an Azure AD tenant. You need to modify the Default user role permissions settings for the tenant. The solution must meet the following requirements: • Standard users must be prevented from creating new service principals. • Standard users must only be able to use PowerShell or Microsoft Graph to manage their own Azure resources. Which two settings should you modify? To answer, select the appropriate settings in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image699.png)
The two settings to modify are "Register applications" and "Restrict access to Azure AD administration portal". * **Register applications:** Setting this to "No" prevents standard users from creating new service principals, fulfilling the first requirement. * **Restrict access to Azure AD administration portal:** Setting this to "Yes" prevents standard users from using the Azure portal to manage resources. However, it's crucial to understand this *does not* restrict access via PowerShell or Microsoft Graph API, as noted in the discussion. This aligns with the second requirement, allowing standard users to manage their own resources only through these methods. Other Options (Why they are incorrect): The question is a hotspot question, and only the two correct answers are selected from the image. The other options within the provided image are not relevant to fulfilling the stated requirements. The provided discussion only gives details on why the chosen answers are correct; it does not provide information about other potential choices. Note: The discussion highlights some ambiguity around the "Restrict access to Azure AD administration portal" setting. While setting it to "Yes" prevents portal access, it explicitly states that PowerShell and Graph API access remains unaffected. This should be considered when answering the question.
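For the first setting, a rough Microsoft Graph PowerShell sketch (assuming the authorization policy exposes `AllowedToCreateApps`; the portal-restriction toggle is a separate Azure AD portal setting and is not shown here):

```powershell
# Minimal sketch: block standard users from registering applications
# (and therefore from creating the associated service principals).
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
    AllowedToCreateApps = $false
}
```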
47
[View Question](https://www.examtopics.com/discussions/databricks/view/119599-exam-az-104-topic-2-question-93-discussion/) You have an Azure subscription named Sub1 that contains the blob containers shown in the following table. ![Image](https://img.examtopics.com/az-104/image701.png) Sub1 contains two users named User1 and User2. Both users are assigned the Reader role at the Sub1 scope. You have a condition named Condition1 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image702.png) You have a condition named Condition2 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image703.png) You assign roles to User1 and User2 as shown in the following table. ![Image](https://img.examtopics.com/az-104/image704.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image705.png)
NNY (No, No, Yes) * **Statement 1: User1 can read Blob2 - No.** User1 only has Reader access at the subscription level, and Condition1 restricts read access to "cont1" only. Blob2 is in "cont2", so User1 cannot read it. * **Statement 2: User1 can read Blob3 - No.** As with statement 1, Condition1 restricts User1's read access to "cont1", and Blob3 resides in "cont2". * **Statement 3: User2 can read Blob1 - Yes.** User2 is assigned the "Storage Blob Data Owner" role on "storage1", which contains "Blob1". Role assignments are additive and a condition applies only to the assignment it is attached to, so this unconditioned Data Owner assignment grants full data access regardless of the subscription-level Reader role and Condition1. **Why other options are incorrect:** The provided answer is derived by interpreting the conditions and user permissions. Any other answer would incorrectly evaluate the impact of the defined conditions and assigned roles on user access to the specified blobs. There is a lack of explicit definition for Condition1 and Condition2 within the images. The user explanation provided in the discussion section interprets these ambiguous conditions. The interpretation provided may be subject to debate without further clarification regarding Condition1 and Condition2.
48
[View Question](https://www.examtopics.com/discussions/databricks/view/119601-exam-az-104-topic-5-question-141-discussion/) You have the Azure virtual machines shown in the following table.

| VM Name | VNET | DNS Servers |
|---|---|---|
| VM1 | VNET1 | Azure-provided DNS |
| VM2 | VNET2 | 192.168.0.5 |
| VM3 | VNET3 | 192.168.0.5 |
| VM4 | VNET2 | - |

VNET1, VNET2, and VNET3 are peered. VM4 has a DNS server that is authoritative for a zone named contoso.com and contains the following records:

| Name | Type | Value |
|---|---|---|
| server1 | A | 131.107.2.3 |
| server2 | A | 131.107.2.4 |

For each of the following statements, select Yes if the statement is true. Otherwise, select No. Statement 1: From VM1, server1.contoso.com resolves to 131.107.2.3. Statement 2: From VM2, server1.contoso.com resolves to 131.107.2.3. Statement 3: From VM3, server2.contoso.com resolves to 131.107.2.4.
No, Yes, No * **Statement 1: No.** VM1 uses the Azure-provided DNS, which has no knowledge of the records hosted on VM4's DNS server. It therefore will not resolve server1.contoso.com to the address held on VM4. * **Statement 2: Yes.** VM2 uses a custom DNS server (192.168.0.5), which is VM4's DNS server. Since VM4's DNS server is authoritative for contoso.com and holds an A record mapping server1.contoso.com to 131.107.2.3, the resolution succeeds. * **Statement 3: No.** VM3 also uses the custom DNS server (192.168.0.5). The answer key marks this statement No, which implies that in the original exhibit the record for server2.contoso.com held a different IP address from the one quoted in the statement. As transcribed here, the table value and the statement both read 131.107.2.4, so the question appears to contain a transcription discrepancy; the No answer follows the original exhibit. **Why other options are incorrect:** The provided discussion shows some initial uncertainty, but the highly voted response and detailed explanation correctly break down the DNS resolution process based on the information given in the tables. The other responses in the discussion thread either lack a thorough explanation or contain inaccuracies. Note the discrepancy in the question regarding the IP address for `server2.contoso.com`; the answer provided reflects the information available from the images and points out this discrepancy.
49
[View Question](https://www.examtopics.com/discussions/databricks/view/119607-exam-az-104-topic-5-question-145-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. a Microsoft Intune device configuration profile B. a Desired State Configuration (DSC) extension C. Azure Application Insights D. Deployment Center in Azure App Service
B. a Desired State Configuration (DSC) extension Explanation: A Desired State Configuration (DSC) extension is the most appropriate solution for ensuring NGINX is installed and configured consistently across multiple virtual machines in a VM scale set. DSC allows you to define the desired state of your system (in this case, having NGINX installed and configured) and then automatically configure the VMs to match that state. Why other options are incorrect: * **A. a Microsoft Intune device configuration profile:** Intune is primarily used for managing devices and applications in a mobile device management (MDM) context. It's not ideal for configuring server software on VMs within an Azure infrastructure. * **C. Azure Application Insights:** Application Insights is a monitoring service for applications, not a configuration management tool. * **D. Deployment Center in Azure App Service:** Deployment Center is for deploying web applications to Azure App Service, not for managing the configuration of VMs. Note: The discussion shows some disagreement, with one user suggesting Azure Custom Script Extension as an alternative correct answer. While a custom script *could* achieve this, DSC is generally considered a more robust and manageable solution for configuration management, particularly across multiple VMs. Therefore, while there's a suggestion of another valid approach, B (DSC) remains the more commonly accepted and best-practice answer based on the provided context.
50
[View Question](https://www.examtopics.com/discussions/databricks/view/119629-exam-az-104-topic-4-question-96-discussion/) You have an Azure App Service app named App1 that contains two running instances. You have an autoscale rule configured as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image724.png) For the Instance limits scale condition setting, you set Maximum to 5. During a 30-minute period, App1 uses 80 percent of the available memory. What is the maximum number of instances for App1 during the 30-minute period? A. 2 B. 3 C. 4 D. 5
D. 5 **Explanation:** App1 starts with 2 instances, and memory usage stays above the 70 percent threshold for the entire 30 minutes. The autoscale rule adds one instance when memory exceeds 70 percent over its evaluation window (15 minutes, per the exhibit as described in the discussion), and a 5-minute cooldown follows each scaling event. Because the metric's lookback window is re-evaluated as soon as each cooldown ends, and memory is still above the threshold, the scale-outs chain together: * **Minute 15:** the rule fires, scaling 2 → 3 instances, followed by a 5-minute cooldown. * **Minute 20:** the cooldown ends and the 15-minute lookback still shows memory above 70 percent, so the rule fires again, scaling 3 → 4 instances, followed by another cooldown. * **Minute 25:** the pattern repeats, scaling 4 → 5 instances, where the instance-limit maximum of 5 stops further scaling. Since the condition holds for the whole 30 minutes, App1 reaches the configured maximum of 5 instances. **Why other options are incorrect:** * **A. 2:** This is the initial number of instances; scaling occurs because of the sustained high memory usage. * **B. 3:** This accounts only for the first scaling event. * **C. 4:** This is the count after the second scaling event, but the sustained memory pressure triggers a third scale-out before the period ends. **Note:** The provided discussion shows some disagreement on the exact interpretation of the autoscale rule and the effect of the cooldown periods. The answer provided reflects a reasonable interpretation of the interplay between the scaling trigger, the cooldown, and the maximum instance limit.
51
[View Question](https://www.examtopics.com/discussions/databricks/view/119630-exam-az-104-topic-4-question-97-discussion/) You have an Azure subscription that contains the container images shown in the following table.

| Image Name | Operating System |
|---|---|
| myWebAppImage | Linux |
| myWebAppImage_Windows | Windows |
| myOtherWebAppImage | Linux |

You plan to use the following services: • Azure Container Instances • Azure Container Apps • Azure App Service In which services can you run the images? To answer, select the options in the answer area. NOTE: Each correct answer is worth one point.
The Linux images can run on all three services; the Windows image can run on Azure Container Instances and Azure App Service, but not on Azure Container Apps. * **Azure Container Instances:** Explicitly supports both Windows and Linux containers, so all three images can run here. The provided documentation confirms this capability. * **Azure Container Apps:** Supports any Linux-based x86-64 container image, so the Linux images ("myWebAppImage" and "myOtherWebAppImage") are compatible. As the discussion notes, it does *not* support Windows containers, so "myWebAppImage_Windows" cannot run here. * **Azure App Service:** Supports both Windows-based and Linux-based environments, so both the Linux and Windows container images can be deployed. The discussion shows overwhelming agreement on this breakdown.
52
[View Question](https://www.examtopics.com/discussions/databricks/view/119631-exam-az-104-topic-4-question-98-discussion/) You have an Azure AD tenant named contoso.com. You have an Azure subscription that contains an Azure App Service web app named App1 and an Azure key vault named KV1. KV1 contains a wildcard certificate for contoso.com. You have a user named [email protected] that is assigned the Owner role for App1 and KV1. You need to configure App1 to use the wildcard certificate of KV1. What should you do first? A. Create an access policy for KV1 and assign the Microsoft Azure App Service principal to the policy. B. Assign a managed user identity to App1. C. Configure KV1 to use the role-based access control (RBAC) authorization system. D. Create an access policy for KV1 and assign the policy to User1.
B. Assign a managed user identity to App1. To access the certificate in KV1, App1 needs an identity. A managed identity provides this. Once App1 has a managed identity, an access policy can be created in KV1 to grant the App1's identity the necessary permissions to access the certificate. The other options are incorrect because they don't address the fundamental need for App1 to have an identity to authenticate with KV1. Option A is incorrect because, while it's a necessary *later* step, App1 first needs an identity that the access policy can reference; a service principal is created automatically when a managed identity is assigned. Option C is incorrect because switching KV1 to the RBAC authorization model is not required for this scenario and, by itself, would not give App1 access; the app still needs an identity first. Option D is incorrect because assigning the policy to a user account (User1) doesn't allow the *application* (App1) to access the Key Vault. Note: There is some discussion around whether a service principal is automatically created when the App Service is registered. The consensus in the provided discussion is that assigning a managed identity is the most reliable and recommended first step.
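A minimal sketch of the sequence (resource names are placeholders):

```powershell
# Minimal sketch: enable a system-assigned identity on App1, then grant it
# access to certificates and secrets in KV1 via an access policy.
Set-AzWebApp -ResourceGroupName "rg1" -Name "App1" -AssignIdentity $true
$principalId = (Get-AzWebApp -ResourceGroupName "rg1" -Name "App1").Identity.PrincipalId
Set-AzKeyVaultAccessPolicy -VaultName "KV1" -ObjectId $principalId `
    -PermissionsToCertificates get -PermissionsToSecrets get
```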
53
[View Question](https://www.examtopics.com/discussions/databricks/view/119635-exam-az-104-topic-4-question-93-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image713.png) You create an Azure Compute Gallery named ComputeGallery1 as shown in the Azure Compute Gallery exhibit. (Click the Azure Compute Gallery tab.) ![Image](https://img.examtopics.com/az-104/image714.png) In ComputeGallery1, you create a virtual machine image definition named Image1 as shown in the image definition exhibit. (Click the Image Definition tab.) ![Image](https://img.examtopics.com/az-104/image715.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image716.png)
The provided images are necessary to answer this question and are missing from the prompt. The question asks to determine the truthfulness of multiple statements concerning Azure Compute Gallery, image definitions, and virtual machine creation based on the provided context and images. Without the images (which depict the specifics of the virtual machines, Compute Gallery, and Image Definition), a definitive answer cannot be given. The suggested answer image ([Image](https://img.examtopics.com/az-104/image922.png)) would provide the solution, but again is missing. The discussion highlights that several users made assumptions without validation, indicating potential ambiguity or complexity in interpreting the image-based information required to answer correctly.
54
[View Question](https://www.examtopics.com/discussions/databricks/view/119636-exam-az-104-topic-3-question-79-discussion/) You have an Azure subscription that contains a virtual machine named VM1 and an Azure key vault named KV1. You need to configure encryption for VM1. The solution must meet the following requirements: • Store and use the encryption key in KV1. • Maintain encryption if VM1 is downloaded from Azure. • Encrypt both the operating system disk and the data disks. Which encryption method should you use? A. customer-managed keys B. Confidential disk encryption C. Azure Disk Encryption D. encryption at host
C. Azure Disk Encryption Azure Disk Encryption is the correct answer because it allows you to encrypt both the operating system and data disks of a VM using keys stored in Azure Key Vault. This ensures that the encryption persists even if the VM is downloaded from Azure. The use of Key Vault fulfills the requirement of storing and using the encryption key in KV1. **Why other options are incorrect:** * **A. customer-managed keys:** While customer-managed keys can be *used* with Azure Disk Encryption, the question asks for the *encryption method* itself. Customer-managed keys are a key management option, not an encryption method. * **B. Confidential disk encryption:** This is not a standard Azure service for VM encryption. The term "Confidential Computing" is related, but refers to a different set of technologies for protecting data in use, not at rest. * **D. encryption at host:** This is too general and doesn't specify a mechanism that guarantees encryption persistence when the VM is downloaded. It also doesn't inherently utilize a key vault. Note: The provided discussion supports the choice of Azure Disk Encryption as the solution. There's no explicit disagreement within the provided content.
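A minimal sketch of enabling it with a KEK (resource and key names are placeholders):

```powershell
# Minimal sketch: enable Azure Disk Encryption on VM1, wrapping the disk
# encryption key with a KEK stored in KV1.
$kv  = Get-AzKeyVault -VaultName "KV1"
$kek = Get-AzKeyVaultKey -VaultName "KV1" -Name "myKEK"
Set-AzVMDiskEncryptionExtension -ResourceGroupName "rg1" -VMName "VM1" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId `
    -KeyEncryptionKeyUrl $kek.Id -KeyEncryptionKeyVaultId $kv.ResourceId `
    -VolumeType "All"
```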
55
[View Question](https://www.examtopics.com/discussions/databricks/view/119641-exam-az-104-topic-5-question-136-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table.

| Virtual Network | Region |
|---|---|
| VNet1 | East US |
| VNet2 | West US |
| VNet3 | North Europe |

All the virtual networks are peered. Each virtual network contains nine virtual machines. You need to configure secure RDP connections to the virtual machines by using Azure Bastion. What is the minimum number of Bastion hosts required? A. 1 B. 3 C. 9 D. 10
A. 1 Explanation: Because all virtual networks are peered, a single Azure Bastion host in any of the virtual networks can be used to securely connect to virtual machines in all other peered virtual networks. There's no need for a separate Bastion host for each VNet. The Azure Bastion service supports connecting to VMs across peered VNets, including globally peered VNets. Why other options are incorrect: * **B. 3:** This assumes a Bastion host is needed for each VNet, which is incorrect due to VNet peering. * **C. 9:** This is the total number of VMs per VNet, not the number of Bastion hosts needed. * **D. 10:** This is incorrect; the number of VMs or VNets does not dictate the number of Bastion hosts needed when VNet peering is used effectively. Note: While the provided discussion strongly supports answer A, considerations about high availability and regional redundancy might influence real-world deployment decisions. The provided documentation might not cover all edge cases or advanced scenarios.
56
[View Question](https://www.examtopics.com/discussions/databricks/view/119651-exam-az-104-topic-2-question-89-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. A user named User1 has the following roles for Subscription1: • Reader • Security Admin • Security Reader You need to ensure that User1 can assign the Reader role for VNet1 to other users. What should you do? A. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1. B. Assign User1 the Contributor role for VNet1. C. Assign User1 the Owner role for VNet1. D. Assign User1 the Network Contributor role for RG1.
The correct answer is **C. Assign User1 the Owner role for VNet1.** The Owner role provides User1 with full control over VNet1, including the ability to assign roles to other users. Options A, B, and D do not grant the necessary permissions: the Contributor role (options A and B) can manage resources but is explicitly denied the Microsoft.Authorization write actions needed to create role assignments, and the Network Contributor role (option D) is similarly limited to managing network resources within its scope and cannot assign roles. The discussion highlights that this question appears repeatedly in the exam dump, with several users commenting on its frequency. The consensus points to C as the correct answer.
57
[View Question](https://www.examtopics.com/discussions/databricks/view/119655-exam-az-104-topic-3-question-76-discussion/) You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data? A. Azure Data Factory B. the Azure File Sync Storage Sync Service C. Azure File Storage D. Azure SQL Database
C. Azure File Storage The Azure Import/Export service is designed to ship large amounts of data into Azure on physical disk drives; the supported import destinations are Azure Blob storage and Azure Files. Among the options provided, only Azure File Storage is a supported destination for an Azure Import/Export job. A. Azure Data Factory is a data integration service, not a storage service that can serve as the destination of an Azure Import/Export job. B. The Azure File Sync Storage Sync Service synchronizes files between on-premises storage and cloud storage; it is not involved in the initial import performed by the Azure Import/Export job. D. Azure SQL Database is a relational database service, not a destination for the Azure Import/Export service. Data would need to be imported into Azure storage first and then loaded into the database. Note: The discussion section highlights a significant degree of repetition of this question on the ExamTopics website, with many comments expressing frustration about the question appearing repeatedly across different question sets.
58
[View Question](https://www.examtopics.com/discussions/databricks/view/119667-exam-az-104-topic-5-question-135-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image728.png) The subnets have the IP address spaces shown in the following table. ![Image](https://img.examtopics.com/az-104/image729.png) You plan to create a container app named contapp1 in the East US Azure region. You need to create a container app environment named con-env1 that meets the following requirements: • Uses its own virtual network. • Uses its own subnet. • Is connected to the smallest possible subnet. To which virtual networks can you connect con-env1, and which subnet mask should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image730.png)
The correct answer is VNET 3 with a /23 subnet mask. **Explanation:** The question requires the smallest subnet that can still accommodate a container app environment. A /23 subnet provides 2^(32−23) = 512 IP addresses. A /24 subnet (256 addresses) would be smaller, but it is insufficient: the Container Apps infrastructure reserves a minimum of 60 IPs, and that reservation can grow to 256 as the app scales, so only a /23 leaves enough addresses for both the minimum requirement and scaling headroom. VNET 1 and VNET 2 are unsuitable because their existing subnets consume too much of their address space, leaving insufficient room to carve out a new /23 subnet. VNET 3 has enough free address space for a new /23 subnet. **Why other options are incorrect:** * **VNET 1:** The existing /24 subnet leaves too few addresses; a new /23 subnet does not fit in the remaining space. * **VNET 2:** The existing subnets use the entire address space of the virtual network, so no new subnet can be created. The discussion shows slight disagreement on the exact calculation of available IP space, but the consensus points to VNET 3 with a /23 subnet mask as the only feasible option given the 60-IP minimum plus scaling needs.
59
[View Question](https://www.examtopics.com/discussions/databricks/view/119668-exam-az-104-topic-5-question-137-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image733.png) The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image734.png) Each virtual machine contains only a private IP address. You create an Azure bastion for VNet1 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image735.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image736.png)
The correct answer is likely No, Yes, No (N-Y-N), though there is some disagreement in the provided discussion. * **Statement 1: You can use the mstsc.exe command to connect to VM1 from your local computer through the Azure Bastion.** The answer is **No**. While mstsc.exe is the RDP client, Azure Bastion, in its basic configuration, uses a web-based RDP connection via a browser (HTML5) over port 443. Native client support via mstsc.exe requires the Standard SKU of Azure Bastion and additional configuration. * **Statement 2: You can use the Azure portal to connect to VM1 from your local computer through the Azure Bastion.** The answer is **Yes**. The Azure portal provides a connection method to VMs through Azure Bastion, regardless of SKU. This is supported by documentation and user comments. * **Statement 3: You can connect to VM2 from VM1 through the Azure Bastion.** The answer is **No**. Azure Bastion connections are established from external networks to VMs within the bastion's VNet. There is no peering implied or established between VM1 and VM2 that would allow a connection through the bastion. **Why other options are incorrect:** The discussion shows some disagreement, primarily focused on the first statement regarding mstsc.exe and the Azure Bastion SKU. Some users incorrectly associate mstsc.exe solely with the Standard SKU, while others correctly point out that Azure Bastion primarily uses a browser-based RDP connection. However, the consensus is that a direct RDP connection via mstsc.exe is not a default supported method. The core of the disagreement lies in whether the connection via the Azure portal qualifies as using the "native client" despite it being browser-based. The provided documentation seems to imply that this is not the native client described in the question.
60
[View Question](https://www.examtopics.com/discussions/databricks/view/119703-exam-az-104-topic-4-question-94-discussion/) You plan to create the Azure web apps shown in the following table. ![Image](https://img.examtopics.com/az-104/image718.png) *(Image text not provided in original prompt)* What is the minimum number of App Service plans you should create for the web apps? A. 1 B. 2 C. 3 D. 4
B. 2 The table (image) shows that you need to deploy two different types of web apps: one using ASP.NET and another using Python. ASP.NET applications require a Windows-based App Service Plan, while Python applications can run on Linux (though it *can* be run on Windows with custom container images). To avoid the complexities of custom containers and ensure compatibility, the minimum number of App Service Plans required is two: one for the ASP.NET app (Windows) and one for the Python app (Linux). **Why other options are incorrect:** * **A. 1:** This is incorrect because ASP.NET and Python apps typically have different operating system requirements (Windows and Linux, respectively). Running them in a single App Service plan would not be directly supported. While custom containers *could* resolve this, the question asks for the *minimum* number of plans needed. Using custom containers introduces additional complexity, therefore it's not the most straightforward solution. * **C. 3 and D. 4:** These options are unnecessarily high. Two App Service Plans are sufficient to meet the requirements, one for each type of web app. **Note:** There is some disagreement in the discussion regarding whether Python can run on Windows using custom container images. The answer above reflects the most straightforward and practical approach to fulfil the requirements without excessive complexity. Using custom containers is mentioned as a possibility but not the most efficient method.
61
[View Question](https://www.examtopics.com/discussions/databricks/view/119707-exam-az-104-topic-2-question-86-discussion/) HOTSPOT You have an Azure Storage account named storage1 that uses Azure Blob storage and Azure File storage. You need to use AzCopy to copy data to the blob storage and file storage in storage1. Which authentication method should you use for each type of storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image691.png) (This image is not provided, but the question indicates a hotspot requiring selection of authentication methods for Blob and File storage.) ![Image](https://img.examtopics.com/az-104/image692.png) (This image is not provided, but it represents the suggested answer).
For Azure Blob storage, Azure AD (Microsoft Entra ID) is a suitable authentication method. For Azure File storage, a Shared Access Signature (SAS) token is a suitable authentication method.

Explanation: The suggested answer and the upvoted response in the discussion both indicate that Microsoft Entra ID (the newer name for Azure AD) is appropriate for Blob storage and a SAS token for File storage. While the linked documentation notes that both methods are possible for both services, the suggested answer and community consensus point to this combination as the best practice and the most likely exam answer.

Why other options are incorrect: The question doesn't list other options, but implicitly, using only SAS for both or only Entra ID for both would contradict the suggested answer. The discussion highlights that while both Entra ID and SAS are *possible* for both services, the combination above is the recommended approach based on the provided information; other methods may be preferable in scenarios not covered here.
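A sketch of what the two AzCopy flows could look like (the account, container, and share names are placeholders, and the SAS token is elided):

```powershell
# Blob storage: sign in with Microsoft Entra ID, then copy.
azcopy login --tenant-id "<tenant-id>"
azcopy copy ".\data" "https://storage1.blob.core.windows.net/container1" --recursive

# File storage: authorize with a SAS token appended to the share URL.
azcopy copy ".\data" "https://storage1.file.core.windows.net/share1?<SAS>" --recursive
```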
62
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119710-exam-az-104-topic-4-question-95-discussion/) You have an Azure subscription that contains the resource groups shown in the following table: | Resource Group | Location | |---|---| | RG0 | | | RG1 | West US | | RG2 | East US | | RG3 | | You create the following Azure Resource Manager (ARM) template named `deploy.json`: ```json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "location": { "type": "string", "defaultValue": "eastus", "metadata": { "description": "Location for the resource group" } } }, "variables": {}, "resources": [ { "type": "Microsoft.Resources/resourceGroups", "apiVersion": "2019-10-01", "name": "RG0", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.Resources/resourceGroups", "apiVersion": "2019-10-01", "name": "RG3", "location": "[parameters('location')]", "properties": {} } ] } ``` You deploy the template by running the following cmdlet: ```powershell New-AzResourceGroupDeployment -ResourceGroupName MyResourceGroup -TemplateFile .\deploy.json ``` For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. Resource group RG0 will be created in the eastus location. 2. Resource groups RG0, RG1, RG2, and RG3 will be created. 3. Resource group RG3 will be created in the westus location. **
** 1. **Yes**. The ARM template explicitly sets the location for RG0 to `[parameters('location')]`, which defaults to "eastus". Therefore, RG0 will be created in the eastus location.
2. **No**. Only RG0 and RG3 are defined in the template, so only they are deployed. RG1 and RG2 already exist in the subscription (as shown in the table) and are not referenced by the template at all, so they are neither created nor modified.
3. **No**. As with statement 1, the template sets the location for RG3 to `[parameters('location')]`, which defaults to "eastus", so RG3 is created in eastus rather than westus.

**Explanation of why other options are incorrect:** The incorrect options stem from a misunderstanding of how Azure Resource Manager handles existing resources and parameterization within ARM templates. The template only creates the resource groups it defines; it does not modify or recreate existing ones. The location parameter is clearly defined and applies to both RG0 and RG3, resulting in both being in 'eastus'.

**Note:** The provided solution reflects a consensus interpretation of the question and the given information. There is no explicit disagreement presented within the discussion beyond a clear confirmation of the answer.
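As a side note, `Microsoft.Resources/resourceGroups` is a subscription-scope resource type, so resource groups are normally deployed with a subscription-level deployment. A minimal sketch, assuming the same `deploy.json`, of overriding the template's default location at deployment time:

```powershell
# Subscription-scope deployment; -TemplateParameterObject overrides the
# template's "location" parameter (its default is "eastus").
New-AzDeployment -Location westus `
    -TemplateFile .\deploy.json `
    -TemplateParameterObject @{ location = "westus" }
```

The `-Location` argument on the cmdlet only controls where the deployment metadata is stored; the resource groups land wherever the template's location parameter points.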
63
[View Question](https://www.examtopics.com/discussions/databricks/view/119715-exam-az-104-topic-5-question-142-discussion/) DRAG DROP You have an Azure subscription that contains a resource group named RG1. You plan to create an Azure Resource Manager (ARM) template to deploy a new virtual machine named VM1. VM1 must support the capture of performance data. You need to specify resource dependencies for the ARM template. In which order should you deploy the resources? To answer, move all resources from the list of resources to the answer area and arrange them in the correct order. ![Image](https://img.examtopics.com/az-104/image748.png) (Image shows: Virtual Network, Network Interface, Virtual Machine, Monitor Extension)
The correct order for deploying the resources is:

1. **Virtual Network:** The virtual network must exist before any virtual machines or network interfaces can be created within it.
2. **Network Interface:** The network interface (NIC) needs to be created and associated with the virtual network to provide network connectivity for the VM.
3. **Virtual Machine:** The virtual machine can only be created once the virtual network and network interface are in place.
4. **Monitor Extension:** Finally, the Azure Monitor extension is installed on the VM to enable performance data capture.

The consensus among the discussion responses supports this order. There is no significant disagreement regarding the sequence of these resource deployments. While some responses might slightly alter the wording, the core order remains consistent.
64
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119716-exam-az-104-topic-5-question-144-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image750.png) The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image751.png) All the virtual machines have only private IP addresses. You deploy an Azure Bastion host named Bastion1 to VNet1. To which virtual machines can you connect through Bastion1? A. VM1 only B. VM1 and VM2 only C. VM1 and VM3 only D. VM1, VM2, and VM3 **
** B. VM1 and VM2 only

**Explanation:** Azure Bastion allows connection to VMs within the same VNet or VNets that are peered with the VNet hosting the Bastion. VNet1 (where Bastion1 resides) is peered with VNet2. VM1 is in VNet1 and VM2 is in VNet2. Therefore, you can connect to both VM1 and VM2 through Bastion1. VM3 is in VNet3, which is not peered with VNet1, preventing a connection via Bastion1.

**Why other options are incorrect:**

* **A. VM1 only:** This is incorrect because Bastion1 can also reach VMs in peered VNets.
* **C. VM1 and VM3 only:** This is incorrect because VNet3 is not peered with VNet1.
* **D. VM1, VM2, and VM3:** This is incorrect because VNet3 is not peered with VNet1.

**Note:** There is disagreement in the discussion regarding whether Azure Bastion can traverse globally peered VNets; current Microsoft documentation supports Bastion over both same-region and cross-region VNet peering. Either way, the deciding factor here is that VNet3 is not peered with VNet1 at all, so the answer remains B.
65
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119724-exam-az-104-topic-2-question-87-discussion/) HOTSPOT - You have an Azure AD tenant that contains a user named External User. External User authenticates to the tenant by using [email protected] . You need to ensure that External User authenticates to the tenant by using [email protected] . Which two settings should you configure from the Overview blade? To answer, select the appropriate settings in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image693.png) *(Image shows an Azure AD user overview blade with options like "Edit properties", "Reset redemption" under a "B2B collaboration" tile, and other options.)* ![Image](https://img.examtopics.com/az-104/image694.png) *(Image shows the suggested answer with "Edit properties" and "Reset redemption" selected.)* **
** The correct answer is to select "Edit properties" and "Reset redemption" (under the B2B collaboration tile) from the Overview blade. * **Edit properties:** This allows modifying the user's contact information, including adding or changing their email address to [email protected]. * **Reset redemption:** This is relevant for B2B users. Resetting the redemption status can help resolve authentication issues related to the user's invitation or login. This option might be necessary if the user has not yet accepted the B2B invitation with their preferred email address or if there was an issue with the invitation process. The discussion shows some conflicting information regarding the location of the "Reset redemption" setting in the UI over time, and some users have indicated that the "Identities" option is unavailable. Therefore, the "Edit properties" and "Reset redemption" options provide the most universally applicable solution based on the information available. **WHY OTHER OPTIONS ARE INCORRECT:** The question specifically asks what should be configured from the *Overview* blade. Other settings might exist to manage user accounts, but are not specified in the question or highlighted in the images. The discussion shows that access to the "Identities" section (where the user's email might be changed directly) is not always available.
66
[View Question](https://www.examtopics.com/discussions/databricks/view/119739-exam-az-104-topic-2-question-84-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. A user named User1 has the following roles for Subscription1: • Reader • Security Admin • Security Reader You need to ensure that User1 can assign the Reader role for VNet1 to other users. What should you do? A. Assign User1 the Contributor role for VNet1. B. Assign User1 the Network Contributor role for VNet1. C. Assign User1 the User Access Administrator role for VNet1. D. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1.
C. Assign User1 the User Access Administrator role for VNet1.

The User Access Administrator role allows a user to manage access to resources within a specific scope (in this case, VNet1). This includes assigning roles to other users. Therefore, assigning User1 this role directly addresses the requirement of allowing them to assign the Reader role to other users within VNet1.

Why other options are incorrect:

* **A. Assign User1 the Contributor role for VNet1:** The Contributor role grants broad permissions to manage resources, but it doesn't explicitly include the ability to *assign* roles to other users. While a Contributor *could* potentially achieve the desired outcome indirectly through other actions, it's not the most direct or efficient solution.
* **B. Assign User1 the Network Contributor role for VNet1:** Similar to the Contributor role, the Network Contributor role allows extensive management of network resources but lacks the explicit permission to assign roles.
* **D. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1:** This option is overly broad. Assigning the Contributor role at the *subscription* level grants User1 extensive control over *all* resources within Subscription1, which is unnecessary and potentially risky. The goal is to grant only the necessary permission at the VNet1 level.

Note: The discussion strongly suggests a consensus that option C is the correct answer. There is no significant disagreement expressed.
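A minimal sketch of the assignment with Az PowerShell, assuming a placeholder subscription ID and sign-in name:

```powershell
# Scope the assignment to VNet1 only, following least privilege.
$scope = "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNet1"

New-AzRoleAssignment -SignInName "user1@contoso.com" `
    -RoleDefinitionName "User Access Administrator" `
    -Scope $scope
```

Once this is in place, User1 can grant the Reader role on VNet1 without gaining any extra rights elsewhere in the subscription.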
67
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119740-exam-az-104-topic-2-question-85-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. User named User1 has the following roles for Subscription1: • Reader • Security Admin • Security Reader You need to ensure that User1 can assign the Reader role for VNet1 to other users. What should you do? A. Remove User1 from the Security Reader and Reader roles for Subscription1. Assign User1 the Contributor role for Subscription1. B. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1. C. Assign User1 the Network Contributor role for VNet1. D. Assign User1 the User Access Administrator role for VNet1. **
** D. Assign User1 the User Access Administrator role for VNet1.

The User Access Administrator role at the VNet1 scope grants User1 the necessary permissions to manage access control for that specific virtual network, including assigning the Reader role to other users. The discussion shows disagreement on the correct answer, with some suggesting option C (Network Contributor). However, the prevailing and most strongly supported answer is D, which directly addresses the requirement of assigning roles. The Contributor role (options A and B) does *not* include the ability to assign roles, only to manage resources. The Network Contributor role (option C), while granting management of networking resources, may not explicitly include the ability to assign roles at the necessary level of granularity.

**Why other options are incorrect:**

* **A:** Removing roles and assigning the Contributor role at the subscription level is too broad and doesn't specifically address the need to assign roles for *only* VNet1. The Contributor role doesn't grant the permission to assign roles to others.
* **B:** Similar to A, assigning the Contributor role at the resource group level is too broad and doesn't specifically target VNet1. Again, the Contributor role itself lacks the necessary permission.
* **C:** While the Network Contributor role allows management of networking resources within VNet1, it's not definitively proven to include the ability to assign RBAC roles to other users. The User Access Administrator role explicitly provides this capability. This option reflects a point of disagreement in the discussion.
68
[View Question](https://www.examtopics.com/discussions/databricks/view/119745-exam-az-104-topic-2-question-91-discussion/) HOTSPOT You have an Azure AD tenant. You need to create a Microsoft 365 group that contains only members of a marketing department in France. How should you complete the dynamic membership rule? To answer, select the appropriate options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image697.png) (Image shows a rule builder with blanks to be filled in for a dynamic group membership rule). ![Image](https://img.examtopics.com/az-104/image698.png) (Image shows the suggested answer with `(user.department -eq "Marketing") -and (user.country -eq "France")` filled in.)
The correct dynamic membership rule is `(user.department -eq "Marketing") -and (user.country -eq "France")`. This rule accurately filters for users whose department attribute is "Marketing" AND whose country attribute is "France". The use of `-eq` is the correct operator for "equals" in Azure AD dynamic group rules. The parentheses, while not strictly required due to operator precedence in this simple case (as confirmed in the provided Microsoft documentation link), improve readability and explicitly define the order of operations. Several users in the discussion suggest variations or additions, such as using `-match` instead of `-eq`. While `-match` could work if the attribute values might contain variations, `-eq` is sufficient and more precise given the question's requirements. The core consensus points to the primary answer above being correct. There is no significant disagreement about the core logic, only minor variations in style and the use of parentheses (which is largely a matter of style and clarity).
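A sketch of creating such a group with the Microsoft Graph PowerShell SDK (the display name and mail nickname are assumptions):

```powershell
# Microsoft 365 group whose membership is evaluated dynamically.
New-MgGroup -DisplayName "Marketing France" `
    -MailEnabled:$true -MailNickname "marketingfr" -SecurityEnabled:$false `
    -GroupTypes @("Unified", "DynamicMembership") `
    -MembershipRule '(user.department -eq "Marketing") -and (user.country -eq "France")' `
    -MembershipRuleProcessingState "On"
```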
69
[View Question](https://www.examtopics.com/discussions/databricks/view/119784-exam-az-104-topic-5-question-140-discussion/) You have an Azure subscription that contains a resource group named RG1 and a virtual network named VNet1. You plan to create an Azure container instance named container1. You need to be able to configure DNS name label scope reuse for container1. What should you configure for container1? A. the private networking type B. the public networking type C. a new subnet on VNet1 D. a confidential SKU
B. the public networking type

Explanation: DNS name labels, and the DNS name label reuse-scope setting, apply only to container groups deployed with the public networking type, because the label is attached to the container group's public endpoint. Private networking operates within a virtual network and does not expose a DNS name label at all. Creating a new subnet or using a confidential SKU is unrelated to DNS name label scope reuse.

Why other options are incorrect:

* **A. the private networking type:** Private networking restricts access to the container instance to internal resources within the virtual network, so there is no public DNS name label (or reuse scope) to configure.
* **C. a new subnet on VNet1:** Subnets are part of network configuration, but don't directly control DNS name label scope reuse for container instances.
* **D. a confidential SKU:** Confidential SKUs relate to the security and encryption features of the container instance and are not related to DNS name label scope reuse.

Note: The provided discussion shows a consensus that option B is the correct answer.
70
[View Question](https://www.examtopics.com/discussions/databricks/view/119829-exam-az-104-topic-3-question-75-discussion/) HOTSPOT You have an Azure subscription. You plan to create a role definition to meet the following requirements: • Users must be able to view the configuration data of a storage account. • Users must be able to perform all actions on a virtual network. • The solution must use the principle of least privilege. What should you include in the role definition for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image707.png) *(Image shows a hotspot question with options for selecting permissions related to Storage Account Contributor, Storage Account Reader, Network Contributor, and Network Administrator)*
The correct answer is to select "Storage Account Reader" for viewing storage account configuration data and "Network Contributor" for performing all actions on a virtual network. * **Storage Account Reader:** This role provides only read access to storage account configuration data, adhering to the principle of least privilege. Granting more extensive permissions (like Storage Account Contributor) would be unnecessary and a security risk. * **Network Contributor:** This role allows for full management of virtual networks, fulfilling the requirement of performing all actions. While Network Administrator has even more permissions, Network Contributor is sufficient and adheres to the principle of least privilege. Other options are incorrect because they either grant excessive permissions (violating the least privilege principle) or insufficient permissions to satisfy the requirements. The provided discussion and suggested answers strongly support this solution. There's no indication of disagreement regarding the correct answer in the provided discussion text.
71
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119843-exam-az-104-topic-6-question-49-discussion/) You create a Recovery Services vault backup policy named Policy1 as shown in the following exhibit: (Image depicting a backup policy with yearly and monthly backup points and retention periods) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. (Image showing two dropdowns: "Box 1: Retention period for yearly backup points:" and "Box 2: Retention period for monthly backup points:") **
** Box 1: 10 years Box 2: 36 months The yearly backup point has a retention period of 10 years, and the monthly backup point has a retention period of 36 months. The question asks for the retention period of each backup point type based on the provided policy image. The image (although not included directly in this text) clearly shows these values. The discussion confirms these values are correct. **Why other options are incorrect:** The problem does not provide other options; the answer is derived directly from interpreting the image of the backup policy (which is not present here but described in the question). Any other answer would be incorrect because it would not match the values explicitly shown in the policy. **Note:** The discussion mentions that Azure takes the longest retention period in case of conflicting policies. However, this is not relevant to this specific question, as the question only deals with a single policy's settings.
72
**** [View Question](https://www.examtopics.com/discussions/databricks/view/119956-exam-az-104-topic-5-question-138-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. | Virtual Network | Region | | --------------- | --------------- | | VNet1 | South Central US | | VNet2 | Southeast Asia | | VNet3 | South Central US | The subscription contains the subnets shown in the following table. | Subnet Name | Virtual Network | Region | | ----------- | --------------- | --------------- | | Subnet1 | VNet1 | South Central US | | Subnet2 | VNet2 | Southeast Asia | | Subnet3 | VNet3 | South Central US | The subscription contains the storage accounts shown in the following table. | Storage Account | Region | | --------------- | --------------- | | Storage1 | South Central US | | Storage2 | South Central US | | Storage3 | Southeast Asia | You create a service endpoint policy named Policy1 in the South Central US Azure region to allow connectivity to all the storage accounts in the subscription. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. | Statement | Yes/No | | -------------------------------------------------------------------------------------------------------------- | ------- | | Subnet1 can access Storage1 and Storage2 through Policy1. | | | Subnet2 can access Storage1 and Storage2 through Policy1. | | | Subnet3 can access Storage1, Storage2, and Storage3 through Policy1. | | **
** Y N N

1. **Yes:** Subnet1 can access Storage1 and Storage2 through Policy1. Subnet1 and Policy1 are in the same region (South Central US), and the policy allows access to all storage accounts.
2. **No:** Subnet2 cannot access Storage1 and Storage2 through Policy1. Subnet2 is in the Southeast Asia region, while Policy1 is in the South Central US region. Service endpoint policies are region-specific; a policy in one region doesn't affect subnets in other regions.
3. **No:** Subnet3 can access Storage1 and Storage2 through Policy1, but not Storage3. While Subnet3 and Policy1 are in the same region, the statement claims access to *all* three storage accounts. Only Storage1 and Storage2 are accessible via Policy1 from Subnet3 because Storage3 is in a different region.

**Why other options are incorrect:** The discussion shows a consensus among users on the correct answer. The reasoning provided highlights the regional limitations of service endpoint policies, a key aspect of Azure networking. The incorrect options fail to account for the regional restrictions and the fact that the policy, while allowing access to *all* storage accounts *within its region*, cannot reach storage accounts in other regions.
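For illustration, a sketch of how Policy1 could be built with Az PowerShell (the names and subscription ID are placeholders; `/subscriptions/<subscription-id>` as the service resource is the alias for every storage account in the subscription):

```powershell
# Definition that allows all storage accounts in the subscription.
$definition = New-AzServiceEndpointPolicyDefinition -Name "def1" `
    -Service "Microsoft.Storage" `
    -ServiceResource "/subscriptions/<subscription-id>"

# Region-bound policy; it only applies to subnets in South Central US.
New-AzServiceEndpointPolicy -ResourceGroupName "RG1" -Name "Policy1" `
    -Location "southcentralus" `
    -ServiceEndpointPolicyDefinition $definition
```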
73
[View Question](https://www.examtopics.com/discussions/databricks/view/120083-exam-az-104-topic-6-question-53-discussion/) You have an Azure subscription that contains an Azure Stream Analytics job named Job1. You need to monitor input events for Job1 to identify the number of events that were NOT processed. Which metric should you use? A. Out-of-Order Events B. Output Events C. Late Input Events D. Backlogged Input Events
The correct answer is **D. Backlogged Input Events**.

The "Backlogged Input Events" metric represents the number of input events that haven't been processed yet by the Stream Analytics job. A non-zero value indicates that the job is falling behind the incoming event rate. This directly addresses the question's requirement to identify unprocessed events.

Why other options are incorrect:

* **A. Out-of-Order Events:** This metric counts events received out of order, but it doesn't necessarily mean these events weren't processed. They might have been processed after their timestamp was adjusted.
* **B. Output Events:** This metric tracks the number of events *successfully* processed and sent to the output, not the number of unprocessed events.
* **C. Late Input Events:** This metric focuses on events that arrived later than the configured tolerance window. While these events might contribute to backlogging, it doesn't capture all unprocessed events. Some unprocessed events might not be "late," simply because the job can't keep up with the input rate.

Note: The provided discussion shows unanimous agreement on the correct answer.
74
**** [View Question](https://www.examtopics.com/discussions/databricks/view/120224-exam-az-104-topic-6-question-50-discussion/) You have an Azure subscription that contains the vaults shown in the following table. ![Image](https://img.examtopics.com/az-104/image755.png) You deploy the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image756.png) You have the backup policies shown in the following table. ![Image](https://img.examtopics.com/az-104/image757.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image758.png) **
** The correct answers, based on the provided information and the discussion, are:

A: Yes
B: No
C: No

**Explanation:**

* **A:** VM1 is backed up by Policy1, which is associated with Vault1. This statement is true.
* **B:** The statement implies that Backup vaults support backing up Azure virtual machines. This is incorrect: Backup vaults (as opposed to Recovery Services vaults) support workloads such as Azure Disks and Azure Blobs, but full Azure VM backup requires a Recovery Services vault. The discussion by M_khuja supports this; the ambiguity in the thread stems from the fact that a VM's *disks* can be protected even though the VM itself cannot.
* **C:** Policy2 targets "Recovery2," but there is no Recovery2 vault listed in the provided tables. Therefore, this statement is false. This point was heavily debated in the discussion, with users highlighting the absence of Recovery2.

**Why other options are incorrect:** The discussion clearly points out the non-existence of Recovery2, making the statement in C false. For B, the distinction between backing up VM disks and backing up the VM as a whole means the statement is not true as presented.

**Note:** There is some disagreement in the discussion regarding the finer points of Azure Backup functionality and whether the wording in statement B accurately reflects the vaults' capabilities. The answer provided here reflects the most widely accepted and logically sound interpretation based on the information and the discussion.
75
**** [View Question](https://www.examtopics.com/discussions/databricks/view/120891-exam-az-104-topic-5-question-143-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. a Desired State Configuration (DSC) extension B. a Microsoft Intune device configuration profile C. the Publish-AzVMDscConfiguration cmdlet D. the New-AzConfigurationAssignment cmdlet **
** A. a Desired State Configuration (DSC) extension

A Desired State Configuration (DSC) extension is the best option for ensuring NGINX is available on all virtual machines in a virtual machine scale set after deployment. DSC allows you to define the desired state of your VMs (in this case, having NGINX installed and configured) and it will automatically manage and enforce that state across all the VMs. This is ideal for consistency and automation in large-scale deployments.

**Why other options are incorrect:**

* **B. a Microsoft Intune device configuration profile:** Intune is primarily for managing devices in a corporate environment, often focusing on endpoint protection and configuration policies for various devices, including laptops and mobile devices. While it might *indirectly* influence VM configurations in some enterprise scenarios, it's not the direct and efficient approach for ensuring NGINX installation on a VM scale set.
* **C. the Publish-AzVMDscConfiguration cmdlet:** This cmdlet is part of the DSC workflow (it publishes the configuration package), but it isn't the solution itself. It's a tool used *within* a DSC deployment, not the overarching method for achieving the desired state.
* **D. the New-AzConfigurationAssignment cmdlet:** This cmdlet relates to Azure Maintenance configuration assignments, not to installing and configuring software on the VMs within the scale set.

**Note:** The discussion shows a consensus on answer A being correct. One user's mention of "session persistence" appears to be a stray reference to a different question and does not affect this answer; only the installation and configuration of NGINX is at issue here.
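A sketch of attaching the DSC extension to a scale set with Az PowerShell (the scale set name, configuration package URL, script, and function are all assumptions):

```powershell
$vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1"

# Hypothetical DSC package that installs and configures NGINX.
$settings = @{
    configuration = @{
        url      = "https://storage1.blob.core.windows.net/dsc/InstallNginx.zip"
        script   = "InstallNginx.ps1"
        function = "InstallNginx"
    }
}

Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "DSC" `
    -Publisher "Microsoft.Powershell" -Type "DSC" -TypeHandlerVersion "2.83" `
    -Setting $settings

# Push the updated model back to the scale set.
Update-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1" -VirtualMachineScaleSet $vmss
```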
76
[View Question](https://www.examtopics.com/discussions/databricks/view/121819-exam-az-104-topic-5-question-139-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. the New-AzConfigurationAssignment cmdlet B. Azure Application Insights C. the Publish-AzVMDscConfiguration cmdlet D. a Desired State Configuration (DSC) extension
D. a Desired State Configuration (DSC) extension A Desired State Configuration (DSC) extension is the most appropriate solution for ensuring NGINX is available on all virtual machines in a virtual machine scale set after deployment. DSC allows you to define the desired state of your configuration (in this case, having NGINX installed and running) and automatically applies it to the VMs. The extension handles the installation and configuration of NGINX consistently across all VMs. Why other options are incorrect: * **A. the New-AzConfigurationAssignment cmdlet:** This cmdlet is related to Azure Automation and managing configurations, but it's not the primary or most efficient method for installing and configuring software on multiple VMs within a scale set. It's a less direct approach compared to using a DSC extension. * **B. Azure Application Insights:** Application Insights is a monitoring service for applications. It doesn't directly manage the installation or configuration of software on VMs. * **C. the Publish-AzVMDscConfiguration cmdlet:** While related to DSC, this cmdlet publishes a DSC configuration, but it doesn't automatically apply it to VMs within a scale set. A DSC *extension* is necessary for this automated application and management within the scale set. Note: The discussion shows a general consensus on the correct answer (D), but there's no substantial debate or conflicting opinions presented.
77
[View Question](https://www.examtopics.com/discussions/databricks/view/121826-exam-az-104-topic-6-question-55-discussion/) You have an Azure subscription. The subscription contains virtual machines that run Windows Server. You have a data collection rule (DCR) named Rule1. You plan to use the Azure Monitor Agent to collect events from Windows System event logs. You only need to collect system events that have an ID of 1001. Which type of query should you use for the data source in Rule1? A. SQL B. XPath C. KQL
The correct answer is B. XPath. To efficiently collect only the necessary Windows System event logs (ID 1001) when configuring Data Collection Rule (DCR) Rule1, an XPath query should be used within the data source configuration. This prevents unnecessary ingestion of all events into Log Analytics, thus reducing costs. Using KQL (Kusto Query Language) for filtering within Log Analytics would require ingesting all events first, making it less efficient and more expensive. SQL is not applicable in this context. There is some discussion regarding the unexpected use of XPath instead of the more commonly used KQL for event filtering in Azure Monitor. Some users find the requirement to use XPath in this specific scenario counterintuitive.
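For reference, the XPath filter for this scenario would look something like the following (shown here as a PowerShell string; `System!` names the event channel, and the bracketed predicate selects the event ID):

```powershell
# Collect only System log events whose event ID is 1001.
$xPathQuery = 'System!*[System[(EventID=1001)]]'
```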
78
[View Question](https://www.examtopics.com/discussions/databricks/view/121828-exam-az-104-topic-6-question-56-discussion/) You have an Azure subscription that contains a virtual machine named VM1. You have an on-premises datacenter that contains a domain controller named DC1. ExpressRoute is used to connect the on-premises datacenter to Azure. You need to use Connection Monitor to identify network latency between VM1 and DC1. What should you install on DC1? A. the Azure Connected Machine agent for Azure Arc-enabled servers B. the Azure Network Watcher Agent virtual machine extension C. the Log Analytics agent D. an Azure Monitor agent extension
D. an Azure Monitor agent extension

Connection Monitor utilizes the Azure Monitor agent extension to collect and report network performance data, including latency. The provided Microsoft Learn documentation supports this. Therefore, installing the Azure Monitor agent extension on DC1 is necessary to enable Connection Monitor to assess the network latency between VM1 and DC1.

Why other options are incorrect:

* **A. the Azure Connected Machine agent for Azure Arc-enabled servers:** This agent is for managing hybrid and multi-cloud environments through Azure Arc. It's not directly involved in network monitoring functionalities like Connection Monitor.
* **B. the Azure Network Watcher Agent virtual machine extension:** While Network Watcher offers various network monitoring capabilities, Connection Monitor specifically works with the Azure Monitor agent. The Network Watcher agent is not required for Connection Monitor's functionality.
* **C. the Log Analytics agent:** While older versions of Connection Monitor might have used the Log Analytics agent, the current implementation relies on the Azure Monitor agent, offering improved integration and capabilities. The discussion explicitly states that Connection Monitor now supports the Azure Monitor agent and eliminates any dependency on the Log Analytics agent.

Note: The provided discussion indicates a clear preference for option D, citing updated Microsoft documentation. There is no evidence of conflicting opinions presented in the provided text.
79
**** [View Question](https://www.examtopics.com/discussions/databricks/view/121829-exam-az-104-topic-6-question-57-discussion/) You have an Azure subscription that has Traffic Analytics configured. You deploy a new virtual machine named VM1 that has the following settings: • Region: East US • Virtual network: VNet1 • NIC network security group: NSG1 You need to monitor VM1 traffic by using Traffic Analytics. Which settings should you configure? A. Diagnostic settings for VM1 B. NSG flow logs for NSG1 C. Diagnostic settings for NSG1 D. Insights for VM1 **
** B. NSG flow logs for NSG1

**Explanation:** Traffic Analytics uses data from Network Security Group (NSG) flow logs to monitor network traffic. By enabling NSG flow logs for NSG1 (the NSG associated with VM1), Traffic Analytics will be able to collect and analyze the traffic flowing through that NSG, effectively monitoring VM1's traffic. The other options are incorrect because they don't provide the necessary traffic data for Traffic Analytics.

**Why other options are incorrect:**

* **A. Diagnostic settings for VM1:** Diagnostic settings primarily collect logs and metrics from the VM itself, not network traffic flowing to and from it.
* **C. Diagnostic settings for NSG1:** While NSGs have diagnostic settings, these wouldn't contain the network flow information needed by Traffic Analytics.
* **D. Insights for VM1:** This is too general and doesn't specifically provide the network traffic data required for Traffic Analytics.

**Note:** The discussion shows unanimous agreement on the correct answer.
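A minimal sketch of enabling the flow log with Traffic Analytics via Az PowerShell (the network watcher, storage account, and workspace names are placeholders):

```powershell
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Name "NSG1"
$sa  = Get-AzStorageAccount -ResourceGroupName "RG1" -Name "flowlogstorage"
$ws  = Get-AzOperationalInsightsWorkspace -ResourceGroupName "RG1" -Name "law1"

# Flow log on NSG1 with Traffic Analytics pointed at the workspace.
New-AzNetworkWatcherFlowLog -NetworkWatcherName "NetworkWatcher_eastus" `
    -ResourceGroupName "NetworkWatcherRG" -Name "flowlog1" `
    -TargetResourceId $nsg.Id -StorageId $sa.Id -Enabled $true `
    -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $ws.ResourceId
```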
80
**** [View Question](https://www.examtopics.com/discussions/databricks/view/121830-exam-az-104-topic-6-question-58-discussion/) You have an Azure subscription. The subscription contains 10 virtual machines that run Windows Server. Each virtual machine hosts a website in IIS and has the Azure Monitor Agent installed. You need to collect the IIS logs from each virtual machine and store them in a Log Analytics workspace. What should you configure first? A. a data collection endpoint B. an Azure Monitor Private Link Scope (AMPLS) C. Diagnostic settings D. VM insights E. a private endpoint **
** C. Diagnostic settings

The correct answer is C because, for collecting IIS logs from VMs (IaaS resources), diagnostic settings are the initial configuration needed. While a Data Collection Endpoint (DCE) is used for collecting logs, it's a subsequent step *after* setting up diagnostic settings to specify *what* logs to collect and *where* to send them. Diagnostic settings define the data you're collecting and the destination workspace. The discussion highlights that diagnostic settings are not used for PaaS resources but are the appropriate method for collecting custom logs from IaaS VMs like those described in the question.

**Why other options are incorrect:**

* **A. a data collection endpoint:** A data collection endpoint is a component of the log collection process, but it's not the *first* step. You need to define what data will be collected via diagnostic settings before you set up a data collection endpoint to receive that data.
* **B. an Azure Monitor Private Link Scope (AMPLS):** AMPLS is used for network connectivity and security, and it's not directly involved in the initial log collection configuration.
* **D. VM insights:** VM insights provide performance metrics, but they are not directly involved in the process of collecting custom logs such as IIS logs.
* **E. a private endpoint:** Similar to AMPLS, a private endpoint is a network construct and unrelated to the initial steps for collecting IIS logs.

**Note:** The discussion shows some disagreement on the correct answer. Some users initially suggested 'A', a data collection endpoint. However, the prevailing and more accurate understanding is that diagnostic settings are configured first to define the data collection criteria before setting up the endpoint.
81
**** [View Question](https://www.examtopics.com/discussions/databricks/view/121863-exam-az-104-topic-2-question-95-discussion/) You purchase a new Azure subscription. You create an Azure Resource Manager (ARM) template named `deploy.json` as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image764.png) You connect to the subscription and run the following command: `New-AzDeployment –Location westus –TemplateFile “deploy.json”` For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image765.png) **
** N N Y

* **Statement 1 (The deployment creates three resource groups): No.** The ARM template creates four resource groups: RGS0, RGS1, ResGrp8, and RGroup4. The `copyIndex()` function in the first resource group definition creates two resource groups (RGS0 and RGS1). The second creates ResGrp8. The third uses the `length()` function on the `obj1` parameter, which has a length of 4, resulting in RGroup4.
* **Statement 2 (The deployment creates a resource group named RGroup5): No.** The template does not create a resource group with this name.
* **Statement 3 (All resource groups are created in the East US Azure region): Yes.** Despite the `New-AzDeployment` command specifying `–Location westus`, the location of the resource groups is determined by the `location` parameter *within* the ARM template itself. The `deploy.json` file explicitly sets the location to "eastus" for all resource groups. The `–Location` parameter in the deployment command only specifies the location for *storing the deployment data*, not the location of the resources created by the template.

**Why other options are incorrect:** The discussion shows some initial confusion regarding the location parameter. While the command specifies `westus`, the ARM template itself overrides this, placing all resource groups in `eastus`. The disagreement centers on whether the command-line parameter affects the resource group location, which it does not in this case. The correct answer reflects the behaviour dictated by the ARM template's explicit location settings.
82
[View Question](https://www.examtopics.com/discussions/databricks/view/121869-exam-az-104-topic-4-question-99-discussion/) You have an Azure subscription. You plan to deploy the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image767.png) You need to create a single Azure Resource Manager (ARM) template that will be used to deploy the resources. Which resource should be added to the `dependsOn` section for VM1? A. VNET1 B. NIC1 C. IP1 D. NSG1
The correct answer is **B. NIC1**.

A virtual machine (VM) requires a network interface card (NIC) to connect to a virtual network. The VM cannot function without a NIC to provide network connectivity. Therefore, the NIC must be deployed before the VM. The `dependsOn` section in an ARM template specifies dependencies between resources; in this case, VM1 depends on NIC1.

Why other options are incorrect:

* **A. VNET1:** While the VM ultimately needs a VNET, the NIC acts as the intermediary. The VM directly depends on the NIC, which in turn depends on the VNET.
* **C. IP1:** The Public IP address is assigned to the NIC, not directly to the VM. The VM depends on the NIC having an IP address, but that's handled implicitly through the NIC's configuration. The direct dependency is on the NIC itself.
* **D. NSG1:** A Network Security Group (NSG) controls network traffic to and from the NIC. While important for security, the VM doesn't directly depend on the NSG for its creation or functionality. The NIC is the necessary element.

Note: The provided discussion shows unanimous agreement on answer B.
83
**** [View Question](https://www.examtopics.com/discussions/databricks/view/122150-exam-az-104-topic-3-question-82-discussion/) You need to create an Azure Storage account named storage1. The solution must meet the following requirements: • Support Azure Data Lake Storage. • Minimize costs for infrequently accessed data. • Automatically replicate data to a secondary Azure region. Which three options should you configure for storage1? Each correct answer presents part of the solution. NOTE: Each correct answer is worth one point. A. zone-redundant storage (ZRS) B. the Cool access tier C. geo-redundant storage (GRS) D. the Hot access tier E. hierarchical namespace **
** The correct answer is B, C, and E.

* **B. the Cool access tier:** This tier is designed for infrequently accessed data, minimizing storage costs as required.
* **C. geo-redundant storage (GRS):** GRS automatically replicates data to a secondary region, fulfilling the requirement for data replication.
* **E. hierarchical namespace:** This is a requirement for Azure Data Lake Storage, enabling the file system structure needed.

**Why other options are incorrect:**

* **A. zone-redundant storage (ZRS):** While ZRS provides redundancy, it doesn't fulfill the requirement of replication to a *secondary Azure region*. GRS (option C) is the correct choice for regional redundancy.
* **D. the Hot access tier:** The Hot tier is optimized for frequently accessed data and is more expensive than the Cool tier. The question specifies the need to minimize costs for *infrequently* accessed data.

There is a consensus among the users in the discussion that BCE is the correct answer.
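Combining the three selections, a sketch with Az PowerShell (the resource group and region are assumptions):

```powershell
# GRS replication, Cool default access tier, hierarchical namespace for Data Lake.
New-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" `
    -Location "eastus" -SkuName "Standard_GRS" -Kind "StorageV2" `
    -AccessTier "Cool" -EnableHierarchicalNamespace $true
```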
84
**** [View Question](https://www.examtopics.com/discussions/databricks/view/122369-exam-az-104-topic-3-question-81-discussion/) You have an Azure subscription that contains a storage account named storage1. The storage1 account contains a container named container1. You need to configure access to container1. The solution must meet the following requirements: • Only allow read access. • Allow both HTTP and HTTPS protocols. • Apply access permissions to all the content in the container. What should you use? A. an access policy B. a shared access signature (SAS) C. Azure Content Delivery Network (CDN) D. access keys **
** B. a shared access signature (SAS)

A Shared Access Signature (SAS) allows granting temporary, limited access to Azure Storage resources. It fulfills all the requirements: it can be configured to allow only read access, it works with both HTTP and HTTPS, and it applies permissions to all content within the specified container.

**Why other options are incorrect:**

* **A. an access policy:** While access policies can control access, they don't offer the granular control and time-limited access provided by SAS tokens. They are more for managing permissions for users and applications over a longer term rather than specific time-limited access.
* **C. Azure Content Delivery Network (CDN):** A CDN is for caching and distributing content, not directly for controlling access to the storage container itself.
* **D. access keys:** Access keys provide full control over the entire storage account, which is far too broad and violates the requirement for only granting read access. Using access keys directly would be a security risk.

The discussion shows a strong consensus that a SAS is the correct answer.
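A sketch of generating such a SAS token with Az PowerShell (the key retrieval and the seven-day expiry are illustrative choices):

```powershell
$key = (Get-AzStorageAccountKey -ResourceGroupName "RG1" -Name "storage1")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "storage1" -StorageAccountKey $key

# Read-only, container-wide, valid over both HTTP and HTTPS.
New-AzStorageContainerSASToken -Name "container1" -Permission "r" `
    -Protocol HttpsOrHttp -ExpiryTime (Get-Date).AddDays(7) -Context $ctx
```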
85
[View Question](https://www.examtopics.com/discussions/databricks/view/122479-exam-az-104-topic-6-question-54-discussion/) You have an Azure subscription that contains an Azure SQL database named DB1. You plan to use Azure Monitor to monitor the performance of DB1. You must be able to run queries to analyze log data. Which destination should you configure in the Diagnostic settings of DB1? A. Send to a Log Analytics workspace. B. Archive to a storage account. C. Stream to an Azure event hub.
A. Send to a Log Analytics workspace.

To analyze log data from Azure SQL Database DB1 using Azure Monitor, the logs must be sent to a location that supports querying. A Log Analytics workspace provides this capability. Azure Monitor Logs allows you to query and analyze the data within the workspace.

Why other options are incorrect:

* **B. Archive to a storage account:** Archiving to a storage account stores the data, but it does not offer the query capabilities needed to analyze the performance data. The data would need to be extracted and processed separately before analysis is possible.
* **C. Stream to an Azure event hub:** An event hub is designed for real-time ingestion and processing of high-volume data streams. While the data could be processed later, it's not inherently designed for the type of querying and analysis needed for performance monitoring of a database. Additional steps are needed to query and analyze the data.

The discussion shows a strong consensus among users that option A is the correct answer.
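A sketch of the configuration with Az PowerShell (the database resource ID and workspace name are placeholders; `New-AzDiagnosticSetting` and its helper object come from the Az.Monitor module):

```powershell
$dbId = "/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Sql/servers/sql1/databases/DB1"
$law = Get-AzOperationalInsightsWorkspace -ResourceGroupName "RG1" -Name "law1"

# Route all log categories for DB1 to the Log Analytics workspace.
$logs = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup "allLogs"
New-AzDiagnosticSetting -Name "diag1" -ResourceId $dbId `
    -WorkspaceId $law.ResourceId -Log $logs
```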
86
[View Question](https://www.examtopics.com/discussions/databricks/view/122654-exam-az-104-topic-2-question-94-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.com. You have a CSV file that contains the names and email addresses of 500 external users. You need to create a guest user account in contoso.com for each of the 500 external users. Solution: You create a PowerShell script that runs the New-MgUser cmdlet for each user. Does this meet the goal? A. Yes B. No
B. No

The `New-MgUser` cmdlet creates new users within the Azure AD tenant, but it does not specifically handle the creation of *guest* users from external sources. To add external users as guests, the correct cmdlet is `New-MgInvitation` or `New-AzureADMSInvitation` (depending on the version of the module being used). The discussion highlights the disagreement, with several users pointing out that `New-MgUser` is not suitable for this task and recommending `New-MgInvitation` or `New-AzureADMSInvitation` instead. Therefore, using `New-MgUser` does not meet the goal of creating guest accounts for external users.

Why other options are incorrect:

A. Yes - Incorrect. As explained above, `New-MgUser` is not designed for creating guest users from external sources. It creates regular users within the directory.
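For contrast, a sketch of what the working approach could look like with `New-MgInvitation` (the CSV column names `Name` and `Email` are assumptions):

```powershell
Connect-MgGraph -Scopes "User.Invite.All"

Import-Csv .\externalusers.csv | ForEach-Object {
    # Creates a guest account and sends the B2B invitation e-mail.
    New-MgInvitation -InvitedUserEmailAddress $_.Email `
        -InvitedUserDisplayName $_.Name `
        -InviteRedirectUrl "https://myapps.microsoft.com" `
        -SendInvitationMessage:$true
}
```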
87
[View Question](https://www.examtopics.com/discussions/databricks/view/125036-exam-az-104-topic-6-question-60-discussion/) You have an Azure subscription that contains multiple virtual machines in the West US Azure region. You need to use Traffic Analytics in Azure Network Watcher to monitor virtual machine traffic. Which two resources should you create? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. a Log Analytics workspace B. an Azure Monitor workbook C. a storage account D. a Microsoft Sentinel workspace E. a Data Collection Rule (DCR) in Azure Monitor
The correct answer is A and C. To use Traffic Analytics with Azure Network Watcher, you need a Log Analytics workspace (A) to store the collected traffic data and a storage account (C) to store the Network Watcher logs. While a Data Collection Rule (DCR) might seem relevant, the documentation indicates that Traffic Analytics manages these automatically; manual configuration can disrupt the service. There is disagreement in the provided discussion. One user suggests A and E are correct, while another asserts that A and C are correct and explains that Traffic Analytics automatically handles the Data Collection Rules. The provided Microsoft documentation supports the latter assertion.
88
[View Question](https://www.examtopics.com/discussions/databricks/view/125037-exam-az-104-topic-6-question-61-discussion/) You have an Azure subscription that contains a virtual machine named VM1. You plan to deploy an Azure Monitor alert rule that will trigger an alert when CPU usage on VM1 exceeds 80 percent. You need to ensure that the alert rule sends an email message to two users named User1 and User2. What should you create for Azure Monitor? A. an action group B. a mail-enabled security group C. a distribution group D. a Microsoft 365 group
The correct answer is **A. an action group**.

Azure Monitor alert rules use action groups to define the actions to be taken when an alert is triggered. An action group can be configured to send email notifications to specified recipients, such as User1 and User2 in this scenario.

Why other options are incorrect:

* **B. a mail-enabled security group:** While a mail-enabled security group can receive emails, it's not directly integrated with Azure Monitor alert rules for triggering actions.
* **C. a distribution group:** Similar to a mail-enabled security group, a distribution group is not designed for direct integration with Azure Monitor alert rules.
* **D. a Microsoft 365 group:** A Microsoft 365 group is a collaboration tool and doesn't have the functionality to directly receive and act on Azure Monitor alerts.

The discussion shows overwhelming consensus on answer A.
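A sketch using Az.Monitor's action-group cmdlets (the names and addresses are placeholders; newer module versions expose `New-AzActionGroup` for the same task):

```powershell
$r1 = New-AzActionGroupReceiver -Name "User1-email" -EmailReceiver -EmailAddress "user1@contoso.com"
$r2 = New-AzActionGroupReceiver -Name "User2-email" -EmailReceiver -EmailAddress "user2@contoso.com"

# Create (or update) the action group referenced by the alert rule.
Set-AzActionGroup -ResourceGroupName "RG1" -Name "ag-vm1-cpu" `
    -ShortName "vm1cpu" -Receiver $r1, $r2
```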
89
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125048-exam-az-104-topic-6-question-59-discussion/) You have an Azure subscription that contains two storage accounts named contoso101 and contoso102. The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image794.png) VNet1 has service endpoints configured as shown in the Service endpoints exhibit. (Click the Service endpoints tab.) ![Image](https://img.examtopics.com/az-104/image795.png) The Microsoft.Storage service endpoint has the service endpoint policy shown in the Microsoft.Storage exhibit. (Click the Microsoft.Storage tab.) ![Image](https://img.examtopics.com/az-104/image796.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image797.png) **
** NNY

* **N:** VM1 in subnet1 can access storage account contoso101 because the service endpoint policy allows access from subnet1 and contoso101 is specified in the policy. However, VM1 in subnet1 *cannot* access storage account contoso102 because the policy only includes contoso101.
* **N:** VM2 in subnet2 cannot access storage account contoso101 because the service endpoint for Microsoft.Storage is only configured for subnet1, not subnet2.
* **Y:** VM2 in subnet2 *can* access Azure Active Directory using its private IP address because a service endpoint for Azure AD is configured for subnet2. This allows secure access without needing a public IP address.

**Explanation of why other options are incorrect:** The provided solution (NNY) is based on the configuration shown in the images and the limitations of service endpoints within Azure. A different configuration of service endpoints or storage account access policies could lead to different results. The answer reflects an interpretation of the provided information; there is no explicit mention of conflicting opinions in the provided discussion.
90
[View Question](https://www.examtopics.com/discussions/databricks/view/125058-exam-az-104-topic-3-question-85-discussion/) You have an Azure subscription that contains the devices shown in the following table. ![Image](https://img.examtopics.com/az-104/image774.png) On which devices can you install Azure Storage Explorer? A. Device1 only B. Device1 and Device2 only C. Device1 and Device3 only D. Device1, Device2, and Device3 only E. Device1, Device3, and Device4 only
D. Device1, Device2, and Device3 only

Azure Storage Explorer is compatible with Windows, macOS, and Linux. The image shows Device 1 is Windows, Device 2 is macOS, and Device 3 is Linux. Therefore, Azure Storage Explorer can be installed on all three devices. Device 4 (iOS) is not supported.

Why other options are incorrect:

* **A, B, C, and E:** These options exclude at least one of the devices running a supported operating system (Windows, macOS, or Linux). The consensus among the discussion users supports option D.
91
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125059-exam-az-104-topic-3-question-86-discussion/) HOTSPOT - You have an Azure subscription. You plan to create an Azure Storage account as shown in the following exhibit. (Image shows a screenshot of Azure Storage Account creation with options for Account name, Location, Account kind, Performance, Replication, and Access tier) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. (Image shows a table with the following columns: Statement, Answer choice, and a drop-down menu for each row. The statements are partially visible: "Data is stored in...", "Infrequently accessed data can be stored cost-efficiently on the...") **
** The provided text does not give the exact statements to complete, only fragments. However, based on the discussion and the suggested answer (visible only as an image showing a selection), the correct answers likely relate to Azure Storage redundancy and access tiers.

* **Data is stored in...**: The answer is likely **three different copies**, either within a data center or across different availability zones or regions. This is because Azure Storage offers various redundancy options to ensure high availability and durability.
* **Infrequently accessed data can be stored cost-efficiently on the...**: The answer is likely the **Cold access tier**. This tier is designed for infrequently accessed data and is the lowest-cost online tier that can be set as an account's default access tier (the Archive tier is cheaper still, but it is offline and cannot be set as the default).

The discussion shows general agreement on this answer. There is no visible disagreement about the core principles, though some users only briefly confirm the solution's accuracy. Because the exact drop-down options are not provided, only a general answer can be given.
92
[View Question](https://www.examtopics.com/discussions/databricks/view/125073-exam-az-104-topic-3-question-84-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table. | Name | Kind | Replication | |-------------|------------------------------------|---------------| | storage1 | Standard general-purpose v1 (StorageV1) | Locally Redundant Storage (LRS) | | storage2 | Standard general-purpose v2 (StorageV2) | Locally Redundant Storage (LRS) | | storage3 | Blob Storage | Geo-Redundant Storage (GRS) | | storage4 | Premium block blobs (BlockBlobStorage) | Locally Redundant Storage (LRS) | Which storage account can be converted to zone-redundant storage (ZRS) replication? A. storage1 B. storage2 C. storage3 D. storage4
B. storage2 Explanation: According to the Microsoft documentation linked in the discussion, customer-initiated conversion to Zone-Redundant Storage (ZRS) is supported for Standard general-purpose v2 (StorageV2) accounts that currently use Locally Redundant Storage (LRS). Storage account 'storage2' meets these criteria. Why other options are incorrect: * **A. storage1:** This is a Standard general-purpose v1 (StorageV1) account, which is not eligible for conversion to ZRS. * **C. storage3:** This account uses Geo-Redundant Storage (GRS) replication. A GRS account can only reach ZRS via an intermediate conversion to LRS, and the question does not contemplate such a multi-step conversion. * **D. storage4:** Although premium block blob accounts can use ZRS, the dominant view in the discussion is that the conversion path the question intends applies to the Standard general-purpose v2 account; for premium accounts, ZRS generally has to be selected at creation or reached through a manual migration. Note: There is some disagreement in the discussion about whether storage4 should also be a correct answer. The dominant view supports only storage2, based on the criteria in the Microsoft documentation and the understanding that a direct LRS-to-ZRS conversion is the intended scenario.
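For reference, a customer-initiated conversion from LRS to ZRS is requested by updating the account's SKU; a minimal sketch assuming storage2 lives in a resource group named RG1 (not given in the question):

```bash
# Request conversion of an eligible Standard GPv2 account from LRS to ZRS.
# The conversion runs asynchronously and can take some time to complete.
az storage account update \
  --name storage2 \
  --resource-group RG1 \
  --sku Standard_ZRS
```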
93
[View Question](https://www.examtopics.com/discussions/databricks/view/125087-exam-az-104-topic-2-question-99-discussion/) You have an Azure subscription named Sub1 that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image769.png) You create a user named Admin1. To what can you add Admin1 as a co-administrator? A. RG1 B. MG1 C. Sub1 D. VM1
C. Sub1 Explanation: The correct answer is Sub1 because co-administrators in Azure can only be assigned at the subscription level. You cannot assign a co-administrator role to a resource group (RG1), management group (MG1), or individual virtual machine (VM1). A co-administrator assigned at the subscription level has full access to all resources within that subscription. Why other options are incorrect: * **A. RG1 (Resource Group):** Co-administrators are not assigned at the resource group level. * **B. MG1 (Management Group):** Co-administrators are not assigned at the management group level. * **D. VM1 (Virtual Machine):** Co-administrators are not assigned at the individual virtual machine level. Note: The provided discussion shows a unanimous agreement on the correct answer.
94
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125088-exam-az-104-topic-3-question-89-discussion/) You have an Azure subscription. The subscription contains a storage account named storage1 that has the lifecycle management rules shown in the following table. ![Image](https://img.examtopics.com/az-104/image782.png) On June 1, you store two blobs in storage1 as shown in the following table. ![Image](https://img.examtopics.com/az-104/image783.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image784.png) **
** The correct answers are: * **On June 6, File1 will be stored in the Cool access tier: No.** Rule 1 states that blobs in container1 are moved to Archive storage after 3 days, so on June 4 File1 is moved to Archive. An archived blob cannot simply appear in the Cool tier; it must first be rehydrated to an online tier (Hot or Cool), a process that can take up to 15 hours. * **On June 7, File2 will be stored in the Cool access tier: Yes.** Rule 5 applies to all blobs in container2 after 5 days. June 7 is 6 days after June 1, so File2 will be in Cool storage. * **On June 16, File2 will be stored in the Archive access tier: No.** Rule 3 dictates that blobs in container2 are deleted after 15 days, so File2 will be deleted on June 16. **Why other options are incorrect:** The discussion explains each option by applying the defined lifecycle management rules. The incorrect options fail to account for the specific rule that applies to each file and for the limitations on direct tier transitions in Azure Blob Storage. **Note:** The discussion shows a general consensus on the correct answers, although an initial back-and-forth with Copilot indicated some uncertainty that has since been resolved.
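To make the rehydration constraint concrete, moving an archived blob back to an online tier is an explicit operation; a minimal sketch with placeholder names:

```bash
# Rehydrate an archived blob directly to the Cool tier.
# Standard-priority rehydration can take up to 15 hours; High priority is faster.
az storage blob set-tier \
  --account-name storage1 \
  --container-name container1 \
  --name File1 \
  --tier Cool \
  --rehydrate-priority Standard \
  --auth-mode login
```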
95
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125124-exam-az-104-topic-5-question-147-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image791.png) Each virtual network has 50 connected virtual machines. You need to implement Azure Bastion. The solution must meet the following requirements: * Support host scaling. * Support uploading and downloading files. * Support the virtual machines on both VNet1 and VNet2. * Minimize the number of addresses on the Azure Bastion subnet. How should you configure Azure Bastion? To answer, select the options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image792.png) **
** To meet all requirements, Azure Bastion should be configured with a subnet size of `/26` and a Standard-SKU public IP address with static allocation. * **Subnet size (/26):** A `/26` subnet is the minimum required for host scaling; smaller subnets limit the number of scale units Azure Bastion can use. A larger subnet would also work, but `/26` minimizes the number of addresses used, as requested. The discussion notes that a `/26` subnet provides 59 usable IP addresses after Azure reserves five for internal use. * **Public IP (Standard SKU with static allocation):** Azure Bastion requires a Standard-SKU public IP, and the Bastion Standard SKU is what supports host scaling and file upload/download, which are explicit requirements. Static allocation ensures the IP address remains consistent, which is important for ongoing connectivity to the Bastion host. The discussion shows a consensus among users that this configuration is correct. **Why other options are incorrect:** The question does not present other options explicitly, but implicitly, any configuration not using a `/26` subnet or a Standard public IP with static allocation would fail one or more of the stated requirements (host scaling, file transfer, minimizing subnet size). Using a Basic public IP or a smaller subnet would violate these requirements.
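A sketch of this configuration from the Azure CLI (requires the `bastion` CLI extension; resource names and the subnet prefix are assumptions, not values from the exhibit):

```bash
# Azure Bastion requires a Standard SKU, statically allocated public IP.
az network public-ip create --resource-group RG1 --name bastion-pip \
  --sku Standard --allocation-method Static

# The subnet must be named AzureBastionSubnet; /26 is the minimum for host scaling.
az network vnet subnet create --resource-group RG1 --vnet-name VNet1 \
  --name AzureBastionSubnet --address-prefixes 10.0.1.0/26

# The Standard Bastion SKU enables host scaling and file upload/download.
az network bastion create --resource-group RG1 --name bastion1 \
  --vnet-name VNet1 --public-ip-address bastion-pip \
  --sku Standard --scale-units 4
```

Reaching the VMs on VNet2 from a Bastion host deployed in VNet1 relies on VNet peering, which the Standard SKU supports.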
96
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125160-exam-az-104-topic-3-question-88-discussion/) You have an Azure subscription that contains a storage account named storage1. The storage1 account contains blobs in a container named container1. You plan to share access to storage1. You need to generate a shared access signature (SAS). The solution must meet the following requirements: • Ensure that the SAS can only be used to enumerate and download blobs stored in container1. • Use the principle of least privilege. Which three settings should you enable? To answer, select the appropriate settings in the answer area. ![Image](https://img.examtopics.com/az-104/image780.png) (Image shows a selection screen with options for: Allowed resource types: (Blob, Container, Object) and Allowed permissions: (List, Read, Add, Write, Delete, Update, Process)) **
** The correct settings to enable are: * **Allowed resource types:** Container * **Allowed permissions:** List * **Allowed permissions:** Read **Explanation:** The goal is to allow only enumeration (listing) and downloading (reading) of blobs within the specified container. Selecting "Container" as the allowed resource type grants access to the container and the blobs it holds, including their metadata, which satisfies the requirement of reaching all blobs in container1. "List" allows enumeration of the blobs, and "Read" allows downloading them. Granting only these two permissions adheres to the principle of least privilege. **Why other options are incorrect:** * **Allowed resource types: Blob or Object:** The discussion treats "Object" as redundant once "Container" is selected, since "Container" already covers the blobs within it, while selecting only "Object" would limit access to individual blobs rather than all blobs in the container. * **Allowed permissions: Add, Write, Delete, Update, Process:** These permissions are unnecessary for enumerating and downloading blobs and would violate the principle of least privilege. **Note:** The provided discussion shows a consensus on the correct answer, with no evidence of conflicting opinions.
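For comparison, the Azure CLI can generate an account-level SAS with the same least-privilege intent; note that the CLI's `--resource-types` flag expects codes for both the container level (`c`, for listing) and the object level (`o`, for reading blob content). The account name and expiry below are placeholders:

```bash
# Generate an account SAS limited to the Blob service with List + Read only.
# --resource-types co = container (to enumerate blobs) + object (to download them).
az storage account generate-sas \
  --account-name storage1 \
  --services b \
  --resource-types co \
  --permissions lr \
  --expiry 2025-12-31T23:59Z
# The account key is read from the environment or can be passed via --account-key.
```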
97
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125198-exam-az-104-topic-2-question-96-discussion/) Your on-premises network contains a VPN gateway. You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image768.png) You need to ensure that all the traffic from VM1 to storage1 travels across the Microsoft backbone network. What should you configure? A. Azure AD Application Proxy B. private endpoints C. a network security group (NSG) D. Azure Peering Service **
** B. Private endpoints Private endpoints provide a private connection to Azure services, routing traffic through the Microsoft backbone network and avoiding the public internet. This aligns perfectly with the requirement to ensure all traffic from VM1 to storage1 uses the Microsoft backbone. **Why other options are incorrect:** * **A. Azure AD Application Proxy:** This is for providing secure access to on-premises applications, not for routing network traffic between Azure resources. * **C. Network Security Group (NSG):** An NSG controls network traffic based on rules, but it doesn't inherently force traffic to use the Microsoft backbone network. It manages traffic flow, not the underlying network path. * **D. Azure Peering Service:** This enables connectivity between your on-premises network and Azure, but it doesn't guarantee that traffic between two Azure resources will use the Microsoft backbone. It focuses on external connectivity, not internal. **Note:** The provided discussion strongly supports the answer as private endpoints. There is no conflicting information presented.
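A minimal sketch of creating such a private endpoint for the blob service of storage1, assuming a virtual network named VNet1 with a subnet named Subnet1 and a resource group named RG1 (none of which are given in the question):

```bash
# Create a private endpoint that maps storage1's blob service to a private IP
# in Subnet1, so traffic from VM1 stays on the Microsoft backbone network.
STORAGE_ID=$(az storage account show --name storage1 --resource-group RG1 --query id -o tsv)

az network private-endpoint create \
  --resource-group RG1 \
  --name pe-storage1 \
  --vnet-name VNet1 \
  --subnet Subnet1 \
  --private-connection-resource-id "$STORAGE_ID" \
  --group-id blob \
  --connection-name storage1-connection
# A privatelink.blob.core.windows.net private DNS zone is typically also needed
# so that the storage account's name resolves to the private IP.
```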
98
[View Question](https://www.examtopics.com/discussions/databricks/view/125199-exam-az-104-topic-2-question-97-discussion/) Your on-premises network contains a VPN gateway. You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image768.png) You need to ensure that all the traffic from VM1 to storage1 travels across the Microsoft backbone network. What should you configure? A. Azure AD Application Proxy B. service endpoints C. a network security group (NSG) D. Azure Firewall
B. service endpoints Service endpoints provide a secure and optimized route for traffic between your virtual network and Azure services over the Microsoft backbone network. By configuring a service endpoint for Storage on the VNet containing VM1, traffic destined for storage1 will be routed directly through the Microsoft backbone, avoiding the public internet. Why other options are incorrect: * **A. Azure AD Application Proxy:** This is used for secure access to on-premises applications, not for routing traffic between Azure resources. * **C. a network security group (NSG):** NSGs control network traffic based on rules, but they don't guarantee traffic will use the Microsoft backbone network. While an NSG *could* be used in conjunction with service endpoints for additional security, it's not the primary solution to the problem. * **D. Azure Firewall:** Azure Firewall is a more advanced network security solution, again, not directly involved in routing traffic specifically over the Microsoft backbone. Note: There is some discussion in the provided text regarding the difference between private endpoints and service endpoints. While private endpoints also offer secure connectivity, the question specifically asks about using the Microsoft backbone network. Service endpoints are the most direct way to achieve this.
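A sketch of the two-step configuration, enabling the endpoint on VM1's subnet and then allowing that subnet through the storage account firewall (resource names are assumptions):

```bash
# Enable the Microsoft.Storage service endpoint on the subnet hosting VM1.
az network vnet subnet update \
  --resource-group RG1 --vnet-name VNet1 --name Subnet1 \
  --service-endpoints Microsoft.Storage

# Allow traffic from that subnet on storage1's firewall.
az storage account network-rule add \
  --resource-group RG1 --account-name storage1 \
  --vnet-name VNet1 --subnet Subnet1
```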
99
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125312-exam-az-104-topic-3-question-83-discussion/) You have an Azure Storage account named storage1 that contains two containers named container1 and container2. Blob versioning is enabled for both containers. You periodically take blob snapshots of critical blobs. You create the following lifecycle management policy: ![Image](https://img.examtopics.com/az-104/image770.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image771.png) **
** YNN * **Statement 1 (Tiering to Cool): Yes.** The policy includes a rule that moves blobs in container1 with a prefix of "container1/" to the Cool tier after 15 days. The image shows this explicitly. * **Statement 2 (Rule 1 applies to container2): No.** The policy explicitly targets only blobs in `container1` due to the `prefixMatch` condition ("container1/"). Blobs in `container2` are not affected by this rule. * **Statement 3 (Archive after 30 days): No.** While the policy indicates archiving after 30 days for blobs in `container1`, rehydrated files (moved from Archive to Hot or Cool and then back) are not automatically re-archived after another 30 days. The policy requires a 7-day gap since the last tier change before archiving can occur again. Also, blob snapshots are not explicitly included in the policy. **Explanation of why other options are incorrect:** The discussion reveals disagreement on the correct answer. Some users suggest YNN, while others suggest NNN. The provided explanation leans towards YNN, based on a careful reading of the provided lifecycle policy. The disagreement likely stems from ambiguity around the handling of rehydrated blobs and the explicit exclusion of snapshots in the policy. The NNN suggestion incorrectly interprets the policy regarding rehydration and also ignores the clear definition of the tiering to Cool policy. The lack of explicit inclusion of snapshots in the policy also contributes to the ambiguity.
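To make the `prefixMatch` behavior concrete, a minimal lifecycle policy of the kind discussed can be applied from the CLI; the rule shape mirrors Rule 1, and the names and day counts are illustrative:

```bash
# A lifecycle rule scoped by prefixMatch: only blobs under container1/ are tiered.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "rule1",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "container1/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 15 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --account-name storage1 --resource-group RG1 --policy @policy.json
```

Because the filter is `container1/`, blobs in container2 are untouched, which is why the rule does not apply to container2.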
100
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125314-exam-az-104-topic-3-question-87-discussion/) HOTSPOT - You have an Azure Storage account named storage1 that contains a container named container1. The container1 container stores thousands of image files. You plan to use an Azure Resource Manager (ARM) template to create a blob inventory rule named rule1. You need to ensure that only blobs whose names start with the word finance are stored daily as a CSV file in container1. How should you complete rule1? To answer, select the options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image778.png) *(Image content not provided, but context suggests it shows a partially completed ARM template for a blob inventory rule)* **
** The correct answer is represented by the image ![Image](https://img.examtopics.com/az-104/image779.png) (Image content not provided, but based on the question and discussion, it likely shows the correctly completed ARM template fields to filter for blobs starting with "finance" and specifying a daily CSV output). The explanation comes from understanding the requirements: the rule needs to filter blobs based on their name prefix ("finance") and output to a CSV file daily. The suggested answer reflects the correct configuration within the ARM template to achieve this. The discussion mentions that BlockBlob is the most suitable type for CSV files, which indirectly supports the correctness of the suggested answer. **Why other options are incorrect:** Without access to the content of the images (image778.png and image779.png), it's impossible to list and explain the specific incorrect options. However, incorrect options would likely involve: incorrect filter criteria (e.g., not specifying "finance" as the prefix), incorrect output type (e.g., not specifying CSV), incorrect frequency (e.g., not specifying daily), or incorrect destination container. **Note:** There is no explicit disagreement in the provided discussion. The user comment only reinforces the choice of BlockBlob, which is implied as correct by the suggested answer. However, without seeing the multiple-choice options represented in the image, a more comprehensive explanation of why other choices are incorrect cannot be provided.
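Although the exhibit is unavailable, a blob inventory rule matching the stated requirements (prefix `finance`, daily CSV output to container1) would look roughly like the following sketch; the schema fields chosen are illustrative:

```bash
cat > inventory.json <<'EOF'
{
  "enabled": true,
  "type": "Inventory",
  "rules": [
    {
      "enabled": true,
      "name": "rule1",
      "destination": "container1",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "finance" ]
        },
        "format": "Csv",
        "schedule": "Daily",
        "objectType": "Blob",
        "schemaFields": [ "Name", "Content-Length", "Last-Modified" ]
      }
    }
  ]
}
EOF

az storage account blob-inventory-policy create \
  --account-name storage1 --resource-group RG1 --policy @inventory.json
```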
101
**** [View Question](https://www.examtopics.com/discussions/databricks/view/125320-exam-az-104-topic-4-question-102-discussion/) Your company purchases a new Azure subscription. You create a file named Deploy.json as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image787.png) You connect to the subscription and run the following cmdlet: `New-AzDeployment -Location westus -TemplateFile “deploy.json”` For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image788.png) **
** YNY The correct answer is YNY (Yes, No, Yes). The deployment creates three resource groups (RG0, RG1, RG2) because the `copyIndex()` function in the JSON template iterates three times, naming the resource groups by index (0, 1, and 2). * **Yes:** You can deploy to RG1 even though it has a delete lock. A delete lock prevents deletion of the resource group, but it does not prevent deployments *within* the resource group. * **No:** You cannot deploy to RG2 because it has a read-only lock. A read-only lock prevents any modifications to the resource group, including new deployments. * **Yes:** RG3 does not exist yet (the script creates only RG0, RG1, and RG2), so it can still be created manually and deployed to. There is a consensus among the users in the discussion that YNY is the correct answer. While some users only state the answer, others explain the logic behind the YNY sequence, citing the Microsoft documentation on resource locks.
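The lock semantics behind these answers can be reproduced from the CLI; a delete lock blocks deletion but not deployments, while a read-only lock blocks both:

```bash
# CanNotDelete: the resource group cannot be deleted, but deployments into it succeed.
az lock create --name DeleteLock --lock-type CanNotDelete --resource-group RG1

# ReadOnly: all write operations, including new deployments, are rejected.
az lock create --name ReadOnlyLock --lock-type ReadOnly --resource-group RG2
```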
102
[View Question](https://www.examtopics.com/discussions/databricks/view/125321-exam-az-104-topic-5-question-146-discussion/) You have an Azure subscription. You plan to migrate 50 virtual machines from VMware vSphere to the subscription. You create a Recovery Services vault. What should you do next? A. Configure an extended network. B. Create a recovery plan. C. Deploy an Open Virtualization Application (OVA) template to vSphere. D. Configure a virtual network.
D. Configure a virtual network. To migrate VMs from VMware vSphere to Azure using Azure Site Recovery, a virtual network in Azure is required to host the migrated VMs. The Recovery Services vault is a prerequisite, but the next step is to prepare the Azure target environment by configuring the virtual network. The extended network (option A) is configured *after* the initial virtual network is set up. Creating a recovery plan (option B) comes later in the process, after the target network is ready. Deploying an OVA to vSphere (option C) is irrelevant since the VMs are already in vSphere; the migration is *from* vSphere *to* Azure. There is a disagreement in the discussion regarding the correct answer. While the majority agree on option D, one user suggests option C. The explanation above reflects the majority opinion and aligns with the general Azure Site Recovery workflow.
103
[View Question](https://www.examtopics.com/discussions/databricks/view/125346-exam-az-104-topic-4-question-101-discussion/) You have an Azure subscription that contains a resource group named RG1. You plan to create a storage account named storage1. You have a Bicep file named File1. You need to modify File1 so that it can be used to automate the deployment of storage1 to RG1. Which property should you modify? A. kind B. scope C. sku D. location
B. scope The `scope` property in a Bicep file specifies the target resource group or subscription where the resources defined in the file should be deployed. To deploy `storage1` to `RG1`, you would modify the `scope` property in `File1` to target `RG1`. Why other options are incorrect: * **A. kind:** The `kind` property defines the type of the storage account (e.g., StorageV2, BlobStorage); it does not control where the account is deployed. * **C. sku:** The `sku` property specifies the pricing tier and replication of the storage account; it does not determine the deployment target. * **D. location:** The `location` property specifies the Azure region where the storage account will reside, whereas the `scope` property determines the resource group, the parent container, into which it is deployed. Note: The provided discussion shows unanimous agreement on the correct answer.
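By default a Bicep file targets the resource-group scope (`targetScope = 'resourceGroup'`), and the concrete group is then supplied at deployment time; a minimal sketch:

```bash
# Deploy the Bicep file to resource group RG1.
az deployment group create \
  --resource-group RG1 \
  --template-file File1.bicep
```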
104
[View Question](https://www.examtopics.com/discussions/databricks/view/125368-exam-az-104-topic-4-question-100-discussion/) You have an Azure subscription. You create the following Azure Resource Manager (ARM) template named Template.json. ![Image](https://img.examtopics.com/az-104/image786.png) You need to deploy Template.json. Which PowerShell cmdlet should you run from Azure Cloud Shell? A. New-AzSubscriptionDeployment B. New-AzManagementGroupDeployment C. New-AzResourceGroupDeployment D. New-AzTenantDeployment
The correct answer is **A. New-AzSubscriptionDeployment**. The image shows an ARM template, and the question asks to deploy it. While the provided image is missing, the context of the discussion strongly suggests that the template is designed for deployment across the entire subscription. `New-AzSubscriptionDeployment` is the appropriate cmdlet for this. `New-AzResourceGroupDeployment` (Option C) is used to deploy resources within a specific resource group, not the entire subscription. `New-AzManagementGroupDeployment` (Option B) deploys resources across multiple subscriptions within a management group. `New-AzTenantDeployment` (Option D) is used for organization-wide deployments, which is generally less common. There is some disagreement in the discussion. Some comments suggest that `New-AzResourceGroupDeployment` might be used if the template creates a new resource group as part of the deployment process; however, the question doesn't specify resource group creation. The prevailing and most accurate interpretation is that the question is referring to a subscription-level deployment.
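For readers working in the Azure CLI rather than PowerShell, the subscription-scope equivalent is `az deployment sub create`; a minimal sketch:

```bash
# Subscription-scope deployment: no resource group is specified,
# only a location for storing the deployment metadata.
az deployment sub create \
  --location westus \
  --template-file Template.json
```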
105
[View Question](https://www.examtopics.com/discussions/databricks/view/125411-exam-az-104-topic-4-question-103-discussion/) You have an Azure subscription that contains the resources shown in the following table. | Resource Name | Resource Type | Location | |---|---|---| | VMSS1 | Virtual Machine Scale Set | West US | | Proximity1 | Proximity Placement Group | West US | | Proximity2 | Proximity Placement Group | East US | | Proximity3 | Proximity Placement Group | West Europe | You need to configure a proximity placement group for VMSS1. Which proximity placement groups should you use? A. Proximity2 only B. Proximity1, Proximity2, and Proximity3 C. Proximity1 only D. Proximity1 and Proximity3 only
C. Proximity1 only Proximity placement groups ensure Azure compute resources are physically close, minimizing latency. Since VMSS1 is located in West US, and only Proximity1 is also in West US, only Proximity1 is suitable. Using other proximity placement groups in different regions would negate the benefit of proximity placement. The discussion shows overwhelming agreement on this answer. There is no significant disagreement regarding the correct option.
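A sketch of binding the scale set to the co-located placement group at creation time (the image alias and resource group are illustrative):

```bash
# The proximity placement group must be in the same region as the scale set.
az vmss create \
  --resource-group RG1 \
  --name VMSS1 \
  --location westus \
  --image Ubuntu2204 \
  --ppg Proximity1
```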
106
[View Question](https://www.examtopics.com/discussions/databricks/view/126127-exam-az-104-topic-2-question-98-discussion/) Your on-premises network contains a VPN gateway. You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image768.png) You need to ensure that all the traffic from VM1 to storage1 travels across the Microsoft backbone network. What should you configure? A. Azure Application Gateway B. service endpoints C. a network security group (NSG) D. Azure Peering Service
The correct answer is **B. service endpoints**. Service endpoints ensure that traffic between VMs and Azure storage stays within the Microsoft Azure backbone network. This improves performance, security, and reduces latency. Why other options are incorrect: * **A. Azure Application Gateway:** Application Gateways are used for load balancing and web application firewall functionality, not for ensuring traffic routing across the Microsoft backbone. * **C. a network security group (NSG):** NSGs control network traffic based on rules, but they don't inherently guarantee traffic will use the Microsoft backbone network. * **D. Azure Peering Service:** Peering services connect your on-premises network to Azure, but it doesn't automatically restrict traffic to the Microsoft backbone for communication between Azure resources. Note: The discussion includes anecdotal evidence of this question appearing on an exam and the suggested answer being correct, however there is no official verification of this.
107
[View Question](https://www.examtopics.com/discussions/databricks/view/129668-exam-az-104-topic-5-question-148-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image816.png) You need to ensure that all the traffic between VNet1 and VNet2 traverses the Microsoft backbone network. What should you configure? A. a private endpoint B. peering C. Express Route D. a route table
B. Peering Explanation: Peering allows two virtual networks to communicate as if they were a single network, utilizing the Microsoft backbone network for traffic flow. The discussion and suggested answer confirm this. Using peering is the recommended approach for inter-VNet communication, offering optimal performance. Why other options are incorrect: * **A. a private endpoint:** Private endpoints provide private access to Azure services, not connectivity between virtual networks. The discussion highlights the initial confusion some users had due to the term "endpoint", but it is not the correct solution. * **C. Express Route:** While ExpressRoute can connect multiple virtual networks (private peering), Microsoft explicitly advises against this for inter-VNet connectivity, recommending VNet peering instead for better performance. * **D. a route table:** Route tables manage routing within a single virtual network or between a virtual network and on-premises networks. They don't establish connectivity between virtual networks directly. Note: The discussion shows some initial confusion about the correct answer, with several users initially suggesting a private endpoint. However, the consensus and provided links ultimately support peering as the correct and recommended solution.
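Peering is created as a pair of one-directional links, one from each side; a minimal sketch assuming both VNets share a resource group named RG1:

```bash
# Create both directions of the peering; traffic between VNet1 and VNet2
# then flows over the Microsoft backbone network.
az network vnet peering create --resource-group RG1 \
  --name VNet1-to-VNet2 --vnet-name VNet1 \
  --remote-vnet VNet2 --allow-vnet-access

az network vnet peering create --resource-group RG1 \
  --name VNet2-to-VNet1 --vnet-name VNet2 \
  --remote-vnet VNet1 --allow-vnet-access
```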
108
[View Question](https://www.examtopics.com/discussions/databricks/view/129669-exam-az-104-topic-5-question-149-discussion/) You have the Azure virtual networks shown in the following table. | Name | Address space | Subnet | Resource group | Azure region | |---|---|---|---|---| | VNet1 | 10.11.0.0/16 | 10.11.0.0/17 | | West US | | VNet2 | 10.11.0.0/17 | 10.11.0.0/25 | | West US | | VNet3 | 10.10.0.0/22 | 10.10.1.0/24 | | East US | | VNet4 | 192.168.16.0/22 | 192.168.16.0/24 | | North Europe | Which virtual networks can you peer with VNet1? A. VNet2, VNet3, and VNet4 B. VNet2 only C. VNet3 and VNet4 only D. VNet2 and VNet3 only
C. VNet3 and VNet4 only VNet1 cannot be peered with VNet2 because their address spaces overlap (10.11.0.0/16 and 10.11.0.0/17). VNet peering requires non-overlapping IP address spaces. VNet3 and VNet4 have address spaces that do not overlap with VNet1's address space, making them suitable for peering. Why other options are incorrect: * **A. VNet2, VNet3, and VNet4:** Incorrect because VNet1 and VNet2 have overlapping address spaces. * **B. VNet2 only:** Incorrect because VNet1 and VNet2 have overlapping address spaces. * **D. VNet2 and VNet3 only:** Incorrect because VNet1 and VNet2 have overlapping address spaces. Note: The discussion shows some disagreement on whether the subnet or the address space needs to be considered for overlap. The correct answer is based on the principle that overlapping *address spaces* prevent VNet peering.
109
**** [View Question](https://www.examtopics.com/discussions/databricks/view/129670-exam-az-104-topic-3-question-90-discussion/) You have an Azure Storage account named contoso2024 that contains the resources shown in the following table. (Image depicting a table of storage resources including File1 and File2 is present but not directly quoted here because it's visual.) You have users that have permissions for contoso2024 as shown in the following table. (Image depicting a table showing user permissions is present but not directly quoted here because it's visual.) The contoso2024 account is configured as shown in the following exhibit. (Image depicting storage account configuration, showing that access keys are disabled, is present but not directly quoted here because it's visual.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. (Image depicting three statements to evaluate: 1. User1 can read File1. 2. User2 can read File2. 3. User3 can read File3.) **
** No - Yes - No * **Statement 1 (User1 can read File1): No.** User1 does not have the necessary roles (Storage Blob Data Reader or Storage Blob Data Contributor) to access File1 directly. The discussion highlights that even with read access to the storage account, access may be denied because the access keys are disabled. While SAS tokens are mentioned, the provided information does not establish that User1 holds a SAS token with sufficient permissions to access File1. * **Statement 2 (User2 can read File2): Yes.** User2 has the Storage Account Contributor role, which grants extensive permissions, including the ability to read File2. * **Statement 3 (User3 can read File3): No.** The access keys are disabled on the storage account, and the discussion confirms that this blocks key-based access even for otherwise-permitted users unless a user delegation SAS token is used. Nothing in the question indicates that User3 has such a token. **Why other options are incorrect:** The suggested answers in the discussion vary and reflect some uncertainty and differing experiences. Some answers suggest a "YNN" pattern, but the prevailing consensus is that only User2 can read the relevant file, owing to the Storage Account Contributor role, while the disabled access keys block the others. There is no consensus on whether User1 or User3 could access the files without the proper SAS tokens. The provided answer reflects the most likely outcome based on the information given and general Azure security principles.
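The user delegation SAS mentioned above is signed with Microsoft Entra credentials rather than an account key, so it continues to work when shared key access is disabled; a minimal sketch with placeholder blob name and expiry:

```bash
# Generate a user delegation SAS: --auth-mode login + --as-user signs the
# token with Entra credentials instead of the (disabled) account keys.
az storage blob generate-sas \
  --account-name contoso2024 \
  --container-name container1 \
  --name File1 \
  --permissions r \
  --expiry 2025-12-31T23:59Z \
  --auth-mode login \
  --as-user
```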
110
**** [View Question](https://www.examtopics.com/discussions/databricks/view/129702-exam-az-104-topic-2-question-100-discussion/) You have a Microsoft Entra tenant that contains the groups shown in the following table. ![Image](https://img.examtopics.com/az-104/image799.png) The tenant contains the users shown in the following table. ![Image](https://img.examtopics.com/az-104/image800.png) Which users and groups can you delete? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image801.png) **
** You can delete all users (User1, User2, User3, and User4). You can delete Group 2 and Group 4. **Explanation:** The discussion shows a consensus that all users can be deleted regardless of license assignment (the license is simply freed). However, there is disagreement regarding groups. While some users claim that only groups without active license assignments (Group 2 and Group 4) can be deleted, others suggest that deleting a group with assigned licenses results in an error ("Failed to delete group. Details: The group has an active license. So it cannot be deleted"). The provided suggested answer image supports the latter interpretation. Therefore, the most accurate answer is that you can delete all users but only groups without active license assignments. The conflicting opinions in the discussion highlight this ambiguity. **Why other options are incorrect (implicitly):** The discussion does not provide enough information to definitively confirm which groups are deletable. The suggested answer image provided may not be entirely accurate, given the conflicting accounts within the discussion thread. If a group has an active license assignment, attempting to delete it will likely fail.
111
**** [View Question](https://www.examtopics.com/discussions/databricks/view/129729-exam-az-104-topic-4-question-104-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image809.png) The subscription contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image810.png) The subscription contains the Azure App Service web apps shown in the following table. ![Image](https://img.examtopics.com/az-104/image811.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image812.png) **
** YNY * **Statement 1 (Yes):** This statement is true. Virtual network integration allows an Azure App Service app to access resources within its integrated VNet and also resources in peered VNets. Since VNet1 and VNet2 are peered, the app on App Service Plan1 (in VNet2) can communicate with VM1 (in VNet1). * **Statement 2 (No):** This statement is false. Virtual network integration isn't solely for outbound calls; it enables bidirectional communication between the app service and resources within the virtual network and peered networks. * **Statement 3 (Yes):** There is disagreement in the discussion here. The initial highly voted answer claims the statement is false because of limitations with isolated plans, but SDiwan correctly points out that isolated apps *already* run inside a subnet of a VNet. Because VNet2 (where the isolated app resides) is peered with VNet1, the app *can* communicate with resources in VNet1; VNet integration is not needed because the app is natively within the VNet. **Why other options are incorrect:** The suggested answer in the original post (YNN) is incorrect because statement 3 is true when the context of isolated App Service plans and VNet peering is considered. The discussion highlights this conflict and clarifies the point.
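For non-isolated plans, the integration described in Statement 1 is configured per app from the CLI; a sketch with assumed names (the integration subnet must be empty and delegated to `Microsoft.Web/serverFarms`):

```bash
# Integrate a web app with a subnet in VNet2 so its outbound calls can reach
# resources in VNet2 and in peered VNets such as VNet1.
az webapp vnet-integration add \
  --resource-group RG1 \
  --name WebApp1 \
  --vnet VNet2 \
  --subnet integration-subnet
```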
112
[View Question](https://www.examtopics.com/discussions/databricks/view/129750-exam-az-104-topic-5-question-150-discussion/) You have an Azure subscription. You are creating a new Azure container instance that will have the following settings: • Container name: cont1 • SKU: Standard • OS type: Windows • Networking type: Public • Memory (GiB): 2.5 • Number of CPU cores: 2 You discover that the Private setting for Networking type is unavailable. You need to ensure that cont1 can be configured to use private networking. Which setting should you change? A. Memory (GiB) B. Networking type C. Number of CPU cores D. OS type E. SKU
The suggested answer is B, but the discussion reveals conflicting opinions and outdated information. Initially, the answer was thought to be B (change the Networking type), however, later comments suggest that private networking for Windows containers is now supported, making the original premise of the question potentially inaccurate. Some users claim that changing the OS type (Option D) was the correct solution in the past, due to limitations with private networking for Windows containers. The current consensus is that there is no longer a need to change the OS type or any other setting to enable private networking for Windows containers. Why other options are incorrect: * **A. Memory (GiB):** Memory allocation is unrelated to networking type. * **C. Number of CPU cores:** The number of CPU cores doesn't affect networking configuration. * **D. OS type:** While initially believed to be the correct answer due to past limitations, this is now outdated. Private networking is supported for Windows containers. * **E. SKU:** The SKU (pricing tier) does not dictate networking capabilities. Note: The question and its suggested answer appear to be based on outdated information regarding Azure Container Instances. The discussion highlights that the ability to use private networking with Windows containers has changed over time.
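Assuming the subscription and region support VNet deployment for the chosen OS type (which, as the note above explains, has changed over time for Windows containers), private networking is requested with `--vnet` and `--subnet`; all names and the image are illustrative:

```bash
# Create the container instance inside a virtual network (private networking).
az container create \
  --resource-group RG1 \
  --name cont1 \
  --image mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022 \
  --os-type Windows \
  --cpu 2 \
  --memory 2.5 \
  --vnet VNet1 \
  --subnet aci-subnet
```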
113
[View Question](https://www.examtopics.com/discussions/databricks/view/130127-exam-az-104-topic-4-question-106-discussion/) You have an Azure subscription that contains an Azure container registry named ContReg1. You enable the Admin user for ContReg1. Which username can you use to sign in to ContReg1? A. root B. admin C. administrator D. ContReg1
D. ContReg1 Explanation: When the admin user is enabled for an Azure Container Registry (ACR), the username used to sign in is the registry's name. Since the registry name is ContReg1, the correct username is ContReg1. This is confirmed by multiple users in the discussion who tested this in a lab environment. They found that hovering over the "Mark" in the Azure portal displayed an explanation stating that the admin user is identical to the container registry's name. Why other options are incorrect: * **A. root:** This is a common username for administrative access in many systems, but it's not the username for the enabled admin user in Azure Container Registry. * **B. admin:** While "admin" is a common administrative username, it's not the specific username used in this Azure Container Registry scenario. * **C. administrator:** Similar to option B, this is a generic administrative username and not the correct one for this specific context. Note: There is some disagreement in the discussion. Some users initially suggested options B or C, but the majority and the most highly upvoted responses confirm that the correct answer is D, based on practical testing within the Azure portal.
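This behavior can be confirmed from the CLI; after enabling the admin user, the credential's username is the registry name itself:

```bash
# Enable the admin user on the registry.
az acr update --name ContReg1 --admin-enabled true

# The username returned here is the registry name: ContReg1.
az acr credential show --name ContReg1 --query username
```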
114
[View Question](https://www.examtopics.com/discussions/databricks/view/130133-exam-az-104-topic-4-question-105-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table. | Resource Group | Location | Resource Type | Resource Name | | -------------- | ------------- | -------------- | -------------- | | RG1 | West Europe | Recovery Services Vault | Vault1 | | RG2 | East US | Recovery Services Vault | Vault2 | You create virtual machines in Subscription1 as shown in the following table. | VM Name | Location | Resource Group | OS | | ------- | ------------- | -------------- | ------------- | | VM1 | West Europe | RG3 | Windows | | VM2 | East US | RG3 | Windows | | VM3 | West Europe | RG4 | Linux | | VMA | West Europe | RG4 | Windows | | VMB | East US | RG4 | Linux | | VMC | West Europe | RG3 | Linux | You plan to use Vault1 for the backup of as many virtual machines as possible. Which virtual machines can be backed up to Vault1? A. VM1 only B. VM3 and VMC only C. VM1, VM2, VM3, VMA, VMB, and VMC D. VM1, VM3, VMA, and VMC only E. VM1 and VM3 only
D. VM1, VM3, VMA, and VMC only Explanation: The key constraint is that the Recovery Services Vault (Vault1) and the virtual machines must reside in the same Azure region for backups. Vault1 is located in West Europe. Therefore, only the VMs located in West Europe (VM1, VM3, VMA, and VMC) can be backed up to it. VM2 and VMB are in East US and cannot be backed up to the West Europe vault. The resource group of the VMs and the vault are irrelevant to the backup process, only the region matters. Why other options are incorrect: * **A. VM1 only:** This is incorrect because VM3, VMA, and VMC are also in West Europe and can be backed up to Vault1. * **B. VM3 and VMC only:** This is incorrect because VM1 and VMA are also eligible for backup. * **C. VM1, VM2, VM3, VMA, VMB, and VMC:** This is incorrect because VM2 and VMB are in a different region (East US). * **E. VM1 and VM3 only:** This is incorrect because VMA and VMC are also in West Europe and can be backed up to Vault1. Note: The discussion shows unanimous agreement on the correct answer.
115
[View Question](https://www.examtopics.com/discussions/databricks/view/130228-exam-az-104-topic-2-question-101-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image803.png) You need to ensure that data transfers between storage1 and VM1 do NOT traverse the internet. What should you configure for storage1? A. data protection B. a private endpoint C. Public network access in the Firewalls and virtual networks settings D. a shared access signature (SAS)
The correct answer is **B. a private endpoint**. A private endpoint provides a private IP address within your virtual network (VNet). Traffic between VM1 and storage1 will then use this private IP address and travel solely over the Microsoft backbone network, avoiding the public internet. Why other options are incorrect: * **A. data protection:** Data protection mechanisms don't directly control the network path of data transfers. * **C. Public network access in the Firewalls and virtual networks settings:** This setting controls whether the storage account is accessible from the public internet. Disabling public access *might* seem like a solution, but it doesn't guarantee that traffic will use a private connection; it merely restricts public access. A private endpoint is required for guaranteed private communication. * **D. a shared access signature (SAS):** SAS provides access control to your storage account but does not dictate the network path the data takes. Traffic can still traverse the public internet even with a SAS. Note: The discussion section shows overwhelming agreement on the correct answer.
116
**** [View Question](https://www.examtopics.com/discussions/databricks/view/146147-exam-az-104-topic-3-question-92-discussion/) Your network contains an on-premises Active Directory Domain Services (AD DS) domain. The domain contains the identities shown in the following table. ![Image](https://img.examtopics.com/az-104/image836.png) *(Image contains a table showing User1 in OU2 and User2 in OU1. Group1 is in OU1 and contains User1.)* You have an Azure subscription that contains a storage account named storage1. The file shares in storage1 have an identity source of AD DS and Default share-level permissions set to Enable permissions for all authenticated users and groups. You create an Azure Files share named share1 that has the roles shown in the following table. ![Image](https://img.examtopics.com/az-104/image837.png) *(Image shows User2 has Owner permissions on share1, and Group1 has Contributor permissions.)* You have a Microsoft Entra tenant that contains a cloud-only user named User3. You use Microsoft Entra Connect to sync OU1 from the AD DS domain to the Microsoft Entra tenant. For each of the following statements, select Yes if the statement is true. Otherwise, select No. ![Image](https://img.examtopics.com/az-104/image838.png) *(Image contains three statements: Can User1 access share1? Can User2 access share1? Can User3 access share1?)* **
** The correct answer is **No, Yes, No**. * **User1:** No. User1 is located in OU2, which is not synchronized with the Azure AD tenant. Therefore, User1's identity is not available in Azure and cannot access share1, even though they are a member of Group1 which has contributor access. The synchronization of OU1 only includes identities and groups within OU1. * **User2:** Yes. User2 is in OU1, which is synchronized. User2 also has Owner permissions on share1. Therefore, User2 can access share1. * **User3:** No. User3 is a cloud-only user and is not part of the on-premises AD sync. Default share-level permissions are enabled for authenticated users, but User3 is not authenticated through AD DS. **Why other options are incorrect:** The discussion shows a range of opinions, with some users initially believing User1 might have access due to Group1 membership. However, the consensus and the most accurate interpretation of the scenario focuses on the critical factor of OU2 *not* being synced to Azure AD. This directly prevents User1's identity from being present in Azure to utilize the permissions granted to Group1. The suggestion to test this scenario in ChatGPT is an effective approach to gain additional clarity.
117
[View Question](https://www.examtopics.com/discussions/databricks/view/146178-exam-az-104-topic-3-question-94-discussion/) You have an Azure Storage account named storage1. You need to enable a user named User1 to list and regenerate storage account keys for storage1. Solution: You assign the Reader and Data Access role to User1. Does this meet the goal? A. Yes B. No
B. No The correct answer is B because assigning the Reader and Data Access roles to User1 does *not* grant the necessary permissions to list and regenerate storage account keys. The consensus among users in the discussion is that the "Storage Account Key Operator" role is required for this action. The Reader and Data Access roles provide access to storage account data, but not to management operations like key regeneration. Why other options are incorrect: * **A. Yes:** This is incorrect. As explained above, the Reader and Data Access roles lack the required permissions. There is some disagreement in the discussion with one user suggesting the "Storage Account Encryption Scope Contributor" role. However, the majority and most upvoted comments support the "Storage Account Key Operator" role as the correct solution. This answer reflects the prevailing and seemingly more accurate consensus.
118
**** [View Question](https://www.examtopics.com/discussions/databricks/view/146190-exam-az-104-topic-3-question-70-discussion/) You have an Azure Storage account named storage1. You need to enable a user named User1 to list and regenerate storage account keys for storage1. Solution: You assign the Storage Account Encryption Scope Contributor Role to User1. Does this meet the goal? A. Yes B. No **
** B. No The Storage Account Encryption Scope Contributor role only allows management of encryption scopes within a storage account. It does *not* grant permissions to list or regenerate storage account keys. To achieve the goal, the Storage Account Key Operator role or the Storage Account Contributor role (which includes key management permissions) should be assigned to User1. The discussion shows unanimous agreement on this point.
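For reference, a sketch of the assignment that *would* meet the goal, along with the operations it unlocks (the user principal name and resource group are assumptions):

```bash
# Assign the built-in role that permits listing and regenerating account keys.
SCOPE=$(az storage account show --name storage1 --resource-group RG1 --query id -o tsv)
az role assignment create \
  --assignee user1@contoso.com \
  --role "Storage Account Key Operator Service Role" \
  --scope "$SCOPE"

# User1 can now list and regenerate the keys.
az storage account keys list --account-name storage1 --resource-group RG1
az storage account keys renew --account-name storage1 --resource-group RG1 --key primary
```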
119
**** [View Question](https://www.examtopics.com/discussions/databricks/view/146209-exam-az-104-topic-5-question-104-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. ![Image](https://img.examtopics.com/az-104/image845.png) You have the peering options shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image846.png) You need to design a communication strategy for the resources on the virtual networks. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image847.png) **
** The correct answer is N N N (No, No, No). Based on the provided discussion and the limitations of Azure VNet peering, none of the statements in the image ([Image](https://img.examtopics.com/az-104/image847.png)) are true. Azure VNet peering does not allow connections between: * **Azure Government and Public Azure:** These are separate cloud environments. * **Azure Government and Azure China:** These are separate cloud environments. * **Azure China and Public Azure:** These are separate cloud environments. Peering is only possible within the same Azure cloud environment (Public Azure, Azure Government, or Azure China). The statements imply cross-environment peering which is not supported. **Why other options are incorrect:** Any option other than N N N would incorrectly suggest that peering is possible across different Azure cloud environments (Public, Government, China), which is explicitly stated as unsupported in the discussion. The discussion highlights that Azure global regions and Azure China regions are physically disconnected and therefore cannot be peered. **Note:** While the suggested answer image ([Image](https://img.examtopics.com/az-104/image937.png)) is not shown in the provided text, it's highly likely it also reflects N N N, based on the consensus of the discussion. There is no evidence of disagreement on this core point in the provided discussion.
120
**** [View Question](https://www.examtopics.com/discussions/databricks/view/146210-exam-az-104-topic-5-question-130-discussion/) You have two Azure subscriptions named Sub1 and Sub2 that are linked to separate Microsoft Entra tenants. ![Image](https://img.examtopics.com/az-104/image844.png) You have the virtual networks shown in the following table. Which virtual networks can you peer with VNet1? A. VNet2 only B. VNet2 and VNet3 only C. VNet2 and VNet4 only D. VNet2, VNet3, and VNet4 only E. VNet2, VNet3, VNet4, and VNet5 **
** E. VNet2, VNet3, VNet4, and VNet5 Virtual network peering allows VNets in different subscriptions and even different tenants to be peered together, provided that their address spaces don't overlap. Based on the provided image (which is missing from this text transcription but referenced in the original URL), it appears that none of the listed VNets have overlapping address spaces with VNet1. Therefore, VNet1 can be peered with all the other VNets listed (VNet2, VNet3, VNet4, and VNet5). **Why other options are incorrect:** Options A, B, C, and D incorrectly limit the number of VNets that can be peered with VNet1. The provided information and the discussion indicate that cross-tenant and cross-subscription peering is possible *if* there are no address space conflicts. Since the image (not included here) presumably shows no address space conflicts, all VNets are valid peering candidates. **Note:** The discussion includes a comment mentioning that VNets cannot be peered if they have overlapping address spaces. This is a correct statement and a crucial consideration in real-world scenarios. However, based solely on the information provided in the question and the assumption that the image shows no overlapping address spaces, option E is the most accurate answer.
121
[View Question](https://www.examtopics.com/discussions/databricks/view/146212-exam-az-104-topic-5-question-133-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image843.png) You create a route table named RT1 in the East US Azure region. To which resources can you associate RT1? A. VNet1 only B. Subnet1 only C. VNet1 and NIC1 only D. Subnet1 and NIC1 only E. VNet1, Subnet1, and NIC1
B. Subnet1 only Explanation: In Azure, route tables are associated with subnets, not directly with virtual networks (VNets) or network interface cards (NICs). A route table associated with a subnet applies its routing rules to all NICs within that subnet. Therefore, only Subnet1 can have RT1 associated with it. Why other options are incorrect: * **A. VNet1 only:** Route tables are not associated with VNets directly. * **C. VNet1 and NIC1 only:** Incorrect because route tables are not associated with VNets or NICs directly. * **D. Subnet1 and NIC1 only:** Incorrect because you cannot associate a route table with a NIC directly; it's always at the subnet level. * **E. VNet1, Subnet1, and NIC1:** Incorrect for the reasons stated above. Note: The discussion shows a consensus that the correct answer is B. There is no significant disagreement on this point.
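The association is made on the subnet, as the answer states; a one-line sketch assuming a resource group named RG1:

```bash
# Route tables attach at the subnet level; all NICs in Subnet1 inherit RT1's routes.
az network vnet subnet update \
  --resource-group RG1 \
  --vnet-name VNet1 \
  --name Subnet1 \
  --route-table RT1
```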
122
**** [View Question](https://www.examtopics.com/discussions/databricks/view/146214-exam-az-104-topic-2-question-107-discussion/) DRAG DROP - You have a Microsoft Entra tenant. You need to ensure that when a new Microsoft 365 group is created, the group name is automatically formatted as follows: ![Image](https://img.examtopics.com/az-104/image851.png) *(Image shows: ``)* Which three actions should you perform in sequence in the Microsoft Entra admin center? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. ![Image](https://img.examtopics.com/az-104/image849.png) *(Image shows a list of actions to choose from. The exact actions are not visible in the provided text, only the suggested answer image below reveals them.)* **
** The correct sequence of actions is: 1. **Create a group naming policy:** This initiates the process of defining how group names will be formatted. 2. **Set Add prefix to Attribute:** This specifies that the prefix of the group name will be derived from a user attribute, not a hardcoded string. 3. **Set Select type to Department:** This selects the "Department" attribute as the source for the prefix, resulting in the `` format. **Explanation:** The goal is to automatically prepend the department to each new Microsoft 365 group name. A group naming policy allows for customized naming conventions. Setting the prefix to an attribute (rather than a static string) allows for dynamic naming based on user properties. Finally, selecting "Department" as the attribute ensures the department name is used as the prefix. **Why other options are incorrect:** While the provided discussion mentions options like "Set Add suffix to String," this is not necessary to achieve the specified `` format. The focus is on the prefix, not a suffix. There is some disagreement in the discussion on the exact wording of the third step, but the overall consensus points to selecting the "Department" attribute.
123
[View Question](https://www.examtopics.com/discussions/databricks/view/146215-exam-az-104-topic-5-question-139-discussion/) You have an Azure virtual network named VNet1 that contains the following settings: • IPv4 address space: 172.16.10.0/24 • Subnet name: Subnet1 • Subnet address range: 172.16.10.0/25 What is the maximum number of virtual machines that can connect to Subnet1? A. 24 B. 25 C. 123 D. 128 E. 251
C. 123 A /25 subnet uses 25 bits for the network portion, leaving 7 bits for hosts (32 − 25 = 7), which gives 2^7 = 128 addresses. Azure reserves five addresses in every subnet: the network address, the broadcast address, the default gateway, and two addresses for Azure DNS. Therefore 128 − 5 = 123 usable IP addresses, matching option C. Why other options are incorrect: * **A. 24:** Incorrect calculation of available IP addresses. * **B. 25:** Incorrect calculation of available IP addresses. * **D. 128:** Does not account for the five addresses Azure reserves in each subnet. * **E. 251:** Based on a misreading of the subnet mask; this is roughly the usable count for a /24, not a /25. Note: There is some minor disagreement in the discussion about the exact number of reserved IP addresses (some users claim 2 or 3), but Azure documentation states that 5 addresses are reserved per subnet, which is consistent with answer option C.
124
[View Question](https://www.examtopics.com/discussions/databricks/view/146241-exam-az-104-topic-4-question-112-discussion/) HOTSPOT You have an Azure subscription that uses Azure Container Instances. You have a computer that has Azure Command-Line Interface (CLI) and Docker installed. You create a container image named image1. You need to provision a new Azure container registry and add image1 to the registry. Which command should you run for each requirement? To answer, select the options in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image833.png) (Image content not provided, but the context suggests a multiple choice question related to Azure CLI commands).
The correct commands are `az acr create` and `docker push`. `az acr create` is used to create a new Azure Container Registry. `docker push` is used to push the locally created container image (`image1` in this case) to the newly created registry. The exact syntax for `docker push` would require the container registry name and the image name, but the core command is correct. The discussion shows a strong consensus on this answer. Multiple users independently suggest these commands as the correct solution. Other options are not provided in the original question. Therefore, no other options can be evaluated for correctness.
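A minimal sketch of the full workflow (the resource group and registry names are placeholders; the image must be tagged with the registry's login server before pushing):

```bash
# Create a resource group and a new container registry.
az group create --name rg1 --location eastus
az acr create --resource-group rg1 --name myregistry2024 --sku Basic

# Authenticate the local Docker client to the registry,
# then tag and push the locally built image.
az acr login --name myregistry2024
docker tag image1 myregistry2024.azurecr.io/image1:v1
docker push myregistry2024.azurecr.io/image1:v1
```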
125
[View Question](https://www.examtopics.com/discussions/databricks/view/146244-exam-az-104-topic-5-question-93-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://img.examtopics.com/az-104/image854.png) The subscription contains a storage account named contoso2024 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image855.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. ![Image](https://img.examtopics.com/az-104/image856.png)
YYN * **Statement 1 (VM1): Yes.** VM1 is in VNet1/Subnet1, which is explicitly allowed access to the storage account according to the firewall rules shown in image855.png. Since its public IP is also listed, it can access the storage account. * **Statement 2 (VM2): Yes.** Although VM2 is in a different subnet (Subnet2), its public IP address is listed in the storage account's firewall rules. This allows it to access the storage account. * **Statement 3 (VM3): No.** While VM3 shares a subnet (Subnet1) with VM1, and its public IP address is in the firewall rules, the question implies a *requirement* for access beyond merely permission. The phrasing suggests that the VM *must* be able to access it, implying a direct connection without relying solely on public IP. The discussion highlights disagreement on this point. There is no guarantee that VM3 will be able to access the storage account, as other factors (network configuration, etc.) might interfere, even if it is permitted. **Why other options are incorrect:** The discussion shows a variety of opinions (YYN, YNN), highlighting the ambiguity in the question's wording regarding statement 3. The answer provided leans toward the interpretation that explicitly states a requirement for access, which could include many conditions beyond just firewall rules and subnet.
126
[View Question](https://www.examtopics.com/discussions/databricks/view/146246-exam-az-104-topic-5-question-99-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image853.png) You create a public IP address named IP1. Which two resources can you associate to IP1? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. VM1 B. LB1 C. NIC1 D. VPN1 E. VNet1
The correct answer is **B and C (LB1 and NIC1)**. A public IP address in Azure can be associated with a Network Interface Card (NIC) to provide a VM with public internet access, and it can also be associated with a Load Balancer (LB) to provide external access to the VMs behind it. * **B. LB1 (Load Balancer):** Load balancers distribute incoming traffic across multiple VMs. To be accessible from the internet, the load balancer needs a public IP address. * **C. NIC1 (Network Interface Card):** A NIC is the network interface for a virtual machine. Associating a public IP address with a NIC allows the VM to directly communicate with the internet. * **A. VM1 (Virtual Machine):** A VM doesn't directly associate with a public IP; it's the NIC associated with the VM that gets the public IP. * **D. VPN1 (VPN Gateway):** A VPN gateway requires its *own* public IP address, separate from the IP1 created in this question. It doesn't share a public IP with other resources. * **E. VNet1 (Virtual Network):** A virtual network is a logical grouping of resources and does not directly get a public IP. Individual VMs or load balancers within the VNet can. The discussion shows some disagreement about whether VPN gateways can share a public IP, but the consensus and documented Azure best practice is that they have their own dedicated public IP addresses.
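A minimal sketch of both associations, assuming the resources live in a resource group named rg1 (note that a public IP can be attached to only one resource at a time):

```bash
# Option 1: attach IP1 to the NIC's IP configuration
# (ipconfig1 is the typical default name; adjust as needed).
az network nic ip-config update --resource-group rg1 \
  --nic-name NIC1 --name ipconfig1 --public-ip-address IP1

# Option 2: use IP1 as a load balancer frontend instead.
az network lb frontend-ip create --resource-group rg1 \
  --lb-name LB1 --name frontend1 --public-ip-address IP1
```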
127
[View Question](https://www.examtopics.com/discussions/databricks/view/146247-exam-az-104-topic-2-question-109-discussion/) HOTSPOT - You have an Azure subscription that contains the resources shown in the following table. ![Image](https://img.examtopics.com/az-104/image862.png) *(Image content not provided, but its presence is noted)* You plan to use an Azure key vault to provide a secret to app1. What should you create for app1 to access the key vault, and from which key vault can the secret be used? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image863.png) *(Image content not provided, but its presence is noted)*
The provided text does not give the content of the images showing the table of Azure resources and the answer area. Therefore, a definitive answer cannot be provided. The question requires selecting options from a multiple choice style answer area, based on the provided table of Azure resources. The correct answer would involve selecting the appropriate Key Vault and the correct type of access mechanism (likely a managed identity or service principal) for app1 to access the secret stored within that Key Vault. The discussion indicates that the "suggested answer" is incorrect, highlighting a potential ambiguity or error in the original question or its suggested solution. More information is needed to answer this question completely and correctly. **Why other options are incorrect:** It is impossible to determine which options are incorrect without access to the contents of the missing images which would display the options. This is a limitation of the provided context.
128
[View Question](https://www.examtopics.com/discussions/databricks/view/146248-exam-az-104-topic-5-question-100-discussion/) You have an Azure subscription that contains a storage account named storage1. You need to allow access to storage1 from selected networks and your home office. The solution must minimize administrative effort. What should you do first for storage1? A. Add a private endpoint. B. Modify the Public network access settings. C. Select Internet routing. D. Modify the Access Control (IAM) settings.
B. Modify the Public network access settings. **Explanation:** The question prioritizes minimizing administrative effort while allowing access from specific networks and a home office. Modifying the public network access settings directly controls which networks (and thus, IP addresses) can access the storage account. This is a simpler, quicker solution than setting up a private endpoint (option A), which involves more configuration. Option C (Internet routing) would open the storage account to the entire internet, negating the requirement for selective access. Option D (modifying IAM settings) manages user access, not network access; it doesn't address the need to restrict access to specific networks. **Why other options are incorrect:** * **A. Add a private endpoint:** While a private endpoint provides secure access, it's more complex to set up than adjusting public network access settings. It's not the most efficient solution when minimizing administrative effort is the priority. * **C. Select Internet routing:** This option makes the storage account publicly accessible, directly contradicting the requirement of allowing access only from selected networks. * **D. Modify the Access Control (IAM) settings:** IAM controls user-level access, not network-level access. It doesn't address the core issue of restricting access based on network location. **Note:** The discussion shows some disagreement on the best answer, with one user suggesting "Add a private endpoint" (A). However, based on the prioritization of minimizing administrative effort, modifying the public network access settings is the more efficient approach initially. A private endpoint could be considered later for enhanced security if needed.
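A minimal sketch of that first step and the follow-on network rules, assuming a resource group named rg1 and a sample home office IP (203.0.113.10 is a documentation-range placeholder):

```bash
# Keep public access on, but deny by default so only listed
# networks and addresses get through.
az storage account update --resource-group rg1 --name storage1 \
  --public-network-access Enabled --default-action Deny

# Allow a selected virtual network subnet
# (the subnet needs the Microsoft.Storage service endpoint enabled).
az storage account network-rule add --resource-group rg1 \
  --account-name storage1 --vnet-name VNet1 --subnet Subnet1

# Allow the home office's public IP address.
az storage account network-rule add --resource-group rg1 \
  --account-name storage1 --ip-address 203.0.113.10
```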
129
[View Question](https://www.examtopics.com/discussions/databricks/view/146249-exam-az-104-topic-5-question-129-discussion/) You have an Azure subscription. You create a routing table named RT1. You need to add a route to RT1 that specifies the next hop IP address. Which next hop type should you select? A. Internet B. Virtual network gateway C. Virtual network D. Virtual appliance
D. Virtual appliance Explanation: The correct answer is a virtual appliance because the question specifies the need to add a route with a *specific next hop IP address*. A virtual appliance represents a custom device with its own IP address, making it suitable for this scenario. The other options are incorrect because they don't directly support specifying an arbitrary next hop IP address. A virtual network gateway, virtual network, or the internet would use other routing mechanisms. Why other options are incorrect: * **A. Internet:** Routing to the internet doesn't involve specifying a single next-hop IP address; it uses gateway addresses and routing protocols. * **B. Virtual network gateway:** While a gateway has an IP, it's used for cross-premises connectivity or VNet-to-VNet connections, not arbitrary IP addresses. * **C. Virtual network:** This is a network segment, not a specific next hop device. Note: The discussion shows unanimous agreement on the answer being D, although some users found the question vaguely worded.
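A minimal sketch of such a route, assuming RT1 sits in a resource group named rg1 and the appliance listens at 10.0.2.4:

```bash
# Only the VirtualAppliance next hop type accepts an explicit
# next hop IP address.
az network route-table route create --resource-group rg1 \
  --route-table-name RT1 --name route1 \
  --address-prefix 10.1.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.2.4
```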
130
[View Question](https://www.examtopics.com/discussions/databricks/view/146258-exam-az-104-topic-4-question-81-discussion/) You have an Azure subscription that contains three virtual machines named VM1, VM2, and VM3. All the virtual machines are in an availability set named AVSet1. You need to scale up VM1 to a new virtual machine size, but the intended size is unavailable. What should you do first? A. Create a proximity placement group. B. Deallocate VM1. C. Convert AvSet1 into a managed availability set. D. Shut down VM3 and VM3.
The correct answer is **B. Deallocate VM1.** The reason is that if the desired VM size is unavailable, it's likely because the current hardware resources allocated to VM1 are already in use and don't support the requested size. Deallocating VM1 releases those resources, allowing Azure to place it on a host that has the necessary capacity for the new size. Only then can the resizing operation be attempted successfully. Why other options are incorrect: * **A. Create a proximity placement group:** Proximity placement groups improve network latency between VMs but don't address resource availability issues related to VM size. * **C. Convert AvSet1 into a managed availability set:** Converting to a managed availability set doesn't directly solve the problem of the target VM size being unavailable. It handles high availability but not resource allocation. * **D. Shut down VM3 and VM3:** This option is nonsensical (VM3 is listed twice) and doesn't directly address the issue of VM1's desired size being unavailable. Shutting down other VMs might *indirectly* free up resources, but deallocating VM1 is the more direct and correct first step. Note: While some discussion suggests stopping the VM might also reveal more sizes, the consensus and most direct solution indicated is deallocating VM1.
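A minimal sketch of the sequence, assuming a resource group named rg1 and a target size of Standard_D8s_v5 (with availability sets, Azure may require deallocating all VMs in the set before some sizes become available):

```bash
# Deallocating releases VM1's current hardware cluster, letting Azure
# re-place it on hardware that supports the new size.
az vm deallocate --resource-group rg1 --name VM1
az vm resize --resource-group rg1 --name VM1 --size Standard_D8s_v5
az vm start --resource-group rg1 --name VM1
```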
131
[View Question](https://www.examtopics.com/discussions/databricks/view/146259-exam-az-104-topic-4-question-85-discussion/) You have an Azure container registry named Registry1 that contains an image named image1. You receive an error message when you attempt to deploy a container instance by using image1. You need to be able to deploy a container instance by using image1. Solution: You set Admin user to Enable for Registry1. Does this meet the goal? A. Yes B. No
A. Yes Enabling the admin user for Registry1 provides the necessary credentials for authentication when pulling images. If the deployment failure was due to authentication issues (a common cause), enabling the admin user would resolve the problem, allowing successful deployment of the container instance using image1. **Why other options are incorrect:** B. No is incorrect because enabling the admin user *could* solve the problem if the root cause of the error is authentication failure. The discussion highlights a disagreement on whether authentication is *assumed* to be the problem. While it's not explicitly stated in the question, several users assume authentication is the issue and that enabling the admin user would resolve it. **Note:** The discussion reveals disagreement on whether enabling the admin user is definitively the correct solution. Some users argue that without knowing the specific error type, it's impossible to say for certain. The answer given is based on the most prevalent assumption within the discussion, which is that authentication is the probable cause of the deployment failure.
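A minimal sketch of that fix, assuming the container instance is recreated in a resource group named rg1 (the username and password come from the credential output):

```bash
# Enable the admin user and read its credentials.
az acr update --name Registry1 --admin-enabled true
az acr credential show --name Registry1

# Recreate the container instance, supplying the registry credentials.
az container create --resource-group rg1 --name aci1 \
  --image registry1.azurecr.io/image1:latest \
  --registry-username <admin-username> \
  --registry-password <admin-password>
```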
132
[View Question](https://www.examtopics.com/discussions/databricks/view/146262-exam-az-104-topic-4-question-89-discussion/) HOTSPOT - You have an Azure container registry named contoso2023 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image840.png) You need to enable contoso2023 to use a dedicated data endpoint. Which two settings should you configure for contoso2023? To answer, select the appropriate settings in the answer area. NOTE: Each correct answer is worth one point. ![Image](https://img.examtopics.com/az-104/image841.png)
The correct answer is **Properties** and **Networking**. To enable a dedicated data endpoint in Azure Container Registry, you must first upgrade the SKU to Premium in the **Properties** section, because dedicated data endpoints are a Premium-tier feature. Then, within the **Networking** section, you enable the dedicated data endpoint setting. The discussion shows a consensus on this solution, although there is some initial confusion and mention of other options like "Overview" instead of "Properties". The suggested answer image also supports this solution. Other Options: The discussion mentions "Overview" as an alternative to "Properties," which could be considered confusing given that the SKU change is done in the "Properties" section. "Connected registries" is mentioned but is irrelevant to enabling a dedicated data endpoint, and may be deprecated.
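The same two steps can be sketched in the CLI, using the registry name from the question:

```bash
# Dedicated data endpoints require the Premium SKU.
az acr update --name contoso2023 --sku Premium
az acr update --name contoso2023 --data-endpoint-enabled true
```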
133
[View Question](https://www.examtopics.com/discussions/databricks/view/146271-exam-az-104-topic-3-question-86-discussion/) You have an Azure Storage account named storage1. You need to enable a user named User1 to list and regenerate storage account keys for storage1. Solution: You assign the Storage Account Key Operator Service Role to User1. Does this meet the goal? A. Yes B. No
A. Yes The Storage Account Key Operator role explicitly allows users to list and regenerate storage account keys. This directly addresses the requirement of enabling User1 to perform these actions on storage1. Multiple users in the discussion confirmed this solution. Other options are incorrect because: * **B. No:** This is incorrect because the Storage Account Key Operator role is specifically designed for this purpose, as confirmed by the provided Microsoft documentation links within the discussion.
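A minimal sketch of the assignment, with placeholder object ID and subscription ID:

```bash
az role assignment create \
  --assignee "<user1-object-id>" \
  --role "Storage Account Key Operator Service Role" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/storage1"
```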
134
[View Question](https://www.examtopics.com/discussions/databricks/view/146282-exam-az-104-topic-4-question-109-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2022 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. Azure Custom Script Extension B. Deployment Center in Azure App Service C. Microsoft Entra Application Proxy D. the Publish-AzVMDscConfiguration cmdlet
A. Azure Custom Script Extension The Azure Custom Script Extension allows for post-deployment configuration tasks, including software installation. This directly addresses the requirement of installing NGINX on the VMs after deployment. Why other options are incorrect: * **B. Deployment Center in Azure App Service:** This is used for deploying web applications, not for managing the configuration of individual VMs within a virtual machine scale set. * **C. Microsoft Entra Application Proxy:** This is for providing secure access to on-premises applications, not for installing software on VMs. * **D. the Publish-AzVMDscConfiguration cmdlet:** While DSC (Desired State Configuration) can achieve the desired outcome, the question asks what *should* be used, and the Custom Script Extension is a more straightforward and commonly used solution for this task. The `Publish-AzVMDscConfiguration` cmdlet only publishes the DSC script; it doesn't execute it. Note: The discussion shows some disagreement on the best approach, with some users suggesting the Desired State Configuration extension as an alternative correct answer. However, based on the context of the question and the most common understanding, the Custom Script Extension is a more suitable and practical solution.
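A minimal sketch of attaching the extension to an existing Windows scale set via the CLI (the script URL and resource names are placeholders; the same extension can equally be declared as a resource inside the ARM template):

```bash
az vmss extension set --resource-group rg1 --vmss-name vmss1 \
  --name CustomScriptExtension --publisher Microsoft.Compute \
  --settings '{"fileUris":["https://example.com/install-nginx.ps1"],"commandToExecute":"powershell -ExecutionPolicy Unrestricted -File install-nginx.ps1"}'
```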
135
[View Question](https://www.examtopics.com/discussions/databricks/view/146292-exam-az-104-topic-5-question-150-discussion/) You have an Azure subscription that contains two peered virtual networks named VNet1 and VNet2. You have a Network Virtual Appliance (NVA) named NetVA1. You need to ensure that the traffic from VNet1 to VNet2 is inspected by using NetVA1. What should you use? A. a local network gateway B. a route table that has custom routes C. a service endpoint D. IP address reservations
B. a route table that has custom routes Explanation: To force all traffic from VNet1 destined for VNet2 to pass through NetVA1, you need to configure a route table in VNet1. This route table will contain a custom route that directs all traffic intended for VNet2's address space to the NetVA1's internal IP address. The traffic will be routed through the NVA for inspection before reaching VNet2. Why other options are incorrect: * **A. a local network gateway:** Local network gateways are used to connect on-premises networks to Azure virtual networks. They are not relevant for routing traffic between peered virtual networks within Azure. * **C. a service endpoint:** Service endpoints are used to allow virtual networks secure access to Azure services without exposing them to the public internet. They don't directly control routing between virtual networks. * **D. IP address reservations:** IP address reservations are used to statically assign IP addresses within a subnet. This doesn't affect routing decisions. Note: While the suggested answer is consistently B, the discussion mentions the need for a "default route to NVA," which is a nuance not fully captured in the question. A truly comprehensive solution might require a default route to the NVA if there isn't already a pre-existing route to VNet2. However, based solely on the provided text, option B is the most directly applicable answer.
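A minimal sketch under assumed address spaces (VNet2 = 10.2.0.0/16, NetVA1's internal IP = 10.1.1.4):

```bash
# Create a route table with a route that sends VNet2-bound
# traffic to the NVA.
az network route-table create --resource-group rg1 --name RT-Inspect
az network route-table route create --resource-group rg1 \
  --route-table-name RT-Inspect --name ToVNet2 \
  --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.1.1.4

# Associate the table with VNet1's subnet so the route takes effect.
az network vnet subnet update --resource-group rg1 \
  --vnet-name VNet1 --name Subnet1 --route-table RT-Inspect
```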
136
[View Question](https://www.examtopics.com/discussions/databricks/view/146299-exam-az-104-topic-2-question-102-discussion/) You have a Microsoft Entra tenant that is linked to the subscriptions shown in the following table. ![Image](https://img.examtopics.com/az-104/image818.png) *(Image shows Subscription1, Subscription2, Subscription3)* You have the resource groups shown in the following table. ![Image](https://img.examtopics.com/az-104/image819.png) *(Image shows RG1 in Subscription1, RG2 in Subscription2, RG3 in Subscription3)* You assign roles to users as shown in the following table. ![Image](https://img.examtopics.com/az-104/image820.png) *(Image shows User1 assigned Contributor role at the Tenant group level, User2 assigned Storage Account Contributor role to storage1, and User3 assigned User Access Administrator role at the Tenant group level.)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image821.png) *(Image contains three statements: 1. User 1 can resize VM1. 2. User 2 can create a new storage account in RG1. 3. User 3 can assign roles to User 1.)*
The correct answers are No, No, Yes. 1. **No:** User 1 is a Contributor at the Tenant group level. While a Contributor role *can* resize VMs, User 1's scope is limited to the Azure AD Tenant, not to specific resources within subscriptions. VM1 resides in RG3 within Subscription3, outside of User 1's scope. 2. **No:** User 2 is a Storage Account Contributor with access only to `storage1`. This permission only allows management of resources *within* `storage1`. Creating a new storage account in RG1 requires broader permissions than what User 2 possesses. 3. **Yes:** User 3 is a User Access Administrator at the Tenant group level. This grants them the ability to manage user access across all subscriptions and resource groups within the tenant, including assigning roles to User 1. **Why other options are incorrect:** The discussion shows some disagreement on the first statement; some users believe User 1 has the necessary permissions while others correctly point out the scope limitation prevents it. The answer provided here reflects the technically correct interpretation based on Azure role-based access control (RBAC) principles.
137
[View Question](https://www.examtopics.com/discussions/databricks/view/146300-exam-az-104-topic-2-question-105-discussion/) You have an Azure subscription that contains a storage account named storage1. You need to ensure that the access keys for storage1 rotate automatically. What should you configure? A. a backup vault B. redundancy for storage1 C. lifecycle management for storage1 D. an Azure key vault E. a Recovery Services vault
D. an Azure key vault Azure Key Vault is a service specifically designed for managing cryptographic keys, certificates, and other secrets. It provides features to automate the rotation of storage account keys, ensuring enhanced security. The other options are not directly related to automatic key rotation for storage accounts. Why other options are incorrect: * **A. a backup vault:** Backup vaults are used for backing up data, not for managing access keys. * **B. redundancy for storage1:** Redundancy ensures data availability but doesn't address key rotation. * **C. lifecycle management for storage1:** Lifecycle management deals with the lifecycle of storage objects (blobs, files, etc.), not access keys. * **E. a Recovery Services vault:** Recovery Services vaults are used for disaster recovery and backup, not key management. Note: The discussion shows unanimous agreement on the correct answer, D.
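A sketch using Key Vault managed storage account keys, a documented (though legacy) Key Vault feature for rotating storage keys automatically; the vault name, regeneration period, and resource ID are placeholders:

```bash
# Key Vault takes over key1 and regenerates it every 90 days.
az keyvault storage add --vault-name vault1 --name storage1 \
  --active-key-name key1 \
  --auto-regenerate-key \
  --regeneration-period P90D \
  --resource-id "/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/storage1"
```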
138
[View Question](https://www.examtopics.com/discussions/databricks/view/146301-exam-az-104-topic-2-question-106-discussion/) You have an Azure subscription that contains the Microsoft Entra identities shown in the following table. ![Image](https://img.examtopics.com/az-104/image835.png) You need to enable self-service password reset (SSPR). For which identities can you enable SSPR in the Azure portal? A. User1 only B. Group1 only C. User1 and Group1 only D. Group1 and Group2 only E. User1, Group1, and Group2
D. Group1 and Group2 only The discussion reveals conflicting opinions on whether SSPR can be enabled for individual users or only groups. The suggested answer and several upvoted comments support the idea that SSPR can only be applied to groups (either security groups or M365 groups), making option D the correct answer based on this interpretation. While some commenters claim SSPR can be applied to individual users, there's no definitive evidence or consensus provided within the discussion to support this. **Why other options are incorrect:** * **A. User1 only:** Incorrect because the prevailing opinion in the discussion suggests SSPR is not assignable to individual users. * **B. Group1 only:** Incorrect as, according to the dominant view, both Group1 and Group2 are eligible for SSPR configuration. * **C. User1 and Group1 only:** Incorrect because the discussion leans toward excluding individual users (User1) from SSPR configuration. * **E. User1, Group1, and Group2:** Incorrect due to the widely held belief within the discussion that individual users are ineligible for SSPR, leaving only the groups as candidates. **Note:** There is a clear disagreement within the discussion regarding whether SSPR can be applied to individual users. The answer provided reflects the majority viewpoint within the provided discussion, but the question's validity might be questionable due to this conflict.
139
[View Question](https://www.examtopics.com/discussions/databricks/view/146302-exam-az-104-topic-2-question-111-discussion/) You have an Azure subscription that contains a storage account named storage1. The storage1 account contains blob data. You need to assign a role to a user named User1 to ensure that the user can access the blob data in storage1. The role assignment must support conditions. Which two roles can you assign to User1? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. Owner B. Storage Account Contributor C. Storage Account Backup Contributor D. Storage Blob Data Contributor E. Storage Blob Data Owner F. Storage Blob Delegator
The correct answer is D and E. * **Storage Blob Data Contributor (D):** This role grants the user read, write, and delete access to blob data. This fulfills the requirement of accessing blob data. * **Storage Blob Data Owner (E):** This role provides full control over blob data, including permission management. Since the question states the role assignment *must* support conditions, this option is also valid as it allows for granular control through conditional access. Options A, B, C, and F are incorrect because: * **Owner (A):** While granting full access, it's overly permissive and doesn't support attribute-based access control (ABAC) conditions the way the blob data roles do. The question specifically asks for a role that *supports* conditions. * **Storage Account Contributor (B):** This role manages the storage account itself, not specifically the blob data within it. * **Storage Account Backup Contributor (C):** This role is focused on managing backups of the storage account, not accessing the blob data. * **Storage Blob Delegator (F):** This built-in role only permits requesting a user delegation key (used to sign SAS tokens); it does not grant access to the blob data itself. Note: While the discussion shows some initial disagreement on the correct answer, the consensus among the later comments converges on D and E as the correct solution.
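A sketch of assigning Storage Blob Data Contributor with an ABAC condition (the object ID, scope, and the example condition restricting reads to a container named images are all placeholders):

```bash
az role assignment create \
  --assignee "<user1-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Storage/storageAccounts/storage1" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'images'))" \
  --condition-version "2.0"
```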
140
[View Question](https://www.examtopics.com/discussions/databricks/view/146303-exam-az-104-topic-2-question-112-discussion/) You need to implement the planned change for Attribute1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image873.png) *(Image text: Box 1: The Attribute Assignment Administrator role can assign Attribute1 to a group. Box 2: The Attribute Definition Administrator role can assign Attribute1 to a user. Box 3: Attribute1 can be assigned to a Microsoft 365 group.)*
No, No, Yes **Explanation:** * **Box 1: No.** The discussion explicitly states that the Attribute Definition Administrator (Admin2) cannot assign Attribute1; only the Attribute Assignment Administrator can. * **Box 2: No.** The discussion indicates that the Attribute Definition Administrator role (Admin2) can *define* the attribute, but not assign it. The ability to assign Attribute1 requires the Attribute Assignment Administrator role. * **Box 3: Yes.** While the initial comments suggest assigning Attribute1 to a group isn't possible, a later comment clarifies that custom security attributes *can* be assigned to Microsoft Entra enterprise applications (service principals), which could encompass Microsoft 365 groups. This aligns with the Microsoft documentation cited in the discussion. **Why other options are incorrect:** The initial responses in the discussion reflect some initial confusion and uncertainty about the permissions needed to assign custom attributes. However, further comments and the cited Microsoft documentation clarify the correct permissions and capabilities. The final answer reflects this corrected understanding. Note that there's some disagreement reflected in the initial responses to this question before clarification was given.
141
[View Question](https://www.examtopics.com/discussions/databricks/view/146304-exam-az-104-topic-2-question-113-discussion/) You have a Microsoft Entra tenant configured as shown in the following exhibit. (Image containing tenant configuration details, text not visible). The tenant contains the identities shown in the following table. (Image containing a table of identities: User1, Group1, Group2, text not fully visible). You purchase a Microsoft Fabric license. To which identities can you assign the license? A. User1 only B. User1 and Group1 only C. User1 and Group2 only D. User1, Group1, and Group2
A. User1 only The correct answer is A because the discussion explicitly states that a free Microsoft Entra tenant only allows license assignment to individual users. Group-based licensing requires a paid Microsoft Entra ID P1 or higher license. Since no such license is mentioned, only User1 can be assigned the Microsoft Fabric license. Why other options are incorrect: * **B, C, and D:** These options incorrectly assume that group-based licensing is possible in a free Microsoft Entra tenant. The discussion clearly indicates that this is not the case. Note: The discussion shows some disagreement on the exact requirements, but the prevailing and tested solution points to option A as correct in a free Microsoft Entra tenant.
142
[View Question](https://www.examtopics.com/discussions/databricks/view/146306-exam-az-104-topic-3-question-93-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table.

| Storage Account | Account Type | Replication Type |
|---|---|---|
| storage1 | Standard | Locally Redundant Storage (LRS) |
| storage2 | Premium | Locally Redundant Storage (LRS) |
| storage3 | Standard | Read-access geo-redundant storage (RA-GRS) |

Which storage account can be converted to zone-redundant storage (ZRS) replication? A. storage1 only B. storage2 only C. storage3 only D. storage2 and storage3 E. storage1, storage2, and storage3
A. storage1 only LRS (Locally Redundant Storage) can be directly converted to ZRS (Zone-Redundant Storage). While RA-GRS (Read-access geo-redundant storage) can be converted to ZRS, it requires an intermediate step of converting to LRS first. Premium storage accounts also support ZRS, but the question is about conversion, not creating a new account. Therefore, only `storage1` which is LRS, can be directly converted to ZRS. **Why other options are incorrect:** * **B. storage2 only:** While premium storage *supports* ZRS, the question asks which account can be *converted*. There's no direct conversion path from LRS Premium to ZRS Premium indicated in the provided text. * **C. storage3 only:** RA-GRS cannot be directly converted to ZRS. It requires a multi-step process involving a conversion to LRS first. * **D. storage2 and storage3:** Incorrect because of the reasons stated above for options B and C. * **E. storage1, storage2, and storage3:** Incorrect because storage2 and storage3 cannot be directly converted to ZRS. **Note:** The discussion shows some disagreement on the nuances of converting Premium storage and the precise steps involved in migrating RA-GRS to ZRS. The answer provided here is based on the most straightforward interpretation of the question and the available information, acknowledging this underlying uncertainty.
143
[View Question](https://www.examtopics.com/discussions/databricks/view/146307-exam-az-104-topic-3-question-95-discussion/) You have an Azure subscription that contains a Standard SKU Azure container registry named ContReg1. You need to ensure that ContReg1 supports geo-replication. What should you do first for ContReg1? A. Enable Admin user. B. Add a scope map. C. Add an automation task. D. Create a cache rule. E. Upgrade the SKU.
The correct answer is **E. Upgrade the SKU**. Geo-replication is a feature only available in the Premium tier of Azure Container Registry. Therefore, to enable geo-replication for ContReg1 (currently a Standard SKU), the SKU must be upgraded to Premium. Why other options are incorrect: * **A. Enable Admin user:** Enabling an admin user has no bearing on geo-replication capabilities. * **B. Add a scope map:** Scope maps control access to resources within the registry, not its replication features. * **C. Add an automation task:** Automation tasks might be used *after* geo-replication is enabled, but they don't enable the feature itself. * **D. Create a cache rule:** Cache rules optimize download speeds but don't relate to geo-replication. Note: The discussion shows a unanimous agreement on answer E being correct.
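A minimal sketch of the upgrade and the subsequent geo-replica (the replica region is a placeholder):

```bash
# Geo-replication requires the Premium SKU.
az acr update --name ContReg1 --sku Premium
az acr replication create --registry ContReg1 --location westeurope
```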
144
[View Question](https://www.examtopics.com/discussions/databricks/view/146311-exam-az-104-topic-3-question-91-discussion/) You have an Azure subscription linked to a hybrid Microsoft Entra tenant. The tenant contains the users shown in the following table. ![Image](https://img.examtopics.com/az-104/image824.png) *(Image shows User1 and User2)* You create the Azure Files shares shown in the following table. ![Image](https://img.examtopics.com/az-104/image825.png) *(Image shows Share1, Share2, Share3 with associated storage accounts)* You configure identity-based access for contoso2024 as shown in the following exhibit. ![Image](https://img.examtopics.com/az-104/image826.png) *(Image shows identity-based access configuration for contoso2024)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image827.png) *(Image shows statements: 1. User1 can access the content in Share1. 2. User2 can access the content in Share2. 3. User2 can access the content in Share3.)* Suggested answer: ![Image](https://img.examtopics.com/az-104/image828.png) *(Image shows suggested answers: N, Y, N)*
The suggested answer, N Y N, is the most likely correct answer based on the provided information and discussion. * **Statement 1 (User1 can access Share1): No.** User1 is a cloud-only account that is not synced from on-premises AD, and the configuration indicates that cloud-only accounts are not supported for this access method. Therefore, User1 lacks the necessary permissions. * **Statement 2 (User2 can access Share2): Yes.** User2 is synced to Azure AD, and the storage account (storage2024) associated with Share2 is configured to use Azure AD for authentication. This allows User2 access. * **Statement 3 (User2 can access Share3): No.** Even though User2 is synced, Share3 is associated with storage2025, which (implicitly) doesn't have Azure AD configured as an authentication source. Therefore, User2 cannot access Share3. **Reasons why other options are incorrect:** The discussion shows some disagreement on the interpretation of "tenant" and whether cloud-only Azure AD users could access resources. However, the prevailing and most logically consistent interpretation supports the N Y N answer based on the provided screenshots outlining specific access configuration requirements. The ambiguity around "tenant" is acknowledged, but the configuration screenshots provide more definitive information on the access restrictions in place.
145
[View Question](https://www.examtopics.com/discussions/databricks/view/146313-exam-az-104-topic-3-question-96-discussion/) You implement the planned changes for cont2. What is the maximum number of additional access policies you can create for cont2? To answer, select the appropriate options in the answer area. ![Image](https://img.examtopics.com/az-104/image877.png)
The maximum number of additional stored access policies is 2. The maximum number of additional legal holds is 0 (already one exists). The question asks about *additional* access policies. The case study states that three stored access policies (Stored1, Stored2, and Stored3) and a legal hold are *already* created for `cont2`. Azure allows a maximum of five stored access policies per container. Since three already exist, only two more can be added. A container can only have one legal hold at a time; therefore, no additional legal holds can be created. **Why other options are incorrect:** The discussion reveals some uncertainty regarding whether the three stored access policies were already implemented or not. However, based on the problem statement explicitly mentioning the existence of Stored1, Stored2, and Stored3, the answer provided accounts for this. Any answer suggesting more than two additional stored access policies would be incorrect because of the 5-policy limit. Similarly, any option suggesting more than zero additional legal holds would be incorrect because only one legal hold is allowed per container.
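A minimal sketch of adding one of the two remaining stored access policies (the account name, policy name, and dates are placeholders):

```bash
az storage container policy create \
  --account-name contosostorage --container-name cont2 \
  --name Stored4 --permissions r \
  --start 2024-01-01T00:00:00Z --expiry 2024-12-31T00:00:00Z
```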
146
[View Question](https://www.examtopics.com/discussions/databricks/view/146314-exam-az-104-topic-6-question-60-discussion/) You have an Azure subscription that contains an Azure Backup vault named Backup1, a Recovery Services vault named Recovery1, and the resources shown in the following table.

| Resource Type | Resource Name |
|---|---|
| Azure Disk | Disk1 |
| Azure Virtual Machine | VM1 |
| Azure App Service | App1 |
| Azure Database for PostgreSQL | DB1 |

You plan to back up the resources. Which resource can be backed up to Backup1, and which resource can be backed up to Recovery1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Backup1: Disk1 Recovery1: VM1 Azure Backup vaults (like Backup1) are designed for newer workload types such as Azure Disks, blobs, Azure Database for PostgreSQL, and Kubernetes services. Recovery Services vaults (like Recovery1) are primarily used for backing up Azure VMs, file shares, and certain on-premises applications. Therefore, Disk1 (an Azure Disk) can be backed up to Backup1, and VM1 (an Azure Virtual Machine) can be backed up to Recovery1. **Why other options are incorrect:** The discussion shows some disagreement on whether Backup1 can back up VMs (some users incorrectly suggested VM1 could be backed up to Backup1). However, the consensus and most upvoted answers correctly identify Disk1 as backable to Backup1 and VM1 as backable to Recovery1. App1 is not supported by either vault type and would require the Azure App Service backup feature instead. DB1 can also be protected by a Backup vault, but the suggested answer pairs Backup1 with Disk1. **Note:** There is some disagreement in the provided discussion regarding which resources can be backed up to which vault. However, the most widely accepted and technically accurate answer aligns with the suggested answer.
147
[View Question](https://www.examtopics.com/discussions/databricks/view/146318-exam-az-104-topic-4-question-110-discussion/) You have an Azure subscription that contains a container group named Group1. Group1 contains two Azure container instances as shown in the following table. ![Image](https://img.examtopics.com/az-104/image832.png) You need to ensure that container2 can use CPU resources without negatively affecting container1. What should you do? A. Increase the resource limit of container1 to three CPUs. B. Increase the resource limit of container2 to six CPUs. C. Remove the resource limit for both containers. D. Decrease the resource limit of container2 to two CPUs.
C. Remove the resource limit for both containers. **Explanation:** The correct answer is C because, according to the provided discussion and Microsoft documentation, removing the resource limit ensures that each container instance's maximum resource usage is the same as its resource request. This prevents container2 from exceeding its requested resources and negatively impacting container1. Without a specified limit, the containers are restricted to using only the resources they have requested, thus ensuring fair resource allocation between them. **Why other options are incorrect:** * **A. Increase the resource limit of container1 to three CPUs:** This doesn't address the core problem; container2 might still try to use more resources than allocated. * **B. Increase the resource limit of container2 to six CPUs:** This could potentially negatively impact container1 if container2 tries to utilize all 6 CPUs. It doesn't solve the resource contention problem. * **D. Decrease the resource limit of container2 to two CPUs:** This unnecessarily restricts container2's resources, potentially hindering its performance, and doesn't solve the underlying problem of ensuring that container2 does not negatively affect container1. **Note:** The discussion shows some disagreement on the answer. One user initially thought the answer might be different, but then correctly concluded that option C is the solution based on understanding how Azure Container Instances handle resource limits. This reflects a common misunderstanding of how resource requests and limits interact within ACI.
148
[View Question](https://www.examtopics.com/discussions/databricks/view/146319-exam-az-104-topic-4-question-111-discussion/) You have an Azure subscription. You plan to deploy a container. You need to recommend which Azure services can scale the container automatically. What should you recommend? A. Azure Container Apps only B. Azure Container Instances only C. Azure Container Apps or Azure App Service only D. Azure Container Instances or Azure App Service only E. Azure Container Apps, Azure Container Instances, or Azure App Service
C. Azure Container Apps or Azure App Service only Azure Container Apps and Azure App Service both offer automatic scaling capabilities for containers. Azure Container Apps provide declarative scaling rules for horizontal scaling. Azure App Service supports automatic scaling for web apps, including containerized applications. Azure Container Instances, however, do *not* support automatic scaling; scaling must be manually managed. Therefore, options A, B, D, and E are incorrect because they either exclude one of the services that support autoscaling or incorrectly include Azure Container Instances. Note: The discussion shows overwhelming agreement on the correct answer, although there is a slight amount of back-and-forth in the reasoning.
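As a sketch of what declarative scaling looks like in Azure Container Apps (the app name, replica bounds, and concurrency threshold are placeholders):

```bash
# Scale app1 between 1 and 10 replicas based on concurrent HTTP requests.
az containerapp update --resource-group rg1 --name app1 \
  --min-replicas 1 --max-replicas 10 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```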
149
[View Question](https://www.examtopics.com/discussions/databricks/view/146320-exam-az-104-topic-2-question-110-discussion/) You have a Microsoft Entra tenant named contoso.com. You collaborate with an external partner named fabrikam.com. You plan to invite users in fabrikam.com to the contoso.com tenant. You need to ensure that invitations can be sent only to fabrikam.com users. What should you do in the Microsoft Entra admin center? A. From Cross-tenant access settings, configure the Tenant restrictions settings. B. From Cross-tenant access settings, configure the Microsoft cloud settings. C. From External collaboration settings, configure the Guest user access restrictions settings. D. From External collaboration settings, configure the Collaboration restrictions settings.
The correct answer is **D. From External collaboration settings, configure the Collaboration restrictions settings.** The Collaboration restrictions settings in Microsoft Entra allow administrators to specify which external domains can receive invitations. By selecting the "Allow invitations only to the specified domains" option and adding fabrikam.com to the allowlist, you ensure that only users from that specific domain can be invited as guests to the contoso.com tenant. Why other options are incorrect: * **A. From Cross-tenant access settings, configure the Tenant restrictions settings:** Tenant restrictions manage access to resources *between* tenants, not the invitation process for guest users. It does not control which external domains can be invited. * **B. From Cross-tenant access settings, configure the Microsoft cloud settings:** This setting is unrelated to managing guest user invitations from external domains. * **C. From External collaboration settings, configure the Guest user access restrictions settings:** While related to external users, this setting likely deals with access controls *after* the user is invited, not the invitation process itself. The question focuses on *who* can be invited, not what they can do once invited. Note: The discussion shows unanimous agreement on answer D.
150
[View Question](https://www.examtopics.com/discussions/databricks/view/146321-exam-az-104-topic-2-question-114-discussion/) You have an Azure subscription that contains a storage account named storage1. The storage account contains a blob that stores images. Client access to storage1 is granted by using a shared access signature (SAS). You need to ensure that users receive a warning message when they generate a SAS that exceeds a seven-day time period. What should you do for storage1? A. Enable a read-only lock. B. Configure an alert rule. C. Add a lifecycle management rule. D. Set Allow recommended upper limit for shared access signature (SAS) expiry interval to Enabled.
D. Set Allow recommended upper limit for shared access signature (SAS) expiry interval to Enabled. This option allows Azure Storage to issue a warning when a user attempts to create a SAS with an expiry time exceeding the recommended limit (in this case, seven days). The warning is presented to the user directly during SAS creation. The user is still able to create the SAS despite the warning. Why other options are incorrect: * **A. Enable a read-only lock:** This would prevent any changes to the storage account, including the creation of SAS tokens, not just warn about exceeding a time limit. * **B. Configure an alert rule:** While an alert rule *could* be configured to monitor SAS creation and trigger an alert based on expiry time, this is indirect and doesn't directly warn the *user* generating the SAS. The question explicitly asks for a warning *message to the user*. * **C. Add a lifecycle management rule:** Lifecycle management rules are for automatically deleting or archiving blobs based on age or other criteria. They don't provide warnings during SAS creation. Note: There is some discussion in the provided text regarding whether an alert rule (option B) might be a suitable solution. However, the suggested answer and accompanying explanation strongly favor option D as the most direct and user-focused method for achieving the desired outcome.
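A minimal sketch of enabling the policy, assuming a resource group named rg1 (the period format is days.hours:minutes:seconds):

```bash
# With this policy set, users creating a SAS valid for more than
# seven days receive a warning; the SAS can still be created.
az storage account update --resource-group rg1 --name storage1 \
  --sas-expiration-period "7.00:00:00"
```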
151
[View Question](https://www.examtopics.com/discussions/databricks/view/146324-exam-az-104-topic-4-question-108-discussion/) You have an Azure subscription that has the Azure container registries shown in the following table.

| Container Registry | SKU |
|---|---|
| ContReg1 | Premium |
| ContReg2 | Standard |
| ContReg3 | Basic |

You plan to use ACR Tasks and configure private endpoint connections. Which container registries support ACR Tasks and private endpoints? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Only ContReg1 (Premium) supports both ACR Tasks and private endpoints. **Explanation:** The discussion and linked Microsoft documentation confirm that only the Premium tier of Azure Container Registry supports private endpoints. While Standard and Basic tiers support ACR Tasks, they lack private endpoint functionality. Therefore, only ContReg1, being a Premium tier registry, meets both requirements. **Why other options are incorrect:** * **ContReg2 (Standard) and ContReg3 (Basic):** While both support ACR Tasks, neither supports private endpoints, according to the provided information and consensus in the discussion. **Note:** There is a slight disagreement in the discussion regarding whether Basic and Standard tiers support ACR Tasks. However, the overwhelming consensus and the suggested answer indicate that they do.
152
[View Question](https://www.examtopics.com/discussions/databricks/view/146325-exam-az-104-topic-4-question-121-discussion/) You need to meet the technical requirements for the KEK. Which PowerShell cmdlet and key should you use? The following tables show existing Azure resources:

**Storage Accounts:**

| Name | Location |
| -------- | -------- |
| storage1 | East US |
| storage2 | West US |

**Virtual Machines:**

| Name | Location | OS Disk Type |
| ------ | -------- | ------------- |
| VM1 | East US | Premium |
| VM2 | West US | Standard |
| VM3 | East US | Standard |
| VM4 | West US | Premium |

**Azure Container Registry Images:**

| Name | Location |
| ------- | -------- |
| Image1 | West US |
| Image2 | East US |

**Azure Resources:**

| Name | Type | Location |
| -------- | ---------------- | -------- |
| VNet1 | Virtual Network | East US |
| Zone1 | Availability Zone | East US |
| WebApp1 | Web App | East US |
| share1 | File Share | West US |
| cont1 | Container | East US |

**Azure Key Vault (Vault1) Certificates:**

| Name |
| -------- |
| Cert1 |

**Azure Key Vault (Vault1) Keys:**

| Name | Key Type |
| ------ | -------- |
| Key1 | RSA |
| Key2 | RSA |

**Microsoft Entra Users:**

| Name |
| ------- |
| User1 |

**Microsoft Entra Groups:**

| Name |
| -------- |
| Group1 |

A. Set-AzVMDiskEncryptionExtension and Key2 B. Set-AzDiskEncryptionKey and Key2 C. Set-AzDiskDiskEncryptionKey and Key1 D. Set-AzVMDiskEncryptionExtension and Key1
D. `Set-AzVMDiskEncryptionExtension` and `Key1` The question asks for the PowerShell cmdlet and key to meet the technical requirement of using Azure Disk Encryption and a key encryption key (KEK) for virtual machines. The correct cmdlet for encrypting Azure VMs is `Set-AzVMDiskEncryptionExtension`. While the provided text doesn't explicitly state which key to use, the general consensus in the discussion points to `Key1` as the correct choice. The discussion highlights that the disk-level cmdlets are not appropriate for encrypting entire VMs. **Why other options are incorrect:** * **A:** Uses the correct cmdlet but pairs it with `Key2`, which the discussion consensus rejects. * **B:** Uses `Set-AzDiskEncryptionKey`, which is not the cmdlet for enabling Azure Disk Encryption on a VM. * **C:** Pairs `Set-AzDiskDiskEncryptionKey`, which configures encryption settings on a managed disk object rather than enabling encryption on a running VM, with `Key1`. **Note:** The discussion shows some disagreement on the specific key to use, but the overwhelming consensus points towards option D as the correct answer based on the available information and the correct cmdlet for the task.
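The question names the PowerShell cmdlet, but the equivalent flow can be sketched with the CLI as well (the resource group name is a placeholder):

```bash
# Enable Azure Disk Encryption on VM1, wrapping the disk encryption
# secret with Key1 as the key encryption key (KEK).
az vm encryption enable --resource-group rg1 --name VM1 \
  --disk-encryption-keyvault Vault1 \
  --key-encryption-key Key1 \
  --key-encryption-keyvault Vault1
```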
153
[View Question](https://www.examtopics.com/discussions/databricks/view/146329-exam-az-104-topic-2-question-108-discussion/) You have a Microsoft Entra tenant that contains the users shown in the following table. ![Image](https://img.examtopics.com/az-104/image858.png) The tenant contains the groups shown in the following table. ![Image](https://img.examtopics.com/az-104/image859.png) Which users and groups can be deleted? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image860.png)
All users (User1, User2, User3, and User4) and all groups (Group A, Group B, and Group C) can be deleted. **Explanation:** The provided discussion and suggested answer indicate that all users can be deleted regardless of whether a license is assigned directly or inherited through group membership. There is no information provided to suggest any restrictions on deleting groups. **Why other options are incorrect:** There are no other options explicitly presented in the question. The question is a multiple-choice style hotspot question, implying that the answer involves selecting options from a given set which are not presented here. The provided answer is based on the suggested solution and the discussion which states this solution is correct.
154
[View Question](https://www.examtopics.com/discussions/databricks/view/146332-exam-az-104-topic-4-question-117-discussion/) You need to configure WebApp1 to meet the technical requirements. Which certificate can you use from Vault1? The following tables show the certificates in Vault1:

| Name | Key Size | Format |
| -------- | -------- | ---------- |
| Cert1 | 2048 | PKCS#12 |
| Cert2 | 4096 | PKCS#12 |
| Cert3 | 2048 | PEM |
| Cert4 | 4096 | PEM |
B. Cert1 or Cert2 only **Explanation:** The technical requirements specify using TLS for WebApp1. Azure Web Apps support certificates in PKCS#12 format (PFX). Cert1 and Cert2 are both in PKCS#12 format and have valid key sizes, making them suitable for use with WebApp1. Cert3 and Cert4 are in PEM format, which is not directly supported by Azure App Service; they would need to be converted to PFX. **Why other options are incorrect:** * **A. Cert1 only:** This is too restrictive; Cert2 is also a valid option. * **C. Cert1 or Cert3 only:** Cert3 is in PEM format, which is not directly usable by Azure App Service. * **D. Cert3 or Cert4 only:** Both Cert3 and Cert4 are in PEM format and therefore unsuitable. * **E. Cert1, Cert2, Cert3, or Cert4:** This includes the PEM certificates (Cert3 and Cert4), which are not directly compatible without conversion. **Note:** The discussion shows some disagreement on the correct answer. While some initially selected E, the majority consensus and the most technically accurate answer points to B, considering the limitations of Azure Web App's direct certificate support. The conversion requirement for PEM certificates makes options A, C, D, and E less practical.
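A minimal sketch of importing and binding one of the PKCS#12 certificates, assuming a resource group named rg1 and that WebApp1 has been granted access to Vault1:

```bash
# Import Cert1 from Vault1 into WebApp1's certificate store.
az webapp config ssl import --resource-group rg1 --name WebApp1 \
  --key-vault Vault1 --key-vault-certificate-name Cert1

# Bind the imported certificate to the app's custom hostname.
az webapp config ssl bind --resource-group rg1 --name WebApp1 \
  --certificate-thumbprint <thumbprint> --ssl-type SNI
```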
155
[View Question](https://www.examtopics.com/discussions/databricks/view/146337-exam-az-104-topic-4-question-113-discussion/) You have an Azure container registry named Registry1 that contains an image named image1. You receive an error message when you attempt to deploy a container instance by using image1. You need to be able to deploy a container instance by using image1. Solution: You assign the AcrPull role to ACR-Tasks-Network for Registry1. Does this meet the goal? A. Yes B. No
B. No The solution of assigning the `AcrPull` role to `ACR-Tasks-Network` for Registry1 does not meet the goal. The `AcrPull` role needs to be assigned to the *identity* that is deploying the container instance, not the network. `ACR-Tasks-Network` is a network resource and doesn't have the capability to pull images; it's the service principal or managed identity responsible for the deployment that needs the permission. Assigning the role to the wrong entity will result in the continued error. Why other options are incorrect: A. Yes - is incorrect because assigning the AcrPull role to ACR-Tasks-Network is not the correct way to solve the problem. The correct solution involves assigning the AcrPull role to the service principal or managed identity performing the deployment. Note: The discussion shows a general consensus that option B is correct.
156
[View Question](https://www.examtopics.com/discussions/databricks/view/146344-exam-az-104-topic-3-question-98-discussion/) **** You need to implement the planned changes for the storage account content. Which containers and file shares can you use to organize the content? A. share1 only B. cont1 and share1 only C. share1 and share2 only D. cont1, share1, and share2 only E. cont1, cont2, share1, and share2 **
D. cont1, share1, and share2 only **Explanation:** The question asks which containers and file shares can be used to organize content. The provided text states that ADatum plans to "Whenever possible, use directories to organize storage account content." `cont1` is a container in `storage1`, which has a hierarchical namespace (allowing directory-like organization). `share1` and `share2` are file shares, which can also be used for organization, even without a hierarchical namespace. Therefore, all three (`cont1`, `share1`, and `share2`) can be used for content organization. `cont2` is newly created, but the question refers to *existing* containers and shares. **Why other options are incorrect:** * **A. share1 only:** This is incorrect because it ignores the organizational capabilities of `cont1`. * **B. cont1 and share1 only:** This is incorrect because it ignores `share2`, which is also available for organizing content. * **C. share1 and share2 only:** This is incorrect because it ignores `cont1`, which is specifically designed for hierarchical organization. * **E. cont1, cont2, share1, and share2:** This is incorrect because `cont2` is a newly created container and the question specifically asks about *existing* containers and file shares that can be used to organize content. **Note:** There is disagreement in the discussion regarding the correct answer. Some users argue for option E, including the newly created container `cont2`, while others correctly identify option D as the answer focusing on existing resources. The provided explanation aligns with the majority opinion and the core functionality described in the case study.
157
[View Question](https://www.examtopics.com/discussions/databricks/view/146347-exam-az-104-topic-2-question-104-discussion/) You have a Microsoft Entra tenant. You plan to perform a bulk import of users. You need to ensure that imported user objects are added automatically as the members of a specific group based on each user's department. The solution must minimize administrative effort. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Create groups that use the Assigned membership type. B. Create an Azure Resource Manager (ARM) template. C. Create groups that use the Dynamic User membership type. D. Write a PowerShell script that parses an import file. E. Create an XML file that contains user information and the appropriate attributes. F. Create a CSV file that contains user information and the appropriate attributes.
C and F The correct answer is C and F. The question requires automatic addition of users to groups based on their department, minimizing administrative effort. * **C. Create groups that use the Dynamic User membership type:** This is the core of the solution. Dynamic groups automatically add and remove members based on rules defined by attributes like "department". This directly fulfills the requirement for automatic membership management and minimizes administrative overhead. * **F. Create a CSV file that contains user information and the appropriate attributes:** A CSV file is a standard and easily parsed format for bulk user imports. This provides the data necessary for the user import process and the dynamic group membership rules to function. Why other options are incorrect: * **A. Create groups that use the Assigned membership type:** Assigned membership requires manual addition of users to groups, contradicting the requirement to minimize administrative effort. * **B. Create an Azure Resource Manager (ARM) template:** While ARM templates are useful for infrastructure automation, they are not directly relevant to the user import and group membership assignment process described here. They might be useful in a broader automation strategy, but not for solving the specific problem at hand. * **D. Write a PowerShell script that parses an import file:** While a PowerShell script *could* be used to automate the process, it's not strictly necessary. Option F (using a CSV file) provides the data, and Option C (using dynamic groups) handles the automatic membership assignment; a script adds extra complexity. * **E. Create an XML file that contains user information and the appropriate attributes:** While XML can store the data, CSV is more commonly used and simpler for this purpose. The question doesn't state a need for the extra complexity and overhead of XML processing. There is no disagreement reflected in the provided discussion. The provided solution analysis clearly supports the selection of options C and F.
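As an illustration of option C, a dynamic group keyed to the department attribute might be created with Microsoft Graph PowerShell roughly as follows (the display name and rule value are hypothetical):

```powershell
# Create a security group whose membership is recalculated from each user's department.
Connect-MgGraph -Scopes "Group.ReadWrite.All"

New-MgGroup -DisplayName "Sales Users" `
    -MailEnabled:$false -MailNickname "sales-users" -SecurityEnabled `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"
```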
158
[View Question](https://www.examtopics.com/discussions/databricks/view/146349-exam-az-104-topic-6-question-61-discussion/) You need to configure Azure Backup to meet the technical requirements for `cont1` and `share1`. To what should you set the backup frequency for each resource? NOTE: Each correct selection is worth one point. ![Image](https://img.examtopics.com/az-104/image882.png)
* **cont1 (Azure Blob Storage container): Daily** Azure Backup for blob storage containers supports a daily backup frequency as the most frequent option. * **share1 (Azure File Share): Every 4 hours** Azure Backup for file shares allows for a maximum of six backups per day, resulting in a minimum interval of every four hours. The discussion shows some disagreement on the exact frequencies, with some users referencing Microsoft documentation supporting only daily and weekly for blob storage. However, the provided suggested answer and multiple comments in the discussion point towards the above frequencies as correct. The provided answer reflects the consensus within the discussion, acknowledging the minor discrepancy.
159
[View Question](https://www.examtopics.com/discussions/databricks/view/146350-exam-az-104-topic-3-question-97-discussion/) You need to configure encryption for the virtual machines. The solution must meet the technical requirements. Which virtual machines can you encrypt? The following tables show existing Azure resources:

**Storage Accounts:**

| Name     | Type         | Location |
| -------- | ------------ | -------- |
| storage1 | Standard_LRS | East US  |
| storage2 | Premium_LRS  | West US  |
| storage3 | Standard_GRS | North EU |

**Virtual Machines:**

| Name | OS      | Disk Type      | Location | Size           |
| ---- | ------- | -------------- | -------- | -------------- |
| VM1  | Linux   | Ephemeral OS   | West US  | Standard_D2_V2 |
| VM2  | Windows | Basic          | East US  | Standard_D4_V2 |
| VM3  | Windows | Premium_LRS    | East US  | Standard_D2_V2 |
| VM4  | Windows | M-Series       | North EU | Standard_E4_V3 |
| VM5  | Windows | Dynamic Volume | West US  | Standard_D2_V2 |
C. VM2 and VM3 Azure Disk Encryption has limitations. Based on the provided information, VM1 cannot be encrypted because it uses an ephemeral OS disk. VM4 and VM5 cannot be encrypted because VM4 uses an M-series disk with a Write Accelerator and VM5 uses a dynamic volume; neither of these configurations is supported by Azure Disk Encryption. Therefore, only VM2 and VM3, which have supported disk types (Basic and Premium_LRS respectively), can be encrypted. **Why other options are incorrect:** * **A. VM1 and VM3:** VM1 uses an unsupported disk type (Ephemeral OS). * **B. VM4 and VM5:** VM4 uses an unsupported disk type (M-Series with Write Accelerator) and VM5 uses an unsupported disk type (Dynamic Volume). * **D. VM2 and VM4:** VM4 uses an unsupported disk type (M-Series with Write Accelerator). **Note:** The discussion shows some disagreement about whether Basic disks can be encrypted. The Microsoft documentation referenced in the discussion indicates that encrypting basic tier VMs or VMs created through the classic VM creation method is not supported. However, the consensus among the users leans towards option C being correct, which considers that VM2 can be encrypted. Further clarification from official documentation may be needed to resolve this discrepancy completely.
160
[View Question](https://www.examtopics.com/discussions/databricks/view/146377-exam-az-104-topic-5-question-149-discussion/) You have an Azure subscription that contains two peered virtual networks named VNet1 and VNet2. VNet1 has a VPN gateway that uses static routing. The on-premises network has a VPN connection that uses the VPN gateway of VNet1. You need to configure access for users on the on-premises network to connect to a virtual machine on VNet2. The solution must minimize costs. Which type of connectivity should you use? A. Azure Firewall with a private IP address B. service chaining and user-defined routes (UDRs) C. Azure Application Gateway D. ExpressRoute circuits to VNet2
B. service chaining and user-defined routes (UDRs) This is the most cost-effective solution because it leverages existing VPN connectivity to VNet1 and the peering between VNet1 and VNet2. By using user-defined routes (UDRs), you can direct traffic from the on-premises network (connected via VNet1) to VNet2 through the peering connection, avoiding the need for additional costly services like Azure Firewall or ExpressRoute. Service chaining allows the traffic to flow through the VPN connection and then to the peered VNet. Why other options are incorrect: * **A. Azure Firewall with a private IP address:** While Azure Firewall can provide security, it adds unnecessary cost and complexity. The existing VPN connection and peering already provide a path for connectivity. * **C. Azure Application Gateway:** Azure Application Gateway is a load balancer and web application firewall, not suitable for this scenario which requires simple network connectivity between the on-premises network and a VM in a peered VNet. * **D. ExpressRoute circuits to VNet2:** ExpressRoute is a dedicated connection to Azure, significantly more expensive than using the existing VPN and VNet peering. This would be a redundant and costly solution. Note: The provided discussion only supports option B as the correct answer. There is no other perspective or conflicting opinions presented.
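As a hedged illustration of the UDR half of the solution, assuming VNet2 uses 10.2.0.0/16 and an NVA at 10.1.0.4 in VNet1 forwards the traffic (both values are hypothetical):

```powershell
# Define a route that sends VNet2-bound traffic to a next hop in VNet1, then
# create a route table carrying it (the table is associated with a subnet afterwards).
$route = New-AzRouteConfig -Name "ToVNet2" -AddressPrefix "10.2.0.0/16" `
    -NextHopType "VirtualAppliance" -NextHopIpAddress "10.1.0.4"

New-AzRouteTable -ResourceGroupName "rg1" -Location "eastus" `
    -Name "rt-to-vnet2" -Route $route
```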
161
[View Question](https://www.examtopics.com/discussions/databricks/view/146396-exam-az-104-topic-4-question-107-discussion/) You have an Azure subscription. You plan to create an Azure container registry named ContReg1. You need to ensure that you can push and pull signed images for ContReg1. What should you do for ContReg1? A. Enable encryption by using a customer-managed key. B. Create a connected registry. C. Add a token. D. Enable content trust.
The correct answer is **D. Enable content trust.** Enabling content trust in Azure Container Registry allows you to sign images, ensuring their integrity and authenticity when pushed and pulled. This directly addresses the requirement to push and pull *signed* images. Docker Content Trust (DCT) is used to achieve this. Why other options are incorrect: * **A. Enable encryption by using a customer-managed key:** Encryption protects the confidentiality of your container images but doesn't guarantee their authenticity or integrity, which is the core requirement. * **B. Create a connected registry:** Connected registries are about replication and improved availability but don't directly deal with image signing. * **C. Add a token:** Tokens handle authentication for access, not the signing and verification of image integrity. Note: The discussion highlights that Content Trust is only supported in the Azure Container Registry Premium Tier. This information is crucial for a complete understanding of the solution.
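A hedged sketch of enabling the feature and signing a push, using the Azure CLI and Docker from PowerShell (the image tag is hypothetical; the registry must be on the Premium tier):

```powershell
# Enable content trust on the registry (Premium tier only).
az acr config content-trust update --registry ContReg1 --status enabled

# Turn on Docker Content Trust for this session so pushed images are signed.
$env:DOCKER_CONTENT_TRUST = 1
docker push contreg1.azurecr.io/image1:latest
```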
162
[View Question](https://www.examtopics.com/discussions/databricks/view/146397-exam-az-104-topic-4-question-118-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the resource group blade, move VM1 to another subscription. Does this meet the goal? A. Yes B. No
B. No Moving VM1 to another subscription does not immediately move it to a different host. While it changes the resource location, it doesn't guarantee an immediate host change to avoid the scheduled maintenance. The suggested solutions in the discussion, such as redeploying the VM using the ARM template or using the "Redeploy + Reapply" option, are more appropriate for immediately moving the VM to a different host. Moving to a different subscription is a separate action and could involve additional downtime while the VM is provisioned in the new subscription. Why other options are incorrect: Option A is incorrect because moving a VM to a different subscription is not a solution for immediately relocating a VM to avoid maintenance. The users in the discussion clearly indicate that redeploying is the correct approach.
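For reference, the redeploy action the discussion recommends can also be triggered from PowerShell; a minimal sketch, assuming VM1 lives in a resource group named `RG1` (hypothetical):

```powershell
# Redeploy VM1 to a new Azure host; the VM restarts as part of the move.
Set-AzVM -ResourceGroupName "RG1" -Name "VM1" -Redeploy
```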
163
[View Question](https://www.examtopics.com/discussions/databricks/view/146478-exam-az-104-topic-4-question-116-discussion/) You have a Standard Azure App Service plan named Plan1. You need to ensure that Plan1 will scale automatically when the CPU usage of the web app exceeds 80 percent. What should you select for Plan1? A. Automatic in the Scale out method settings B. Rules Based in the Scale out method settings C. Premium P1 in the Scale up (App Service plan) settings D. Standard S1 in the Scale up (App Service plan) settings E. Manual in the Scale out method settings
The correct answer is B. Rules Based scaling in the Scale out method settings allows you to define rules based on metrics (like CPU usage) to automatically scale your App Service plan. When CPU usage exceeds 80%, the defined rule will trigger an autoscale action, adding resources (scaling out) to handle the increased load. Options A, C, D, and E are incorrect because: * **A. Automatic:** While "Automatic" scaling exists, it lacks the precision to specify the exact CPU threshold (80%). It might scale based on other factors or default thresholds. * **C & D. Premium P1 / Standard S1:** These options refer to scaling *up* the App Service plan, changing its size (e.g., from a smaller to a larger instance). The question requires scaling *out* (adding instances to handle more load), not scaling up a single instance. * **E. Manual:** Manual scaling requires intervention and doesn't automatically respond to CPU usage exceeding a threshold. Note: The discussion shows unanimous agreement on option B as the correct answer.
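A hedged sketch of such a rule with the Azure CLI (the resource group and autoscale setting names are hypothetical):

```powershell
# Create an autoscale setting for the plan, then add a CPU-based scale-out rule.
az monitor autoscale create --resource-group "rg1" `
    --resource "Plan1" --resource-type "Microsoft.Web/serverfarms" `
    --name "plan1-autoscale" --min-count 1 --max-count 3 --count 1

az monitor autoscale rule create --resource-group "rg1" `
    --autoscale-name "plan1-autoscale" `
    --condition "CpuPercentage > 80 avg 5m" --scale out 1
```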
164
[View Question](https://www.examtopics.com/discussions/databricks/view/146646-exam-az-104-topic-2-question-103-discussion/) Your on-premises network contains a VPN gateway. You have an Azure subscription that contains the resources shown in the following table. | Resource | Type | Location | |--------------|----------------------|-------------| | VM1 | Virtual Machine | West US | | Storage1 | Storage Account | West US | | VNET1 | Virtual Network | West US | | Subnet1 | Subnet | West US | | VPN Gateway1 | VPN Gateway | West US | You need to ensure that all the traffic from VM1 to storage1 travels across the Microsoft backbone network. What should you configure? A. a network security group (NSG) B. private endpoints C. Microsoft Entra Application Proxy D. Azure Virtual WAN
B. Private endpoints To ensure that all traffic from VM1 to Storage1 travels across the Microsoft backbone network, you should configure private endpoints. Private endpoints provide private connectivity to Azure services, such as Storage Accounts, without exposing the service to the public internet. Traffic between VM1 and Storage1 using a private endpoint will remain within the Microsoft network. Why other options are incorrect: * **A. a network security group (NSG):** NSGs control network traffic flow based on rules but don't guarantee that traffic will travel over the Microsoft backbone. The traffic could still potentially transit the public internet depending on other network configurations. * **C. Microsoft Entra Application Proxy:** This is used for providing secure access to on-premises applications, not for controlling traffic between Azure resources. * **D. Azure Virtual WAN:** This is a networking service that provides connectivity between on-premises and Azure resources, however it's not strictly required in this case, and a private endpoint is a more direct and appropriate solution for ensuring traffic stays on the Microsoft backbone network between two Azure resources in the same region. Note: The discussion shows unanimous agreement on the correct answer, B.
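A hedged sketch of creating such an endpoint with Azure PowerShell (the resource group and endpoint names are hypothetical):

```powershell
# Create a private endpoint for Storage1's blob service inside Subnet1 of VNET1.
$storage = Get-AzStorageAccount -ResourceGroupName "rg1" -Name "storage1"
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "rg1" -Name "VNET1"
$subnet  = $vnet.Subnets | Where-Object Name -eq "Subnet1"

$conn = New-AzPrivateLinkServiceConnection -Name "storage1-plsc" `
    -PrivateLinkServiceId $storage.Id -GroupId "blob"

New-AzPrivateEndpoint -ResourceGroupName "rg1" -Name "storage1-pe" `
    -Location "westus" -Subnet $subnet -PrivateLinkServiceConnection $conn
```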
165
[View Question](https://www.examtopics.com/discussions/databricks/view/146740-exam-az-104-topic-4-question-114-discussion/) You have an Azure container registry named Registry1 that contains an image named image1. You receive an error message when you attempt to deploy a container instance by using image1. You need to be able to deploy a container instance by using image1. Solution: You select Use dedicated data endpoint for Registry1. Does this meet the goal? A. Yes B. No
B. No Selecting "Use dedicated data endpoint" for Registry1 does not solve the problem of deploying a container instance using image1. The error likely stems from issues like incorrect credentials or firewall restrictions preventing access to the image in Registry1. The solution focuses on network configuration, while the root cause is likely authentication or authorization. The suggested solutions in the discussion, such as verifying credentials and enabling "Allow trusted services" or using a managed identity, directly address these potential root causes. Why other options are incorrect: A. Yes is incorrect. The discussion clearly indicates that using a dedicated data endpoint is not the solution to the problem of deploying the container image. The error message is likely caused by access issues, not network connectivity. Note: There is a consensus among the users in the discussion that option B is the correct answer. The provided "solution" of using a dedicated data endpoint is deemed insufficient to resolve the underlying issue.
166
[View Question](https://www.examtopics.com/discussions/databricks/view/146741-exam-az-104-topic-4-question-115-discussion/) You have an Azure container registry named Registry1 that contains an image named image1. You receive an error message when you attempt to deploy a container instance by using image1. You need to be able to deploy a container instance by using image1. Solution: You create a private endpoint connection for Registry1. Does this meet the goal? A. Yes B. No
B. No Creating a private endpoint connection for Registry1 alone will not necessarily resolve the deployment issue. The error message could stem from various problems unrelated to network connectivity, such as authentication failures, incorrect image names or tags, firewall restrictions, or DNS resolution problems. Simply creating a private endpoint addresses only network connectivity; it doesn't solve potential authentication or image-related issues. Therefore, while a private endpoint might be *part* of a solution, it's not a guaranteed fix. Why other options are incorrect: A. Yes - Incorrect. This option is incorrect because creating a private endpoint connection only addresses one potential cause of the deployment failure. The error could be due to many other factors, as described above. The question does not state this is the only problem; other problems could be occurring. Note: The discussion shows a consensus that option B is correct. However, there is no definitive explanation of the root cause(s) of the original error message, nor a comprehensive list of steps to fix it.
167
[View Question](https://www.examtopics.com/discussions/databricks/view/147399-exam-az-104-topic-4-question-119-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the VM1 Redeploy + reapply blade, you select Redeploy. Does this meet the goal? A. Yes B. No
B. No Explanation: While redeploying a VM moves it to a new host, it's not immediate. The process involves shutting down the VM, moving it, and then restarting it. This is not an immediate move as requested by the problem statement. The discussion highlights a disagreement: some users selected A, believing redeployment fulfilled the requirement, while others (including dilopezat) correctly identified that it was not an immediate solution. The delay inherent in the shutdown, move, and restart process renders "redeploy" unsuitable for an immediate solution. Why other options are incorrect: A. Yes is incorrect because redeploying a VM is not an immediate action. There's a period of downtime involved in the process. The question requires an immediate solution.
168
[View Question](https://www.examtopics.com/discussions/databricks/view/147400-exam-az-104-topic-4-question-120-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the VM1 Updates blade, select One-time update. Does this meet the goal? A. Yes B. No
B. No Selecting "One-time update" from the VM1 Updates blade does not immediately move the VM to a different host. A one-time update is for applying updates, not for immediate relocation due to maintenance. To immediately move the VM, redeployment is the correct solution, as suggested in the discussion. The provided solution in the question is therefore incorrect. WHY OTHER OPTIONS ARE INCORRECT: A. Yes is incorrect because selecting "One-time update" only applies updates and doesn't guarantee immediate relocation to a different host to avoid the maintenance. The discussion clearly points to redeployment as the appropriate action for immediate relocation.
169
[View Question](https://www.examtopics.com/discussions/databricks/view/25017-exam-az-104-topic-5-question-69-discussion/) You have an Azure subscription that contains a policy-based virtual network gateway named GW1 and a virtual network named VNet1. You need to ensure that you can configure a point-to-site connection from an on-premises computer to VNet1. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Add a service endpoint to VNet1 B. Reset GW1 C. Create a route-based virtual network gateway D. Add a connection to GW1 E. Delete GW1 F. Add a public IP address space to VNet1
The correct actions are E and C: Delete GW1 and then Create a route-based virtual network gateway. Point-to-site VPN connections in Azure are not supported by policy-based gateways. Therefore, the existing policy-based gateway (GW1) must be deleted (E). A new, route-based virtual network gateway must then be created (C) to enable the point-to-site connection. **Why other options are incorrect:** * **A. Add a service endpoint to VNet1:** Service endpoints are for securing access to Azure services, not for establishing VPN connections. * **B. Reset GW1:** Resetting the gateway does not change its type; it remains a policy-based gateway, which doesn't support point-to-site VPNs. * **D. Add a connection to GW1:** You cannot add a point-to-site connection to a policy-based gateway. * **F. Add a public IP address space to VNet1:** While a public IP address is necessary for the gateway, this is not the primary issue. The core problem is the gateway type itself. **Note:** While the suggested answer indicates "CE", the discussion shows a consensus that the correct order of operations is to first delete the existing policy-based gateway (E) and then create a route-based gateway (C).
170
[View Question](https://www.examtopics.com/discussions/databricks/view/25359-exam-az-104-topic-3-question-45-discussion/) You have an on-premises server that contains a folder named D:\Folder1. You need to copy the contents of D:\Folder1 to the public container in an Azure Storage account named contosodata. Which command should you run? A. https://contosodata.blob.core.windows.net/public B. azcopy sync D:\folder1 https://contosodata.blob.core.windows.net/public --snapshot C. azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive D. az storage blob copy start-batch D:\Folder1 https://contosodata.blob.core.windows.net/public
C. `azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive` The `azcopy copy` command with the `--recursive` flag is the correct choice for copying the contents of a local directory (D:\Folder1) to an Azure Blob storage container. The `--recursive` flag ensures that all files and subfolders within D:\Folder1 are copied. A is incorrect because it only provides the URL of the Azure Blob storage container; it's not a command to copy files. B is incorrect because `azcopy sync` synchronizes, meaning it only copies files if they don't already exist or are newer in the source. D is incorrect because `az storage blob copy start-batch` is used for copying multiple blobs, not an entire directory structure. It's also designed to work with blobs already existing in Azure storage.
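In practice the destination usually requires authorization, either a SAS token appended to the URL or a signed-in session; a minimal sketch:

```powershell
# Sign in to Azure Storage with Microsoft Entra ID, then copy the folder tree.
azcopy login
azcopy copy "D:\Folder1" "https://contosodata.blob.core.windows.net/public" --recursive
```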
171
[View Question](https://www.examtopics.com/discussions/databricks/view/25723-exam-az-104-topic-5-question-60-discussion/) You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: You assign a built-in policy definition to the subscription. Does this meet the goal? A. Yes B. No
B. No Explanation: The discussion indicates that there is no built-in policy definition that automatically blocks TCP port 8080 between virtual networks. Therefore, assigning a built-in policy definition would not meet the goal. A custom policy would be required. Several commenters explicitly state this. While one commenter suggests that built-in policies exist, this is contradicted by the overwhelming consensus in the discussion. Why other options are incorrect: A. Yes - This is incorrect because the discussion clearly establishes that a built-in policy to achieve this specific requirement does not exist. The suggestion to use a built-in policy is therefore flawed.
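As a heavily hedged sketch of the custom-policy approach, a definition could deny NSG rules that allow TCP port 8080 (the JSON below is illustrative only, not a tested rule):

```powershell
# Register a custom policy that denies NSG security rules allowing TCP port 8080.
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/networkSecurityGroups/securityRules" },
      { "field": "Microsoft.Network/networkSecurityGroups/securityRules/access", "equals": "Allow" },
      { "field": "Microsoft.Network/networkSecurityGroups/securityRules/protocol", "equals": "Tcp" },
      { "field": "Microsoft.Network/networkSecurityGroups/securityRules/destinationPortRange", "equals": "8080" }
    ]
  },
  "then": { "effect": "deny" }
}
'@

New-AzPolicyDefinition -Name "deny-tcp-8080" -Policy $rule
```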
172
[View Question](https://www.examtopics.com/discussions/databricks/view/25995-exam-az-104-topic-2-question-9-discussion/) You recently created a new Azure subscription that contains a user named Admin1. Admin1 attempts to deploy an Azure Marketplace resource by using an Azure Resource Manager template. Admin1 deploys the template by using Azure PowerShell and receives the following error message: "User failed validation to purchase resources. Error message: Legal terms have not been accepted for this item on this subscription. To accept legal terms, please go to the Azure portal (http://go.microsoft.com/fwlink/?LinkId=534873) and configure programmatic deployment for the Marketplace item or create it there for the first time." You need to ensure that Admin1 can deploy the Marketplace resource successfully. What should you do? A. From Azure PowerShell, run the Set-AzApiManagementSubscription cmdlet B. From the Azure portal, register the Microsoft.Marketplace resource provider C. From Azure PowerShell, run the Set-AzMarketplaceTerms cmdlet D. From the Azure portal, assign the Billing administrator role to Admin1
C. From Azure PowerShell, run the Set-AzMarketplaceTerms cmdlet. The error message explicitly states that "Legal terms have not been accepted for this item on this subscription". The `Set-AzMarketplaceTerms` cmdlet allows for the acceptance of these legal terms programmatically, directly addressing the root cause of the deployment failure. **Why other options are incorrect:** * **A. From Azure PowerShell, run the Set-AzApiManagementSubscription cmdlet:** This cmdlet manages API Management subscriptions, which is unrelated to accepting Marketplace terms. * **B. From the Azure portal, register the Microsoft.Marketplace resource provider:** While resource provider registration is sometimes necessary for deploying Azure resources, the error message clearly indicates the problem is the unaccepted legal terms, not a registration issue. * **D. From the Azure portal, assign the Billing administrator role to Admin1:** Assigning billing roles doesn't grant permission to accept legal agreements for Marketplace items; it deals with billing aspects, not legal compliance. There is a consensus among the discussion participants that option C is the correct answer.
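A minimal sketch of accepting the terms with the Az.MarketplaceOrdering cmdlets (the publisher, product, and plan names are hypothetical):

```powershell
# Retrieve the Marketplace terms for a specific offer and plan, then accept them.
Get-AzMarketplaceTerms -Publisher "contoso" -Product "contoso-app" -Name "standard" |
    Set-AzMarketplaceTerms -Accept
```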
173
[View Question](https://www.examtopics.com/discussions/databricks/view/26088-exam-az-104-topic-2-question-10-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains 5,000 user accounts. You create a new user account named AdminUser1. You need to assign the User administrator administrative role to AdminUser1. What should you do from the user account properties? A. From the Licenses blade, assign a new license B. From the Directory role blade, modify the directory role C. From the Groups blade, invite the user account to a new group
The correct answer is **B. From the Directory role blade, modify the directory role.** To assign the User administrator role to AdminUser1, you need to navigate to the Azure Active Directory section, then find the Roles and Administrators section, and assign the User administrator role to the specific user account. Several users in the discussion confirmed this process through different navigation paths within the Azure portal, which slightly vary based on the version or configuration. This directly addresses the requirement of assigning an administrative role. Why other options are incorrect: * **A. From the Licenses blade, assign a new license:** This blade is used for assigning service licenses (like Microsoft 365 E5), not administrative roles within Azure AD. Administrative roles are managed separately. * **C. From the Groups blade, invite the user account to a new group:** While you can assign roles to groups, simply adding a user to a group does not automatically grant them administrative privileges unless the group itself has been specifically assigned the appropriate role. This question specifies assigning the role directly to the user. Note: The discussion shows a general agreement on the correct answer, although the exact steps to reach the "Directory role blade" are described slightly differently by various users, suggesting variations in Azure portal navigation depending on the version or configuration.
174
[View Question](https://www.examtopics.com/discussions/databricks/view/26628-exam-az-104-topic-2-question-11-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.onmicrosoft.com that contains 100 user accounts. You purchase 10 Azure AD Premium P2 licenses for the tenant. You need to ensure that 10 users can use all the Azure AD Premium features. What should you do? A. From the Licenses blade of Azure AD, assign a license B. From the Groups blade of each user, invite the users to a group C. From the Azure AD domain, add an enterprise application D. From the Directory role blade of each user, modify the directory role
The correct answer is **A. From the Licenses blade of Azure AD, assign a license.** To enable 10 users to access Azure AD Premium P2 features, you must directly assign the licenses to their accounts from the Azure AD Licenses blade. This directly grants them the necessary permissions and access rights included in the Premium P2 license. Why other options are incorrect: * **B. From the Groups blade of each user, invite the users to a group:** While you *can* assign licenses to groups, this option is incorrect because simply adding users to a group does not automatically grant them licenses. The group would need to be a license-assignment group with the Premium P2 licenses already assigned to it. The question doesn't state this is the case. * **C. From the Azure AD domain, add an enterprise application:** Adding an enterprise application is unrelated to licensing. Enterprise applications are for integrating SaaS applications with Azure AD, not for assigning user licenses. * **D. From the Directory role blade of each user, modify the directory role:** Assigning directory roles (like Global Administrator) grants administrative privileges within Azure AD, not access to the features included in the Premium P2 license. Note: While there is a discussion about needing to add the user's location (sreekan's comment), this is not directly addressed in the original question and isn't a requirement to enable Premium P2 features. The core issue is assigning the license itself.
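For illustration, a direct assignment with Microsoft Graph PowerShell might look like the sketch below. The SKU part number `AAD_PREMIUM_P2` is the usual identifier but should be verified in the tenant, and the user must already have a usage location set:

```powershell
# Look up the Premium P2 SKU in the tenant, then assign it to a user.
Connect-MgGraph -Scopes "User.ReadWrite.All", "Organization.Read.All"

$sku = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq "AAD_PREMIUM_P2"
Set-MgUserLicense -UserId "user1@contoso.onmicrosoft.com" `
    -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()
```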
175
[View Question](https://www.examtopics.com/discussions/databricks/view/26670-exam-az-104-topic-5-question-98-discussion/) Your on-premises network contains an SMB share named Share1. You have an Azure subscription that contains the following resources: ✑ A web app named webapp1 ✑ A virtual network named VNET1 You need to ensure that webapp1 can connect to Share1. What should you deploy? A. an Azure Application Gateway B. an Azure Active Directory (Azure AD) Application Proxy C. an Azure Virtual Network Gateway
C. An Azure Virtual Network Gateway. An Azure Virtual Network Gateway creates a secure connection between your on-premises network and your Azure virtual network (VNET1). This allows webapp1, residing within VNET1, to access resources like Share1 on your on-premises network. The connection is typically established using a Site-to-Site VPN, which uses IPsec/IKE protocols to create a secure tunnel. A is incorrect: An Azure Application Gateway is a reverse proxy and load balancer for web traffic (HTTP, HTTPS, WebSockets), not SMB file sharing. B is incorrect: Azure AD Application Proxy allows secure remote access to on-premises web applications, again not SMB file shares. It focuses on web applications accessible through a browser, not direct file access. Note: The provided discussion agrees on the correct answer.
176
[View Question](https://www.examtopics.com/discussions/databricks/view/26765-exam-az-104-topic-6-question-20-discussion/) You have an Azure web app named webapp1. Users report that they often experience HTTP 500 errors when they connect to webapp1. You need to provide the developers of webapp1 with real-time access to the connection errors. The solution must provide all the connection error details. What should you do first? A. From webapp1, enable Web server logging B. From Azure Monitor, create a workbook C. From Azure Monitor, create a Service Health alert D. From webapp1, turn on Application Logging
A. From webapp1, enable Web server logging Explanation: HTTP 500 errors indicate server-side issues, not application-level problems. Web server logging captures raw HTTP request data, including details of the 500 errors. This provides the most direct and immediate access to the connection error details required by the developers. The logs can be accessed in real-time through the "Log stream" pane. Options B, C, and D are less suitable because: * **B. From Azure Monitor, create a workbook:** Workbooks are for creating customized dashboards, not for directly capturing real-time connection errors. * **C. From Azure Monitor, create a Service Health alert:** Service Health alerts monitor the overall health of the Azure service, not specific connection errors within a web app. * **D. From webapp1, turn on Application Logging:** Application logging captures errors *within* the application itself, not connection errors that occur before the request even reaches the application. Note: The provided discussion indicates a strong consensus on answer A as the correct approach to access real-time connection error details.
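A hedged sketch of enabling the logs and streaming them live with the Azure CLI (the resource group name is hypothetical):

```powershell
# Enable web server logging to the app's file system, then stream the log live.
az webapp log config --resource-group "rg1" --name "webapp1" --web-server-logging filesystem
az webapp log tail --resource-group "rg1" --name "webapp1"
```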
177
[View Question](https://www.examtopics.com/discussions/databricks/view/26982-exam-az-104-topic-4-question-34-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the Subscriptions blade, you select the subscription, and then click Programmatic deployment. Does this meet the goal? A. Yes B. No
B. No The suggested solution, "From the Subscriptions blade, you select the subscription, and then click Programmatic deployment," does not meet the goal of viewing the date and time when resources were created in RG1. The correct method, as confirmed by multiple users in the discussion, is to navigate to the RG1 blade and then click "Deployments". This displays a history of deployments for that resource group, including timestamps. **Why other options are incorrect:** Option A ("Yes") is incorrect because the "Programmatic deployment" option within the subscription blade does not provide the specific creation timestamps of resources within a resource group. It's related to deployment *processes*, not individual resource creation times. The discussion shows overwhelming agreement on the correct answer (B).
178
[View Question](https://www.examtopics.com/discussions/databricks/view/28102-exam-az-104-topic-2-question-12-discussion/) You have an Azure subscription named Subscription1 and an on-premises deployment of Microsoft System Center Service Manager. Subscription1 contains a virtual machine named VM1. You need to ensure that an alert is set in Service Manager when the amount of available memory on VM1 is below 10 percent. What should you do first? A. Create an automation runbook B. Deploy a function app C. Deploy the IT Service Management Connector (ITSM) D. Create a notification
C. Deploy the IT Service Management Connector (ITSM). The IT Service Management Connector (ITSMC) establishes a bridge between Azure and on-premises ITSM systems like System Center Service Manager. Before you can send alerts from Azure VM1's low memory situation to Service Manager, you must first establish this connection. Options A and B are possible solutions, but they require the pre-existing connection provided by the ITSM Connector. Option D is incorrect because creating a notification within Azure won't automatically transfer that notification to Service Manager without the connector. **Why other options are incorrect:** * **A. Create an automation runbook:** While you could create a runbook to monitor VM1's memory and trigger an alert, this is not the *first* step. The runbook needs a way to communicate with Service Manager, which is provided by the ITSM Connector. * **B. Deploy a function app:** Similar to A, a function app could monitor VM1 and send alerts, but it also requires a mechanism to send data to Service Manager. The ITSM Connector provides this mechanism. * **D. Create a notification:** This only creates an alert within Azure. It doesn't automatically forward the alert to Service Manager. The ITSM Connector is needed to achieve this integration. There is a general consensus among the discussion participants that C is the correct answer. However, some discussion points highlight that A and B are technically feasible, but only after the initial connection is established by deploying the ITSM connector.
179
[View Question](https://www.examtopics.com/discussions/databricks/view/28103-exam-az-104-topic-2-question-13-discussion/) You sign up for Azure Active Directory (Azure AD) Premium P2. You need to add a user named [email protected] as an administrator on all the computers that will be joined to the Azure AD domain. What should you configure in Azure AD? A. Device settings from the Devices blade B. Providers from the MFA Server blade C. User settings from the Users blade D. General settings from the Groups blade
A. Device settings from the Devices blade. To grant a user administrative privileges on all Azure AD-joined computers, you need to configure device settings. This allows for centralized management of device-level policies and permissions, including assigning administrative roles. Why other options are incorrect: * **B. Providers from the MFA Server blade:** This relates to multi-factor authentication configuration, not device administration. * **C. User settings from the Users blade:** This manages user account attributes like passwords and group memberships, but not device-specific permissions. * **D. General settings from the Groups blade:** Managing groups allows assigning users to roles and permissions, but not direct control over device-level administration. While users are often part of administrative groups, the actual management of *device* administration happens at the device level. Note: The provided discussion only indicates that option A is the correct answer without further explanation. There is no conflicting opinion presented.
180
[View Question](https://www.examtopics.com/discussions/databricks/view/28113-exam-az-104-topic-4-question-26-discussion/) You have a deployment template named Template1 that is used to deploy 10 Azure web apps. You need to identify what to deploy before you deploy Template1. The solution must minimize Azure costs. What should you identify? A. five Azure Application Gateways B. one App Service plan C. 10 App Service plans D. one Azure Traffic Manager E. one Azure Application Gateway
B. one App Service plan Explanation: To minimize Azure costs when deploying 10 Azure web apps, you should create only one App Service Plan. A single App Service Plan can host multiple web apps. Creating 10 separate App Service Plans would be significantly more expensive and unnecessary. The other options (Application Gateways and Traffic Manager) are load balancing and traffic management solutions, which are not required to deploy the web apps themselves and would add unnecessary costs if deployed without justification. Why other options are incorrect: * **A. five Azure Application Gateways:** Unnecessary and costly. Application Gateways are used for load balancing and are not required for the deployment of the web apps themselves. * **C. 10 App Service plans:** This is significantly more expensive than using a single App Service plan. * **D. one Azure Traffic Manager:** While useful for load balancing across regions, it's not necessary for simply deploying the 10 web apps. It adds unnecessary cost in this scenario. * **E. one Azure Application Gateway:** Similar to option A, this is unnecessary and increases costs. Note: The provided discussion shows a consensus on the correct answer. There is no visible disagreement among users.
181
[View Question](https://www.examtopics.com/discussions/databricks/view/28114-exam-az-104-topic-4-question-28-discussion/) You have an Azure subscription that contains a virtual machine named VM1. VM1 hosts a line-of-business application that is available 24 hours a day. VM1 has one network interface and one managed disk. VM1 uses the D4s v3 size. You plan to make the following changes to VM1: ✑ Change the size to D8s v3. ✑ Add a 500-GB managed disk. ✑ Add the Puppet Agent extension. ✑ Enable Desired State Configuration Management. Which change will cause downtime for VM1? A. Enable Desired State Configuration Management B. Add a 500-GB managed disk C. Change the size to D8s v3 D. Add the Puppet Agent extension
C. Change the size to D8s v3 Explanation: Resizing a virtual machine in Azure requires a restart (and a full deallocation if the new size is not available on the VM's current hardware cluster). Therefore, changing the size from D4s v3 to D8s v3 will cause downtime for VM1. Adding a disk, adding extensions, or enabling DSC can generally be done while the VM is running. Why other options are incorrect: * **A. Enable Desired State Configuration Management:** DSC can usually be enabled without stopping the VM. * **B. Add a 500-GB managed disk:** Attaching a new managed disk to a running VM generally doesn't require a shutdown. * **D. Add the Puppet Agent extension:** Adding extensions, like the Puppet Agent, typically doesn't require a VM restart. Note: The provided answer is based on the discussion and general Azure best practices. There might be specific configurations where some of the other options could cause brief downtime.
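For reference, the documented resize pattern in Azure PowerShell looks roughly like this (the resource group name is hypothetical):

```powershell
# Resize VM1; Azure restarts the VM to apply the new size.
$vm = Get-AzVM -ResourceGroupName "rg1" -Name "VM1"
$vm.HardwareProfile.VmSize = "Standard_D8s_v3"
Update-AzVM -ResourceGroupName "rg1" -VM $vm
```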
182
[View Question](https://www.examtopics.com/discussions/databricks/view/28116-exam-az-104-topic-5-question-55-discussion/) You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: From the Resource providers blade, you unregister the Microsoft.ClassicNetwork provider. Does this meet the goal? A. Yes B. No
B. No Unregistering the Microsoft.ClassicNetwork provider will not automatically block TCP port 8080 between virtual networks when new NSGs are created. This provider is related to classic deployments and doesn't directly control the creation or configuration of NSGs within resource groups. To achieve the goal of automatically blocking TCP port 8080, a custom Azure Policy definition is required. This policy can enforce the desired NSG rule during the NSG creation process. The discussion shows a consensus that unregistering the provider does not meet the goal and that a custom policy is necessary. This is supported by the provided references to Microsoft documentation on Azure Policy.
183
[View Question](https://www.examtopics.com/discussions/databricks/view/28207-exam-az-104-topic-4-question-31-discussion/) You need to deploy an Azure virtual machine scale set that contains five instances as quickly as possible. What should you do? A. Deploy five virtual machines. Modify the Availability Zones settings for each virtual machine. B. Deploy five virtual machines. Modify the Size setting for each virtual machine. C. Deploy one virtual machine scale set that is set to VM (virtual machines) orchestration mode. D. Deploy one virtual machine scale set that is set to ScaleSetVM orchestration mode.
D. Deploy one virtual machine scale set that is set to ScaleSetVM orchestration mode. **Explanation:** The goal is to deploy five VMs as quickly as possible. Using a Virtual Machine Scale Set (VMSS) with ScaleSetVM orchestration mode is the fastest approach. This mode allows Azure to automatically manage the creation, update, and deletion of the five instances based on a single configuration, eliminating the manual steps involved in deploying individual VMs (options A and B). Option C, using VM orchestration mode (now called flexible orchestration mode), requires manually adding VMs to the scale set, which is significantly slower. ScaleSetVM orchestration mode (now called uniform orchestration mode) provides the most efficient and rapid deployment. **Why other options are incorrect:** * **A:** Manually deploying five VMs and configuring Availability Zones individually is time-consuming and inefficient. * **B:** Similar to A, manually deploying and configuring VM sizes is slow and doesn't leverage the automation benefits of VMSS. * **C:** VM orchestration mode (flexible orchestration mode) requires manual addition of VMs to the scale set, negating the speed advantage of using a VMSS. **Note:** The discussion highlights that the terminology for orchestration modes has changed. While the question uses older terminology, the answer reflects the current naming convention (uniform orchestration mode for option D and flexible orchestration mode for option C).
184
[View Question](https://www.examtopics.com/discussions/databricks/view/28480-exam-az-104-topic-5-question-35-discussion/) You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: You create a resource lock, and then you assign the lock to the subscription. Does this meet the goal? A. Yes B. No
B. No Creating a resource lock and assigning it to the subscription will not automatically block TCP port 8080 between virtual networks when a new NSG is created. Resource locks control the ability to delete or modify resources, not their configuration. To achieve the goal of automatically blocking TCP port 8080, a custom Azure Policy definition should be created and applied to the subscription. This policy would enforce the required network security rule on all NSGs created within the subscription. When a new NSG is created, the policy will automatically apply the rule, blocking the specified port. Why other options are incorrect: A. Yes - Incorrect. A resource lock doesn't control the configuration of NSGs; it controls the ability to modify or delete them. It wouldn't enforce a specific network rule. The discussion shows unanimous agreement that option B is correct and that a resource lock is not the appropriate solution. A custom Azure policy is the correct approach.
185
[View Question](https://www.examtopics.com/discussions/databricks/view/28532-exam-az-104-topic-5-question-2-discussion/) Your company has three offices. The offices are located in Miami, Los Angeles, and New York. Each office contains a datacenter. You have an Azure subscription that contains resources in the East US and West US Azure regions. Each region contains a virtual network. The virtual networks are peered. You need to connect the datacenters to the subscription. The solution must minimize network latency between the datacenters. What should you create? A. three Azure Application Gateways and one On-premises data gateway B. three virtual hubs and one virtual WAN C. three virtual WANs and one virtual hub D. three On-premises data gateways and one Azure Application Gateway
The best answer is **B. three virtual hubs and one virtual WAN**. While there's disagreement in the discussion regarding the number of virtual hubs per region, the core concept remains that a Virtual WAN provides the best solution for connecting multiple on-premises locations (the three offices) to Azure resources across multiple regions (East US and West US) with minimal latency. A Virtual WAN allows you to connect multiple virtual hubs, and each hub can be deployed in a different region (or even the same region). Connecting each datacenter to its own virtual hub in the geographically closest Azure region will minimize latency. Then, connecting those Virtual Hubs under a single Virtual WAN allows for centralized management and routing. Why other options are incorrect: * **A. three Azure Application Gateways and one On-premises data gateway:** Application Gateways are for managing application traffic, not directly connecting on-premises networks to Azure. On-premises data gateways are for data integration, not wide-area networking. * **C. three virtual WANs and one virtual hub:** This is inefficient and complex. Having multiple WANs defeats the purpose of centralizing management and routing. A single WAN with multiple hubs is the better approach. * **D. three On-premises data gateways and one Azure Application Gateway:** Same as option A; these are not designed for connecting on-premises networks across Azure regions. **Note:** The discussion shows a conflicting opinion on whether multiple virtual hubs can exist within the same Azure region. The answer provided above assumes that multiple virtual hubs *can* exist in the same region, which aligns with more recent documentation. However, if the limitation of one hub per region were strictly enforced, the optimal solution might involve a more complex setup potentially involving VPN gateways or ExpressRoute in a less efficient manner.
186
[View Question](https://www.examtopics.com/discussions/databricks/view/28893-exam-az-104-topic-5-question-9-discussion/) You have an Azure subscription named Subscription1 that contains two Azure virtual networks named VNet1 and VNet2. VNet1 contains a VPN gateway named VPNGW1 that uses static routing. There is a site-to-site VPN connection between your on-premises network and VNet1. On a computer named Client1 that runs Windows 10, you configure a point-to-site VPN connection to VNet1. You configure virtual network peering between VNet1 and VNet2. You verify that you can connect to VNet2 from the on-premises network. Client1 is unable to connect to VNet2. You need to ensure that you can connect Client1 to VNet2. What should you do? A. Download and re-install the VPN client configuration package on Client1. B. Select Allow gateway transit on VNet1. C. Select Allow gateway transit on VNet2. D. Enable BGP on VPNGW1
A. Download and re-install the VPN client configuration package on Client1. The discussion strongly suggests that after a change to the network topology (adding the VNet peering between VNet1 and VNet2), the Point-to-Site (P2S) VPN client configuration on Client1 needs to be redownloaded and reinstalled to reflect the updated network configuration. The on-premises network can reach VNet2, indicating the peering is correctly configured. The issue lies with Client1's outdated VPN client configuration. **Why other options are incorrect:** * **B. Select Allow gateway transit on VNet1:** This option is relevant for allowing traffic to transit through VNet1 to other networks, but it doesn't directly address the client's inability to connect after a topological change. The on-premises connection is already working, so this isn't the primary problem. * **C. Select Allow gateway transit on VNet2:** Similar to option B, this wouldn't solve the problem of an outdated VPN client configuration on Client1. * **D. Enable BGP on VPNGW1:** BGP is typically used for dynamic routing in large-scale networks. This problem is focused on a client's inability to connect after a topology change, a task best handled by reinstalling the VPN client configuration. **Note:** While the discussion overwhelmingly supports option A, there might be other less likely scenarios that could cause this issue. The provided solution is based solely on the information and discussion presented.
187
[View Question](https://www.examtopics.com/discussions/databricks/view/29470-exam-az-104-topic-5-question-64-discussion/) You have two Azure virtual networks named VNet1 and VNet2. VNet1 contains an Azure virtual machine named VM1. VNet2 contains an Azure virtual machine named VM2. VM1 hosts a frontend application that connects to VM2 to retrieve data. Users report that the frontend application is slower than usual. You need to view the average round-trip time (RTT) of the packets from VM1 to VM2. Which Azure Network Watcher feature should you use? A. IP flow verify B. Connection troubleshoot C. Connection monitor D. NSG flow logs
C. Connection monitor **Explanation:** The Azure Network Watcher Connection Monitor feature is designed to proactively monitor the end-to-end network connectivity between two points (in this case, VM1 and VM2). It measures the round-trip time (RTT) of packets, providing insights into latency and network performance. The other options are not suitable for this specific task: * **A. IP flow verify:** This feature verifies whether network traffic can flow between two points, but it doesn't provide RTT information. * **B. Connection troubleshoot:** This feature helps diagnose connectivity issues but doesn't provide continuous RTT monitoring. * **D. NSG flow logs:** These logs record network traffic that passes through a Network Security Group (NSG), but they don't directly measure RTT. The overwhelming consensus in the provided discussion supports option C as the correct answer. While there is no explicit disagreement, the discussion shows a range of users confirming the answer based on their experience and research.
188
[View Question](https://www.examtopics.com/discussions/databricks/view/29474-exam-az-104-topic-5-question-8-discussion/) You have an Azure subscription that contains a virtual network named VNet1. VNet1 contains four subnets named Gateway, Perimeter, NVA, and Production. The NVA subnet contains two network virtual appliances (NVAs) that will perform network traffic inspection between the Perimeter subnet and the Production subnet. You need to implement an Azure load balancer for the NVAs. The solution must meet the following requirements: ✑ The NVAs must run in an active-active configuration that uses automatic failover. ✑ The load balancer must load balance traffic to two services on the Production subnet. The services have different IP addresses. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Deploy a basic load balancer B. Deploy a standard load balancer C. Add two load balancing rules that have HA Ports and Floating IP enabled D. Add two load balancing rules that have HA Ports enabled and Floating IP disabled E. Add a frontend IP configuration, a backend pool, and a health probe F. Add a frontend IP configuration, two backend pools, and a health probe
B, C, and F * **B. Deploy a standard load balancer:** A standard load balancer is necessary to support HA Ports and Floating IP, which are crucial for the active-active configuration with automatic failover. Basic load balancers lack these features. * **C. Add two load balancing rules that have HA Ports and Floating IP enabled:** HA Ports load balance all TCP and UDP flows across the NVAs, so a failed NVA is automatically taken out of rotation by the health probe. Floating IP (direct server return) preserves the frontend IP as the packet's destination, which the NVAs need in order to distinguish and forward traffic for the two services, and it allows the same backend pool to serve multiple rules. * **F. Add a frontend IP configuration, two backend pools, and a health probe:** Two backend pools are required because the load balancer needs to distribute traffic to two different services with separate IP addresses on the Production subnet. The frontend IP configuration provides the entry point, and a health probe monitors the availability of the NVAs so that failover is automatic. **Why other options are incorrect:** * **A. Deploy a basic load balancer:** Basic load balancers lack the HA Ports and Floating IP features required for this design, as explained above. * **D. Add two load balancing rules that have HA Ports enabled and Floating IP disabled:** With Floating IP disabled, the load balancer rewrites the destination IP to the NVA's own NIC address, so the NVAs can no longer tell which of the two services the traffic was destined for; the inspection scenario breaks. * **E. Add a frontend IP configuration, a backend pool, and a health probe:** This option is insufficient because it only uses one backend pool, whereas two services need to be load balanced. **Note:** The provided discussion shows consensus on the correct answer.
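As a rough illustration of actions B and C, the Azure CLI sketch below creates a Standard load balancer in the NVA subnet and adds one HA Ports rule with Floating IP enabled. All names are hypothetical, and the second frontend/rule for the second service would be created the same way.

```bash
# Standard SKU is required: Basic does not support HA Ports
az network lb create --resource-group RG1 --name lb-nva --sku Standard \
  --vnet-name VNet1 --subnet NVA \
  --frontend-ip-name fe-service1 --backend-pool-name nva-pool

# HA Ports = protocol All with frontend/backend port 0; Floating IP keeps the
# frontend IP as the packet's destination so the NVAs can route per service
az network lb rule create --resource-group RG1 --lb-name lb-nva --name rule-service1 \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fe-service1 --backend-pool-name nva-pool \
  --floating-ip true
```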
189
[View Question](https://www.examtopics.com/discussions/databricks/view/29475-exam-az-104-topic-10-question-1-discussion/) You need to implement a backup solution for App1 after the application is moved. What should you create first? A. a recovery plan B. an Azure Backup Server C. a backup policy D. a Recovery Services vault
D. a Recovery Services vault A Recovery Services vault is a logical container in Azure that stores backup data. Before you can back up any resources (like the VMs of App1), you must first create this container to store the backup data. Creating a recovery plan, configuring Azure Backup Server, or defining a backup policy all require a Recovery Services vault to exist already. Why other options are incorrect: * **A. a recovery plan:** A recovery plan is created *after* the backup infrastructure is in place (including the vault). It defines the steps to restore from a backup, not the place to store the backup itself. * **B. an Azure Backup Server:** An Azure Backup Server is an on-premises component used for backing up on-premises resources to Azure. The question implies the App1 VMs are already in Azure, so this is not the first step. * **C. a backup policy:** A backup policy defines the frequency and retention of backups. You need a place (the vault) to store those backups before defining a policy to manage them. Note: The provided discussion shows unanimous agreement on the correct answer.
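If you prefer the CLI, the vault itself is a one-line create; the resource group, name, and region below are placeholders.

```bash
# The vault must exist before any backup policy or backup job can reference it
az backup vault create --resource-group RG1 --name Vault1 --location westus
```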
190
[View Question](https://www.examtopics.com/discussions/databricks/view/29793-exam-az-104-topic-10-question-2-discussion/) You need to move the blueprint files to Azure. What should you do? A. Generate an access key. Map a drive, and then copy the files by using File Explorer. B. Use Azure Storage Explorer to copy the files. C. Use the Azure Import/Export service. D. Generate a shared access signature (SAS). Map a drive, and then copy the files by using File Explorer.
The correct answer is B: Use Azure Storage Explorer to copy the files. Azure Storage Explorer provides a user-friendly interface for interacting with Azure storage services, making it efficient to copy files to Azure Blob storage. This directly addresses the requirement to move blueprint files to Azure. Why other options are incorrect: * **A. Generate an access key. Map a drive, and then copy the files by using File Explorer:** While technically possible, this method is less efficient and more complex than using Azure Storage Explorer. It requires manual key management and potentially exposes the access key unnecessarily. * **C. Use the Azure Import/Export service:** The Azure Import/Export service is designed for transferring large amounts of data offline using physical disks. This is overkill for moving blueprint files, which are likely to be relatively small. * **D. Generate a shared access signature (SAS). Map a drive, and then copy the files by using File Explorer:** Similar to option A, this introduces unnecessary complexity. While a SAS provides secure access, using Azure Storage Explorer is simpler and more streamlined. Note: The provided discussion highlights several requirements beyond simply moving the files (archival storage tier, secured and temporary partner access). While option B is the best answer based on the primary question, a complete solution would need to consider these additional requirements.
191
[View Question](https://www.examtopics.com/discussions/databricks/view/33398-exam-az-104-topic-5-question-14-discussion/) You have an Azure web app named webapp1. You have a virtual network named VNET1 and an Azure virtual machine named VM1 that hosts a MySQL database. VM1 connects to VNET1. You need to ensure that webapp1 can access the data hosted on VM1. What should you do? A. Deploy an internal load balancer B. Peer VNET1 to another virtual network C. Connect webapp1 to VNET1 D. Deploy an Azure Application Gateway
C. Connect webapp1 to VNET1. To allow webapp1 to access the MySQL database hosted on VM1 within VNET1, the web app needs to be connected to the same virtual network. This allows direct communication between the web app and the VM, provided appropriate network security rules are in place. Why other options are incorrect: * **A. Deploy an internal load balancer:** Load balancers distribute traffic across multiple instances. This is unnecessary since there's only one MySQL database instance. * **B. Peer VNET1 to another virtual network:** Peering connects two virtual networks, but doesn't directly solve the problem of webapp1 accessing resources within VNET1. This would add complexity without addressing the core issue. * **D. Deploy an Azure Application Gateway:** Application gateways are used for advanced traffic management, load balancing, and security at the application layer. This is overkill for a simple database access scenario. Note: The discussion highlights some disagreement around whether a web app can directly connect to a VNET containing other resources. Some users argue that the VNET must be empty, while others state that it's possible to create a subnet within the existing VNET for the web app. The consensus, however, leans towards option C as the most effective solution, leveraging VNET integration features of Azure App Service.
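A hedged sketch of the VNet-integration step with the Azure CLI, assuming a dedicated (empty) subnet can be added to VNET1 for the web app; the subnet name and address range are hypothetical.

```bash
# Regional VNet integration requires a dedicated subnet for the web app
az network vnet subnet create --resource-group RG1 --vnet-name VNET1 \
  --name webapp-integration --address-prefixes 10.0.2.0/24

# Connect webapp1 to VNET1 so it can reach VM1's MySQL instance privately
az webapp vnet-integration add --resource-group RG1 --name webapp1 \
  --vnet VNET1 --subnet webapp-integration
```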
192
[View Question](https://www.examtopics.com/discussions/databricks/view/33474-exam-az-104-topic-4-question-9-discussion/) You plan to automate the deployment of a virtual machine scale set that uses the Windows Server 2016 Datacenter image. You need to ensure that when the scale set virtual machines are provisioned, they have web server components installed. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Upload a configuration script B. Create an automation account C. Create an Azure policy D. Modify the extensionProfile section of the Azure Resource Manager template E. Create a new virtual machine scale set in the Azure portal
A and D The correct answer is A and D because: * **A. Upload a configuration script:** This allows you to provide a script that installs the necessary web server components after the VM is created. This is a standard practice for post-deployment configuration. The script would contain the commands to install the web server software (e.g., IIS). * **D. Modify the extensionProfile section of the Azure Resource Manager template:** The `extensionProfile` section in an ARM template is where you define custom extensions for your VMs. A custom script extension (as described in the discussion) can be added here to execute the configuration script uploaded in option A. This ensures the script runs automatically during VM provisioning. Option B is incorrect because while Azure Automation can manage VMs, it's not directly involved in the initial provisioning of web server components onto the VM scale set. Option C is incorrect because Azure Policy focuses on governance and compliance, not the installation of software. Option E is incorrect because creating the scale set is a prerequisite, not a solution to installing the web server components. The question asks *how to ensure* the components are installed *when* the VMs are provisioned, not how to create the scale set itself.
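The combination of A and D corresponds to the Custom Script Extension in the scale set model. The CLI sketch below (resource names and the inline command are illustrative) has the same effect as declaring the extension in the `extensionProfile` section of the ARM template.

```bash
# Add the Windows Custom Script Extension to the scale set model; every newly
# provisioned instance runs the command and installs IIS
az vmss extension set --resource-group RG1 --vmss-name VMSS1 \
  --name CustomScriptExtension --publisher Microsoft.Compute \
  --settings '{"commandToExecute": "powershell.exe Install-WindowsFeature -Name Web-Server"}'
```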
193
[View Question](https://www.examtopics.com/discussions/databricks/view/33509-exam-az-104-topic-5-question-39-discussion/) You have the Azure virtual network named VNet1 that contains a subnet named Subnet1. Subnet1 contains three Azure virtual machines. Each virtual machine has a public IP address. The virtual machines host several applications that are accessible over port 443 to users on the Internet. Your on-premises network has a site-to-site VPN connection to VNet1. You discover that the virtual machines can be accessed by using the Remote Desktop Protocol (RDP) from the Internet and from the on-premises network. You need to prevent RDP access to the virtual machines from the Internet, unless the RDP connection is established from the on-premises network. The solution must ensure that all the applications can still be accessed by the Internet users. What should you do? A. Modify the address space of the local network gateway B. Create a deny rule in a network security group (NSG) that is linked to Subnet1 C. Remove the public IP addresses from the virtual machines D. Modify the address space of Subnet1
B. Create a deny rule in a network security group (NSG) that is linked to Subnet1. Explanation: The question requires blocking RDP access from the internet while maintaining access from the on-premises network and application access via port 443. An NSG allows granular control over inbound and outbound traffic to a subnet. Creating a deny rule specifically for RDP (port 3389) in the NSG associated with Subnet1 will prevent internet-based RDP connections while allowing RDP access from the on-premises network via the VPN and leaving port 443 open for application access. Why other options are incorrect: * **A. Modify the address space of the local network gateway:** Changing the address space of the local network gateway affects the VPN connection itself, not the access control to the VMs within the subnet. This is unrelated to the problem. * **C. Remove the public IP addresses from the virtual machines:** This would completely prevent internet access, including access to the applications on port 443, which is not allowed. * **D. Modify the address space of Subnet1:** Changing the subnet's address space will require significant reconfiguration and doesn't directly address the RDP access control requirement. Note: The provided solution is widely accepted. There's no visible disagreement in the discussion.
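A minimal sketch of such a rule with the Azure CLI; the NSG name and priority are placeholders. Traffic from the on-premises network arrives over the VPN with a private source address, so it does not match the `Internet` service tag and on-premises RDP keeps working.

```bash
# Deny RDP (TCP 3389) only when the source is the Internet service tag
az network nsg rule create --resource-group RG1 --nsg-name nsg-subnet1 \
  --name DenyRdpFromInternet --priority 200 \
  --direction Inbound --access Deny --protocol Tcp \
  --source-address-prefixes Internet --destination-port-ranges 3389
```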
194
[View Question](https://www.examtopics.com/discussions/databricks/view/33739-exam-az-104-topic-4-question-13-discussion/) You create an App Service plan named Plan1 and an Azure web app named webapp1. You discover that the option to create a staging slot is unavailable. You need to create a staging slot for Plan1. What should you do first? A. From Plan1, scale up the App Service plan B. From webapp1, modify the Application settings C. From webapp1, add a custom domain D. From Plan1, scale out the App Service plan
A. From Plan1, scale up the App Service plan. To enable staging slots in an Azure App Service, the app service plan must be in the Standard, Premium, or Isolated tier. Scaling up the App Service plan (A) changes its tier, allowing for features like staging slots. Scaling out (D) increases the number of instances but doesn't change the tier. Modifying application settings (B) or adding a custom domain (C) are unrelated to enabling staging slots. Why other options are incorrect: * **B. From webapp1, modify the Application settings:** Application settings do not affect the availability of staging slots; this is controlled at the App Service Plan level. * **C. From webapp1, add a custom domain:** Adding a custom domain is unrelated to enabling staging slots. * **D. From Plan1, scale out the App Service plan:** Scaling out only increases the number of instances; it does not change the pricing tier, which is the requirement for enabling staging slots. Note: The provided discussion shows agreement on the correct answer.
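In CLI terms (resource group name assumed), the fix is a tier change followed by slot creation:

```bash
# Scale Plan1 up to Standard (S1); staging slots need Standard or higher
az appservice plan update --resource-group RG1 --name Plan1 --sku S1

# The staging slot can now be created
az webapp deployment slot create --resource-group RG1 --name webapp1 --slot staging
```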
195
[View Question](https://www.examtopics.com/discussions/databricks/view/33741-exam-az-104-topic-4-question-14-discussion/) You plan to move a distributed on-premises app named App1 to an Azure subscription. After the planned move, App1 will be hosted on several Azure virtual machines. You need to ensure that App1 always runs on at least eight virtual machines during planned Azure maintenance. What should you create? A. one virtual machine scale set that has 10 virtual machines instances B. one Availability Set that has three fault domains and one update domain C. one Availability Set that has 10 update domains and one fault domain D. one virtual machine scale set that has 12 virtual machines instances
A. one virtual machine scale set that has 10 virtual machines instances A Virtual Machine Scale Set (VMSS) is the correct answer because it provides automatic scaling and high availability. During planned maintenance, a VMSS ensures that only a subset of the VMs are updated at any given time. The question specifies needing at least 8 VMs running during maintenance. A VMSS with 10 instances allows for some VMs to undergo updates while still leaving the required 8 online. The default update domain behavior in Azure for VMSS limits the simultaneous upgrade to at most 20% of the instances. Therefore, a 10-instance VMSS would have a maximum of 2 instances undergoing updates simultaneously, guaranteeing 8 instances remain operational. Why other options are incorrect: * **B and C:** Availability Sets provide redundancy but do not inherently manage rolling updates. They offer placement across fault and update domains, but it's manual management of updates, not automatic like a VMSS. This would not guarantee 8 VMs remain running during maintenance. * **D:** While a 12-instance VMSS would also be sufficient in terms of high availability (allowing for more VMs undergoing updates simultaneously without dipping below 8), option A is a more economical approach. Note: The discussion highlights a potential discrepancy related to fault domain distribution in VMSS across different Azure regions. This doesn't impact the core correctness of choosing a VMSS as the solution to maintain minimum uptime during planned maintenance.
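A sketch of the deployment with hypothetical names; note that the `Win2016Datacenter` image alias and credential flags may vary by CLI version.

```bash
# 10 instances: with at most 20% (2 instances) updated at a time during planned
# maintenance, at least 8 instances stay in service
az vmss create --resource-group RG1 --name vmss-app1 \
  --image Win2016Datacenter --instance-count 10 \
  --admin-username azureuser --admin-password '<password>'
```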
196
[View Question](https://www.examtopics.com/discussions/databricks/view/33742-exam-az-104-topic-4-question-16-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Overview blade, you move the virtual machine to a different subscription. Does this meet the goal? A. Yes B. No
B. No Moving VM1 to a different subscription does not move it to a different host or avoid the scheduled maintenance. Changing subscriptions only changes the billing and administrative boundary associated with the VM; it doesn't relocate the VM to different physical hardware. To avoid the maintenance, the VM would need to be redeployed; going forward, placing it in an Availability Set with multiple update domains would further mitigate the impact of planned maintenance. Other Options: A. Yes - Incorrect. Moving the VM to a different subscription does *not* solve the problem of imminent maintenance. The maintenance event will still impact the VM, regardless of the subscription change.
197
[View Question](https://www.examtopics.com/discussions/databricks/view/33744-exam-az-104-topic-5-question-33-discussion/) You have a computer named Computer1 that has a point-to-site VPN connection to an Azure virtual network named VNet1. The point-to-site connection uses a self-signed certificate. From Azure, you download and install the VPN client configuration package on a computer named Computer2. You need to ensure that you can establish a point-to-site VPN connection to VNet1 from Computer2. Solution: You modify the Azure Active Directory (Azure AD) authentication policies. Does this meet the goal? A. Yes B. No
B. No. Modifying Azure Active Directory (Azure AD) authentication policies will not enable a point-to-site VPN connection using a self-signed certificate. The point-to-site VPN connection uses a self-signed certificate, which is a client certificate. The correct solution is to export the client certificate from Computer1 and install it on Computer2. Azure AD authentication is not relevant to this type of VPN connection. Why other options are incorrect: A. Incorrect. Modifying Azure AD authentication policies doesn't address the issue of installing the necessary self-signed certificate on Computer2 to establish the VPN connection. The problem is certificate distribution, not authentication policy.
198
[View Question](https://www.examtopics.com/discussions/databricks/view/34969-exam-az-104-topic-4-question-17-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Redeploy blade, you click Redeploy. Does this meet the goal? A. Yes B. No
A. Yes Redeploying a VM in Azure moves it to a different host within the Azure infrastructure. This action immediately addresses the need to move VM1 to avoid maintenance impacts, fulfilling the stated goal. The redeployment process preserves the VM's configuration and associated resources. Why other options are incorrect: B. No: This is incorrect because redeploying the VM *does* achieve the goal of moving it to a different host immediately. Note: The provided discussion does not show any disagreement on the correctness of answer A.
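The same action is available from the CLI (resource group assumed):

```bash
# Redeploy provisions VM1 on a new host; the VM restarts but keeps its configuration and disks
az vm redeploy --resource-group RG1 --name VM1
```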
199
**** [View Question](https://www.examtopics.com/discussions/databricks/view/34970-exam-az-104-topic-6-question-10-discussion/) Your company has a main office in London that contains 100 client computers. Three years ago, you migrated to Azure Active Directory (Azure AD). The company's security policy states that all personal devices and corporate-owned devices must be registered or joined to Azure AD. A remote user named User1 is unable to join a personal device to Azure AD from a home network. You verify that User1 was able to join devices to Azure AD in the past. You need to ensure that User1 can join the device to Azure AD. What should you do? A. Assign the User administrator role to User1. B. From the Device settings blade, modify the Maximum number of devices per user setting. C. Create a point-to-site VPN from the home network of User1 to Azure. D. From the Device settings blade, modify the Users may join devices to Azure AD setting. **
** B. From the Device settings blade, modify the Maximum number of devices per user setting. **Explanation:** The most likely reason User1 can no longer join a device to Azure AD, despite having done so previously, is that they've reached the maximum number of devices allowed per user. The default is often 50. Increasing this limit in the Azure AD Device settings will resolve the issue. The fact that User1 *could* join devices in the past supports this as the most probable cause. **Why other options are incorrect:** * **A. Assign the User administrator role to User1:** This is unnecessary. User1 only needs to register their device; they don't need administrative privileges in Azure AD to do so. * **C. Create a point-to-site VPN from the home network of User1 to Azure:** A VPN is used for secure network connectivity. While a VPN might be necessary for secure access to corporate resources, it doesn't address the problem of User1 reaching the maximum number of registered devices in Azure AD. This option is suggested by a commenter as a possibility if User1 changed home networks, however the question states User1 was able to connect previously. * **D. From the Device settings blade, modify the Users may join devices to Azure AD setting:** This setting controls whether users *can* register devices at all. Since User1 could previously register devices, this setting is already enabled. **Note:** There is a discussion among users regarding the possibility that User1 may have changed home networks, which could require a VPN (option C). However, the problem description suggests a more likely scenario of reaching the device registration limit.
200
**** [View Question](https://www.examtopics.com/discussions/databricks/view/35085-exam-az-104-topic-2-question-18-discussion/) You have an Azure Directory (Azure AD) tenant named Adatum and an Azure Subscription named Subscription1. Adatum contains a group named Developers. Subscription1 contains a resource group named Dev. You need to provide the Developers group with the ability to create Azure logic apps in the Dev resource group. Solution: On Subscription1, you assign the DevTest Labs User role to the Developers group. Does this meet the goal? A. Yes B. No **
** B. No The DevTest Labs User role does not grant permissions to create or manage Azure Logic Apps. This role is specifically for managing virtual machines within an Azure DevTest Labs environment. Creating Logic Apps requires a role with permissions to the Logic App service, such as the Logic App Contributor role. The discussion shows overwhelming agreement on this point. **Why other options are incorrect:** A. Yes is incorrect because the DevTest Labs User role is not relevant to Logic App management. It only provides access to manage VMs within a DevTest Labs environment.
201
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38107-exam-az-104-topic-2-question-5-discussion/) You have the Azure management groups shown in the following table:

| Management Group ID | Name | Parent Management Group ID |
| --- | --- | --- |
| MG1 | Group1 | None |
| MG2 | Group2 | MG1 |

You add Azure subscriptions to the management groups as shown in the following table:

| Subscription ID | Name | Management Group ID |
| --- | --- | --- |
| Sub1 | Subscription1 | MG1 |
| Sub2 | Subscription2 | MG2 |

You create the Azure policies shown in the following table:

| Policy Name | Policy Assignment Scope | Not Allowed Resource Types |
| --- | --- | --- |
| Deny Virtual Networks in Group1 | MG1 | Virtual networks |
| Deny Virtual Networks in all Subscriptions | Subscription1 | Virtual networks |
| Deny Virtual Machines and Virtual Networks | MG2 | Virtual machines, Virtual networks |

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

1. You can create a virtual network in Subscription1.
2. You can create a virtual machine in Subscription2.
3. You can add Subscription1 to management group MG2.

**
** No, Yes, No 1. **No:** The policy "Deny Virtual Networks in Group1" applied to MG1 prevents the creation of virtual networks in Subscription1, which is under MG1. The additional policy applied directly to Subscription1 further enforces this restriction. 2. **Yes:** While the policy "Deny Virtual Machines and Virtual Networks" is applied to MG2 (which contains Subscription2), the discussion highlights that this policy only prevents *new* virtual networks. Existing virtual networks are unaffected by this policy; therefore, you can still create a virtual machine *if* an existing virtual network is already present in Subscription2. 3. **No:** A subscription can only belong to one management group. Subscription1 is already associated with MG1, and it cannot be added to MG2. While it's possible to *move* a subscription between management groups, simply adding it to another is not allowed. **Explanation of Disagreement:** The discussion shows some disagreement about the answers, particularly regarding whether the answers should be all "No." The provided answer reflects the most widely accepted interpretation based on the explanation of Azure policy inheritance and the limitations on subscription management group memberships. The disagreement mostly stems from varying understanding of how Azure policies interact, particularly in the case of virtual machines in Subscription2 and the possibility of creating a VM leveraging an existing VNet. The correct answer clarifies the existing VNet possibility which is not explicitly stated but is implied.
202
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38114-exam-az-104-topic-2-question-8-discussion/) You have an Azure subscription named AZPT1 that contains the resources shown in the following table:

| Resource Name | Resource Type | Resource Group | Location |
| --- | --- | --- | --- |
| VM1 | Microsoft.Compute/virtualMachines | RG1 | WestUS |
| storage1 | Microsoft.Storage/storageAccounts | RG1 | WestUS |
| VNET1 | Microsoft.Network/virtualNetworks | RG1 | WestUS |
| VM1Managed | Microsoft.Compute/virtualMachines | RG1 | WestUS |
| RVAULT1 | Microsoft.RecoveryServices/vaults | RG2 | WestUS |

You create a new Azure subscription named AZPT2. You need to identify which resources can be moved to AZPT2. Which resources should you identify? A. VM1, storage1, VNET1, and VM1Managed only B. VM1 and VM1Managed only C. VM1, storage1, VNET1, VM1Managed, and RVAULT1 D. RVAULT1 only **
** C. VM1, storage1, VNET1, VM1Managed, and RVAULT1 **Explanation:** Based on the provided text and the highly upvoted responses in the discussion, all listed resources (VM1, storage1, VNET1, VM1Managed, and RVAULT1) can be moved to a new Azure subscription (AZPT2). Moving a resource only changes its resource group or subscription, not its location. The discussion links to Microsoft documentation supporting this. **Why other options are incorrect:** * **A. VM1, storage1, VNET1, and VM1Managed only:** This option omits RVAULT1, which, according to the information provided and the suggested answer, is also movable. * **B. VM1 and VM1Managed only:** This option is too restrictive; it excludes other resources that are also movable. * **D. RVAULT1 only:** This option is too restrictive; it excludes other resources that are also movable. **Note:** While the suggested answer and the highly upvoted comments support option C, a comment by klexams mentions limitations on moving some resources (e.g., standard load balancers). This suggests there might be exceptions not covered in this specific scenario. The answer provided here reflects the consensus within the provided discussion.
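As a sketch of the mechanics (the subscription ID and grouping are placeholders), a cross-subscription move can be performed with `az resource move`; Azure validates that every dependent resource is included before it moves anything.

```bash
# Move everything in RG1 to the same-named resource group in subscription AZPT2
az resource move \
  --ids $(az resource list --resource-group RG1 --query "[].id" --output tsv) \
  --destination-group RG1 \
  --destination-subscription-id <AZPT2-subscription-id>
```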
203
[View Question](https://www.examtopics.com/discussions/databricks/view/38115-exam-az-104-topic-5-question-23-discussion/) Your company has an Azure subscription named Subscription1. The company also has two on-premises servers named Server1 and Server2 that run Windows Server 2016. Server1 is configured as a DNS server that has a primary DNS zone named adatum.com. Adatum.com contains 1,000 DNS records. You manage Server1 and Subscription1 from Server2. Server2 has the following tools installed: ✑ The DNS Manager console ✑ Azure PowerShell ✑ Azure CLI 2.0 You need to move the adatum.com zone to an Azure DNS zone in Subscription1. The solution must minimize administrative effort. What should you use? A. Azure CLI B. Azure PowerShell C. the Azure portal D. the DNS Manager console
A. Azure CLI Explanation: Based on the provided text, the Azure CLI is the correct answer because it's the only option that supports importing zone files into Azure DNS. The discussion explicitly states that "Azure DNS supports importing and exporting zone files by using the Azure command-line interface (CLI). Zone file import is not currently supported via Azure PowerShell or the Azure portal." The `PrivateDNSMigrationScript` is mentioned, but it's irrelevant to this problem as it relates to migrating *legacy Azure DNS private zones*, not on-premises zones. The question is about migrating from an on-premises Windows Server 2016 DNS server to Azure DNS. Why other options are incorrect: * **B. Azure PowerShell:** The discussion clearly states that Azure PowerShell does not currently support importing zone files. * **C. the Azure portal:** The discussion indicates that the Azure portal also lacks this functionality. * **D. the DNS Manager console:** This is a tool for managing on-premises DNS servers and is not relevant for migrating to Azure DNS. Note: There is some disagreement in the discussion regarding the best approach and whether the question accurately reflects the capabilities of different tools. However, the most upvoted and technically accurate response supports the selection of Azure CLI as the solution.
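The import itself is a single CLI command once the zone file has been exported from Server1 (the file name below is illustrative):

```bash
# Import the exported BIND-style zone file into an Azure DNS zone
az network dns zone import --resource-group RG1 \
  --name adatum.com --file-name adatum.com.txt
```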
204
[View Question](https://www.examtopics.com/discussions/databricks/view/38126-exam-az-104-topic-2-question-26-discussion/) You have an Azure subscription. Users access the resources in the subscription from either home or from customer sites. From home, users must establish a point-to-site VPN to access the Azure resources. The users on the customer sites access the Azure resources by using site-to-site VPNs. You have a line-of-business-app named App1 that runs on several Azure virtual machines. The virtual machines run Windows Server 2016. You need to ensure that the connections to App1 are spread across all the virtual machines. What are two possible Azure services that you can use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. an internal load balancer B. a public load balancer C. an Azure Content Delivery Network (CDN) D. Traffic Manager E. an Azure Application Gateway
The correct answers are A (an internal load balancer) and E (an Azure Application Gateway). * **Internal Load Balancer (A):** Since users connect via VPNs (both point-to-site and site-to-site), traffic is already within the Azure Virtual Network. An internal load balancer distributes incoming traffic within the VNet to the multiple VMs running App1, ensuring load balancing without exposing App1 to the public internet. * **Azure Application Gateway (E):** This is a robust solution for load balancing web traffic. It offers advanced features like SSL termination and web application firewall capabilities. It's suitable here because it can distribute traffic across the VMs running App1 within the private network accessed via the VPNs. * **Why other options are incorrect:** * **Public Load Balancer (B):** This is unnecessary and exposes App1 to the public internet, which is not required given the VPN connections. * **Azure CDN (C):** CDNs are for caching static content, not load balancing applications. * **Traffic Manager (D):** Traffic Manager directs traffic to different endpoints based on various factors, but it doesn't balance load across multiple VMs at the same endpoint. It's more for geographic distribution or failover. Note: The provided discussion shows consensus on the correct answers.
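For option A, a quick sketch: pointing `az network lb create` at a VNet subnet (and giving it no public IP) produces an internal frontend. All names here are hypothetical.

```bash
# Internal Standard load balancer: the frontend gets a private IP from the subnet
az network lb create --resource-group RG1 --name ilb-app1 --sku Standard \
  --vnet-name VNet1 --subnet app1-subnet \
  --frontend-ip-name fe-app1 --backend-pool-name app1-pool
```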
205
[View Question](https://www.examtopics.com/discussions/databricks/view/38127-exam-az-104-topic-3-question-27-discussion/) You have an app named App1 that runs on two Azure virtual machines named VM1 and VM2. You plan to implement an Azure Availability Set for App1. The solution must ensure that App1 is available during planned maintenance of the hardware hosting VM1 and VM2. What should you include in the Availability Set? A. one update domain B. two fault domains C. one fault domain D. two update domains
D. two update domains Azure Availability Sets distribute VMs across multiple update and fault domains. During planned maintenance, Azure updates only one update domain at a time. To ensure App1 remains available while one update domain is undergoing maintenance, VM1 and VM2 must reside in separate update domains. Therefore, the Availability Set must include at least two update domains. Placing them in different fault domains provides additional resilience against hardware failures but isn't explicitly required to meet the criteria of the question which focuses on *planned maintenance*. Why other options are incorrect: * **A. one update domain:** If both VMs are in the same update domain, planned maintenance will take down both VMs simultaneously, resulting in App1 downtime. * **B. two fault domains:** While fault domains offer redundancy against hardware failures, they don't address the planned maintenance scenario that requires the VMs to be in different update domains. * **C. one fault domain:** Similar to option A, placing both VMs in a single fault domain does not guarantee App1 availability during planned maintenance. Note: The provided solution is based on the discussion which describes the concepts of update and fault domains within Azure Availability Sets. No conflicting opinions were present in the provided discussion.
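A sketch of the corresponding availability set (the fault-domain count is included for completeness; the question only requires two update domains):

```bash
# Two update domains: planned maintenance updates one domain at a time,
# so one of the two VMs always stays online
az vm availability-set create --resource-group RG1 --name avset-app1 \
  --platform-update-domain-count 2 --platform-fault-domain-count 2
```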
206
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38132-exam-az-104-topic-2-question-6-discussion/) You have an Azure policy as shown in the following exhibit: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0004500001.png) What is the effect of the policy? A. You are prevented from creating Azure SQL servers anywhere in Subscription 1. B. You can create Azure SQL servers in ContosoRG1 only. C. You are prevented from creating Azure SQL Servers in ContosoRG1 only. D. You can create Azure SQL servers in any resource group within Subscription 1. **
** B. You can create Azure SQL servers in ContosoRG1 only. The Azure policy, as shown in the image, prevents the creation of Azure SQL servers in Subscription 1. However, it includes an exclusion for the resource group "ContosoRG1". This means the policy's restriction does *not* apply to ContosoRG1. Therefore, you can only create Azure SQL servers within ContosoRG1. **Why other options are incorrect:** * **A. You are prevented from creating Azure SQL servers anywhere in Subscription 1:** This is incorrect because the policy has an exclusion for ContosoRG1. * **C. You are prevented from creating Azure SQL Servers in ContosoRG1 only:** This is incorrect because the exclusion specifically allows creation of Azure SQL servers in ContosoRG1. * **D. You can create Azure SQL servers in any resource group within Subscription 1:** This is incorrect because the policy generally prevents creation except for the explicitly excluded ContosoRG1. **Note:** The discussion shows overwhelming agreement on option B as the correct answer.
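Such an assignment can be expressed with an exclusion (`--not-scopes`). In the sketch below, the definition ID and subscription ID are placeholders, and the parameter name assumes the built-in "Not allowed resource types" policy definition.

```bash
# Assign the deny policy at subscription scope, excluding ContosoRG1
az policy assignment create --name deny-sql-servers \
  --policy <not-allowed-resource-types-definition-id> \
  --scope "/subscriptions/<subscription-id>" \
  --not-scopes "/subscriptions/<subscription-id>/resourceGroups/ContosoRG1" \
  --params '{"listOfResourceTypesNotAllowed": {"value": ["Microsoft.Sql/servers"]}}'
```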
207
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38137-exam-az-104-topic-3-question-19-discussion/) You plan to use the Azure Import/Export service to copy files to a storage account. Which two files should you create before you prepare the drives for the import job? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. an XML manifest file B. a dataset CSV file C. a JSON configuration file D. a PowerShell PS1 file E. a driveset CSV file **
** B and E The correct answers are B (a dataset CSV file) and E (a driveset CSV file). The Azure Import/Export service uses these two CSV files to define the data to be imported and the physical drives containing that data, respectively. The `dataset.csv` file specifies the source files and their destination in the Azure storage account. The `driveset.csv` file maps the physical drives to the data described in the `dataset.csv` file. **Why other options are incorrect:** * **A. an XML manifest file:** While Azure services often utilize XML manifests, the Azure Import/Export service uses CSV files for this purpose. * **C. a JSON configuration file:** JSON is a common configuration format, but it is not used by the Azure Import/Export service for defining the import data and drives. * **D. a PowerShell PS1 file:** PowerShell scripts can *interact* with the Azure Import/Export service, but they are not files *created* as part of preparing the import job itself. **Note:** The discussion shows unanimous agreement on the correct answer.
208
[View Question](https://www.examtopics.com/discussions/databricks/view/38162-exam-az-104-topic-3-question-7-discussion/) You have an Azure subscription that contains the resources shown in the following table.

| Resource Group | Resource Type | Resource Name | Status |
| --- | --- | --- | --- |
| RG1 | Microsoft.Compute/virtualMachines | VM1 | Running |
| RG2 | Microsoft.Network/virtualNetworks | VNET1 | |

The status of VM1 is Running. You assign an Azure policy as shown in the exhibit. (The exhibit shows a policy that denies actions on `Microsoft.ClassicNetwork/virtualNetworks`, `Microsoft.Network/virtualNetworks`, and `Microsoft.Compute/virtualMachines` resources within Resource Group RG2.) You assign the policy by using the following parameters: Microsoft.ClassicNetwork/virtualNetworks, Microsoft.Network/virtualNetworks, and Microsoft.Compute/virtualMachines. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

1. Moving VM1 to RG2 will succeed.
2. The policy will prevent the creation of new virtual machines in RG2.
3. Modifying the settings of VM1 will succeed.
N Y N The correct answer is N Y N. The discussion reveals some disagreement about the precise effect of the policy, but the most widely supported and tested answer aligns with this option. * **Statement 1 (N):** Moving VM1 to RG2 will fail. The policy denies actions on `Microsoft.Compute/virtualMachines` within RG2, and moving a VM into RG2 counts as creating a resource of that type in the target resource group, which the policy prohibits. * **Statement 2 (Y):** The policy will prevent the creation of new virtual machines in RG2. The policy explicitly denies actions on `Microsoft.Compute/virtualMachines` resources in RG2; creating a new VM falls under this scope. * **Statement 3 (N):** Modifying the settings of VM1 will fail. Users who tested this scenario in a lab report that configuration changes to VM1 (e.g., scaling, updating settings) were blocked once the policy assignment was in place. The policy's effect on *existing* resources is a point of contention in the discussion (some suggest it only affects *new* resources), but the majority of the testing supports the conclusion that modifications are also blocked. **Why other options are incorrect:** The discussion shows several users initially providing incorrect answers (NNN), reflecting some confusion about how Azure policies affect existing vs. new resources and actions like moving resources. The suggested answer of Y N N is also incorrect based on the majority of the testing data and reasoning provided in the discussion. The prevailing view, backed by lab testing, supports N Y N as the correct answer.
209
[View Question](https://www.examtopics.com/discussions/databricks/view/38169-exam-az-104-topic-5-question-41-discussion/) You have an Azure subscription named Subscription1 that contains an Azure virtual network named VNet1. VNet1 connects to your on-premises network by using Azure ExpressRoute. You plan to prepare the environment for automatic failover in case of ExpressRoute failure. You need to connect VNet1 to the on-premises network by using a site-to-site VPN. The solution must minimize cost. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Create a connection B. Create a local site VPN gateway C. Create a VPN gateway that uses the VpnGw1 SKU D. Create a gateway subnet E. Create a VPN gateway that uses the Basic SKU
A, B, and C. To establish a site-to-site VPN connection for failover, you need: A. **Create a connection:** This establishes the link between your on-premises VPN gateway and the Azure VPN gateway. B. **Create a local site VPN gateway:** This is the gateway on your on-premises network that connects to the Azure VPN gateway. C. **Create a VPN gateway that uses the VpnGw1 SKU:** A VPN gateway is required on the Azure side. The VpnGw1 SKU is necessary because the Basic SKU cannot coexist with ExpressRoute. Using VpnGw1 ensures compatibility. D and E are incorrect: D. **Create a gateway subnet:** This is likely already created because VNet1 is already connected via ExpressRoute. Creating another would be redundant and increase costs unnecessarily. E. **Create a VPN gateway that uses the Basic SKU:** This is explicitly stated as incompatible with ExpressRoute in the discussion. Note: There is some disagreement in the discussion about whether BCD could also be a correct answer. However, based on the explanation provided by the highly voted responses and the understanding that a gateway subnet likely already exists due to the existing ExpressRoute connection, ABC is the most accurate and cost-effective solution.
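A sketch of the three actions with the Azure CLI; the public IP, address prefixes, and pre-shared key are placeholders, and the gateway subnet is assumed to exist already because of the ExpressRoute gateway.

```bash
# C: the Azure-side VPN gateway; VpnGw1 (not Basic) can coexist with ExpressRoute
# (assumes a public IP named vpngw1-pip was created beforehand)
az network vnet-gateway create --resource-group RG1 --name vpngw1 \
  --vnet VNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
  --public-ip-address vpngw1-pip

# B: the local network gateway representing the on-premises VPN device
az network local-gateway create --resource-group RG1 --name onprem-gw \
  --gateway-ip-address <on-prem-public-ip> --local-address-prefixes 192.168.0.0/16

# A: the site-to-site connection between the two gateways
az network vpn-connection create --resource-group RG1 --name s2s-failover \
  --vnet-gateway1 vpngw1 --local-gateway2 onprem-gw --shared-key '<pre-shared-key>'
```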
210
[View Question](https://www.examtopics.com/discussions/databricks/view/38184-exam-az-104-topic-3-question-26-discussion/) You have an Azure subscription that contains an Azure Storage account. You plan to create an Azure container instance named container1 that will use a Docker image named Image1. Image1 contains a Microsoft SQL Server instance that requires persistent storage. You need to configure a storage service for Container1. What should you use? A. Azure Files B. Azure Blob storage C. Azure Queue storage D. Azure Table storage
A. Azure Files Azure Files is the appropriate choice for persistent storage required by a SQL Server instance running within an Azure Container Instance (ACI). Azure Files provides file shares that can be mounted as persistent volumes within the container, ensuring data persistence even if the container is restarted or replaced. The discussion strongly supports this, with multiple users citing the need for persistent storage and highlighting Azure Files as the solution. One user even provided a link (though later deemed outdated, the principle remains) to Microsoft documentation supporting this. Why other options are incorrect: * **B. Azure Blob storage:** Blob storage is designed for unstructured data storage, not for the persistent file system needed by a SQL Server instance. While AKS (Azure Kubernetes Service) might support blobs for persistent storage in certain configurations, this question explicitly refers to ACI. * **C. Azure Queue storage:** Queue storage is a message queuing service; it's not suitable for persistent file storage. * **D. Azure Table storage:** Table storage is a NoSQL database; it's not designed for file system persistence. **Note:** The discussion includes some debate regarding the most up-to-date and relevant documentation. While some links provided are outdated, the overall consensus points to Azure Files as the correct answer for this specific scenario of persistent storage within an ACI for a SQL Server instance.
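A sketch of the mount, assuming a share named `sqldata` already exists in storage1; the image name, account key, and mount path are placeholders.

```bash
# Mount the Azure Files share into container1 so SQL Server data survives restarts
az container create --resource-group RG1 --name container1 --image <Image1> \
  --azure-file-volume-account-name storage1 \
  --azure-file-volume-account-key '<storage-account-key>' \
  --azure-file-volume-share-name sqldata \
  --azure-file-volume-mount-path /var/opt/mssql
```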
211
[View Question](https://www.examtopics.com/discussions/databricks/view/38185-exam-az-104-topic-2-question-17-discussion/) You have an Azure Active Directory (Azure AD) tenant named contosocloud.onmicrosoft.com. Your company has a public DNS zone for contoso.com. You add contoso.com as a custom domain name to Azure AD. You need to ensure that Azure can verify the domain name. Which type of DNS record should you create? A. MX B. NSEC C. PTR D. RRSIG
The correct answer is **A. MX**, although there is some disagreement in the discussion. While a TXT record is generally recommended and often used for domain verification in Azure AD, the discussion indicates that an MX record can also be used for this purpose. Azure provides the necessary value to add to either a TXT or MX record during the verification process. The MX record is used to specify mail exchange servers, and while updating it can disrupt email service, it is a valid option for verification *temporarily*. After verification, the record can be removed or updated to its correct value. Why other options are incorrect: * **B. NSEC (Next Secure):** NSEC records are used for DNSSEC (DNS Security Extensions) to provide information about which resource records are *not* present in the zone. They are not directly related to domain verification. * **C. PTR (Pointer):** PTR records map an IP address to a hostname, which is not relevant to verifying domain ownership. * **D. RRSIG (Resource Record Signature):** RRSIG records are part of DNSSEC and are used to digitally sign DNS records. They are unrelated to domain verification. Note: The discussion shows a debate between using TXT and MX records for Azure AD domain verification. While the consensus is that TXT is generally preferred, this question's answer reflects that MX is a valid (though less common and potentially disruptive) option according to the provided discussion.
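For comparison, if the verification record were being added to a zone hosted in Azure DNS, the more common TXT variant would look like the sketch below; `MS=msXXXXXXXX` stands in for the verification value Azure AD displays during the process.

```bash
# Add the Azure AD verification value as a TXT record at the zone apex
az network dns record-set txt add-record --resource-group RG1 \
  --zone-name contoso.com --record-set-name "@" --value "MS=msXXXXXXXX"
```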
212
[View Question](https://www.examtopics.com/discussions/databricks/view/38200-exam-az-104-topic-5-question-30-discussion/) You plan to create an Azure virtual machine named VM1 that will be configured as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039100001.png) The planned disk configurations for VM1 are shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039200001.jpg) You need to ensure that VM1 can be created in an Availability Zone. Which two settings should you modify? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Use managed disks B. OS disk type C. Availability options D. Size E. Image
A and C A. **Use managed disks:** Managed disks are required for placing VMs in Availability Zones. Using unmanaged disks prevents the VM from being placed in an Availability Zone. C. **Availability options:** To place a VM in an Availability Zone, you must select the Availability Zone within the VM's settings during creation or configuration. This is typically found within a high availability section or similar. B. **OS disk type:** While the OS disk type is a VM setting, it doesn't directly prevent placement in an Availability Zone. The type of disk (e.g., Standard HDD, Premium SSD) doesn't prohibit AZ placement if managed disks are used. D. **Size:** The size of the VM is also unrelated to its ability to be deployed within an Availability Zone. E. **Image:** The source image used to create the VM does not prevent AZ deployment. Note: The provided discussion shows some agreement on the correct answer. There's no explicit disagreement shown.
213
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38206-exam-az-104-topic-5-question-31-discussion/) HOTSPOT - You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039500001.png) *(Image shows a table with Resource Group names (RG1, RG2, RG3) and Locations (West US, East US, West Europe) for various resources, including a VMSS1 in RG1, West US)* VMSS1 is set to VM (virtual machines) orchestration mode. You need to deploy a new Azure virtual machine named VM1, and then add VM1 to VMSS1. Which resource group and location should you use to deploy VM1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039500002.png) *(Image is a hot area selection box with two selection options)* **
** * **Box 1:** RG1 * **Box 2:** West US **Explanation:** While the question allows for VM1 to be in any of the resource groups, best practice and Microsoft's recommendations strongly suggest placing VM1 in the same resource group as VMSS1 (RG1). This aligns resources with shared lifecycles for easier management. The VM must also be in the same region (West US) as the VMSS1 to be added. **Why other options are incorrect:** * **Box 1: RG2 or RG3:** Although technically possible to add a VM from a different resource group to a VMSS in VM orchestration mode, it is not the recommended practice. Microsoft best practices suggest keeping related resources in the same resource group for simplified management and lifecycle control. The discussion highlights this recommendation. * **Box 2: East US or West Europe:** The VM must reside in the same region as the VMSS to be added. VMSS1 is located in West US, therefore, VM1 must also be in West US. **Note:** The discussion shows some disagreement on whether *only* RG1 is correct for Box 1. Some users suggest any RG is acceptable, but others strongly advocate for RG1 based on best practice and alignment with Microsoft's recommendations. The answer provided reflects the best practice and the most commonly accepted solution based on the discussion.
214
[View Question](https://www.examtopics.com/discussions/databricks/view/38220-exam-az-104-topic-2-question-27-discussion/) You have an Azure subscription. You have 100 Azure virtual machines. You need to quickly identify underutilized virtual machines that can have their service tier changed to a less expensive offering. Which blade should you use? A. Monitor B. Advisor C. Metrics D. Customer insights
B. Advisor The Advisor blade in Azure provides personalized recommendations to optimize your Azure resources, including cost optimization. It analyzes your resource usage and suggests ways to reduce costs, such as identifying underutilized VMs that could be moved to a less expensive service tier. This directly addresses the problem described in the question. Why other options are incorrect: * **A. Monitor:** The Monitor blade provides monitoring data, but doesn't offer recommendations for optimization. It shows what's happening, but doesn't suggest improvements. * **C. Metrics:** Metrics show performance data, similar to Monitor. They don't proactively suggest cost-saving changes. * **D. Customer insights:** This is unrelated to Azure resource management or cost optimization. Note: The discussion unanimously agrees on the correct answer.
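The same recommendations are scriptable; a quick sketch:

```bash
# List Advisor cost recommendations (includes right-size / shut-down VM advice)
az advisor recommendation list --category Cost --output table
```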
215
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38233-exam-az-104-topic-2-question-33-discussion/) You need to ensure that an Azure Active Directory (Azure AD) user named Admin1 is assigned the required role to enable Traffic Analytics for an Azure subscription. Solution: You assign the Network Contributor role at the subscription level to Admin1. Does this meet the goal? A. Yes B. No **
** A. Yes The Network Contributor role at the subscription level allows enabling Traffic Analytics. The provided discussion explicitly states that "Your account must have any one of the following Azure roles at the subscription scope: Owner, Contributor, Reader, or Network Contributor." Therefore, assigning the Network Contributor role to Admin1 fulfills the requirement. **Why other options are incorrect:** B. No is incorrect because the Network Contributor role, as confirmed by the discussion and referenced documentation, provides sufficient permissions to enable Traffic Analytics. **Note:** The answer is based solely on the provided content. There is no indication of conflicting opinions or disagreement within the provided discussion.
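The assignment itself is a one-liner; the UPN and subscription ID are placeholders.

```bash
# Grant Admin1 the Network Contributor role at subscription scope
az role assignment create --assignee admin1@contoso.com \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>"
```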
216
[View Question](https://www.examtopics.com/discussions/databricks/view/38235-exam-az-104-topic-2-question-34-discussion/) You need to ensure that an Azure Active Directory (Azure AD) user named Admin1 is assigned the required role to enable Traffic Analytics for an Azure subscription. Solution: You assign the Owner role at the subscription level to Admin1. Does this meet the goal? A. Yes B. No
A. Yes Explanation: The provided solution correctly meets the goal. According to the discussion, assigning the Owner role at the subscription level grants the necessary permissions to enable Traffic Analytics. The Owner role encompasses the permissions required by Traffic Analytics. Other roles mentioned (Contributor, Reader, Network Contributor) may also work, depending on their specific permissions. Other Options: B. No - This is incorrect. Assigning the Owner role is sufficient to enable Traffic Analytics, as confirmed by multiple users in the discussion. Note: While the discussion mentions the "least privilege principle," and suggests that the Owner role might not be strictly *required*, it does grant the necessary permissions. Therefore, assigning the Owner role fulfills the stated goal. There appears to be some disagreement on whether the Owner role is the *most* appropriate role based on best practices, but it is sufficient to meet the stated goal.
217
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38238-exam-az-104-topic-3-question-25-discussion/) HOTSPOT - You have an Azure Storage account named storage1 that uses Azure Blob storage and Azure File storage. You need to use AzCopy to copy data to the blob storage and file storage in storage1. Which authentication method should you use for each type of storage? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0018700001.png) *(Image shows two boxes; Box 1 for Blob Storage and Box 2 for File Storage)* **
** For Blob storage (Box 1), both Azure Active Directory (Azure AD) and Shared Access Signature (SAS) tokens are supported authentication methods for AzCopy. For File storage (Box 2), only Shared Access Signature (SAS) tokens are supported. **Explanation:** The provided documentation and the highly upvoted response in the discussion initially indicate that only SAS tokens are supported for File Storage while both Azure AD and SAS tokens are supported for Blob Storage. However, there is conflicting information in the discussion, with some users suggesting that Azure AD is now also supported for File Shares, depending on the target of the operation. The discussion does not definitively resolve this conflict. Therefore, the answer reflects the most commonly accepted and initially documented answer, but acknowledges the ongoing debate. **Why other options are incorrect (based on the primary consensus):** Using only Azure AD for File storage would be incorrect based on the initial documentation and primary consensus within the discussion. Using only Access Keys or other methods not mentioned (e.g., connection strings) would also be incorrect as the question specifically restricts the options to Azure AD and SAS tokens. **Note:** There's a disagreement within the discussion regarding the current support for Azure AD with AzCopy and Azure File Storage. The answer provided reflects the initially presented and widely accepted view, while acknowledging the conflicting information presented by some users in the discussion. The most current documentation should be consulted for the definitive answer.
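A sketch of the two transfer styles described above; the container, share, and SAS token are placeholders.

```bash
# Blob endpoint: Azure AD sign-in is supported
azcopy login
azcopy copy ./data "https://storage1.blob.core.windows.net/container1" --recursive

# File endpoint: append a SAS token to the share URL
azcopy copy ./data "https://storage1.file.core.windows.net/share1?<SAS-token>" --recursive
```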
218
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38239-exam-az-104-topic-3-question-31-discussion/) HOTSPOT - You have an Azure subscription named Subscription1 that contains the resources shown in the following table:

| Resource Group | Resource Type | Resource Name | Location |
| --- | --- | --- | --- |
| RG1 | Recovery Services vault | Vault1 | West Europe |
| RG2 | Storage Account | storage1 | West Europe |
| RG2 | Storage Account | storage2 | West US |
| RG3 | Storage Account | storage3 | West Europe |
| RG4 | Log Analytics workspace | Analytics1 | East US |
| RG5 | Log Analytics workspace | Analytics2 | East US 2 |
| RG6 | Log Analytics workspace | Analytics3 | West Europe |

You plan to configure Azure Backup reports for Vault1. You are configuring the Diagnostics settings for the AzureBackupReports log. Which storage accounts and which Log Analytics workspaces can you use for the Azure Backup reports of Vault1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** **Storage Accounts:** storage1 and storage3

**Log Analytics Workspaces:** Analytics1, Analytics2, and Analytics3

The location of the Log Analytics workspace is independent of the location of the Recovery Services vault (Vault1), so any of the listed workspaces can be used. However, only storage accounts in the same region as the vault (West Europe) can store the logs, so storage1 and storage3 are the only suitable options. Some users in the discussion argue that only Analytics3 is correct because of governance policies, but the documentation and the majority of the discussion support all three workspaces. The correct answer reflects the possible options, not necessarily the recommended ones.

**Why other options are incorrect:**

* **storage2:** This storage account is in West US, a different region from Vault1 (West Europe). Azure Backup reports require the storage account to be in the same region as the vault.
* There are no incorrect Log Analytics workspace options based on the provided information and the consensus in the discussion thread.
219
[View Question](https://www.examtopics.com/discussions/databricks/view/38242-exam-az-104-topic-2-question-35-discussion/) You need to ensure that an Azure Active Directory (Azure AD) user named Admin1 is assigned the required role to enable Traffic Analytics for an Azure subscription. Solution: You assign the Reader role at the subscription level to Admin1. Does this meet the goal? A. Yes B. No
B. No

Explanation: While the Reader role allows access to view data, it does not grant the permissions needed to *enable* Traffic Analytics. The documentation referenced in the discussion indicates that enabling Traffic Analytics requires the Owner, Contributor, Reader, or Network Contributor role at the subscription level. Although the Reader role is listed, the discussion highlights a subtle but important distinction: the Reader role provides access *after* Traffic Analytics is enabled, not the capability to enable it in the first place. The act of enabling requires more elevated permissions.

Why other options are incorrect:

* A. Yes: Incorrect because, while the Reader role grants access to *view* data after Traffic Analytics is enabled, it doesn't grant the permissions required to *enable* the feature. The discussion clarifies this crucial difference.

Note: There is some disagreement in the discussion; some users believe assigning the Reader role is sufficient, while others argue it is not. The answer above reflects the more widely supported and arguably more technically accurate interpretation, based on the understanding that "enabling" requires more privileges than "viewing."
220
[View Question](https://www.examtopics.com/discussions/databricks/view/38248-exam-az-104-topic-4-question-49-discussion/) You plan to deploy three Azure virtual machines named VM1, VM2, and VM3. The virtual machines will host a web app named App1. You need to ensure that at least two virtual machines are available if a single Azure datacenter becomes unavailable. What should you deploy? A. all three virtual machines in a single Availability Zone B. all virtual machines in a single Availability Set C. each virtual machine in a separate Availability Zone D. each virtual machine in a separate Availability Set
The correct answer is **C. each virtual machine in a separate Availability Zone**.

To ensure high availability across Azure datacenters, the virtual machines must be placed in different Availability Zones. An Availability Zone is a physically separate location within an Azure region. If one datacenter within a region fails, VMs in other Availability Zones within that region remain operational. Option C guarantees that at least two VMs survive a single datacenter outage, fulfilling the requirement.

Why other options are incorrect:

* **A. all three virtual machines in a single Availability Zone:** This places all VMs in the same physical location. If that zone fails, all three VMs become unavailable.
* **B. all virtual machines in a single Availability Set:** An Availability Set provides high availability *within* a datacenter by distributing VMs across fault and update domains, but it does not protect against datacenter-level failures.
* **D. each virtual machine in a separate Availability Set:** Similar to B, this only provides high availability within a single datacenter and does not protect against a datacenter outage.

Note: The discussion shows a consensus on C, with some additional clarification of the differences between Availability Sets and Availability Zones.
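For illustration, a minimal Azure PowerShell sketch of deploying the three VMs across zones 1-3; the resource group, region, image, and VM size defaults are assumptions, not from the question:

```powershell
# Deploy VM1-VM3 into zones 1, 2, and 3 of the same region so that a single
# datacenter (zone) failure leaves at least two VMs running.
$cred = Get-Credential   # admin credentials for the VMs

1..3 | ForEach-Object {
    New-AzVM -ResourceGroupName "RG1" -Name "VM$_" -Location "eastus2" `
        -Image "Win2016Datacenter" -Zone "$_" -Credential $cred
}
```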
221
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38264-exam-az-104-topic-4-question-5-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains the resources in the following table.

| Resource Group | Resource | Resource Type |
|---|---|---|
| RG1 | VNet1 | Virtual Network |
| RG2 | VNet2 | Virtual Network |
| RG1 | VM1 | Virtual Machine |
| RG1 | Disk1 | Disk |

VNet1 is in RG1. VNet2 is in RG2. There is no connectivity between VNet1 and VNet2. An administrator named Admin1 creates an Azure virtual machine named VM1 in RG1. VM1 uses a disk named Disk1 and connects to VNet1. Admin1 then installs a custom application in VM1. You need to move the custom application to VNet2. The solution must minimize administrative effort. Which two actions should you perform? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The correct approach involves these two steps:

1. **Delete VM1, keeping Disk1:** You cannot directly move a VM between VNets. The VM must be deleted, but the disk (Disk1) containing the custom application should be retained.
2. **Create a new VM in VNet2 and attach Disk1:** Create a new VM in the desired VNet (VNet2) and attach the previously retained Disk1 during creation. This preserves the custom application with minimal reinstallation effort.

**Why other options are incorrect:** The question is a multiple-choice hotspot, and the provided text doesn't list the alternatives. The suggested answer and discussion emphasize that simply moving a VM between VNets is not possible, which necessitates deletion and recreation. Any solution that attempts a direct move would be incorrect.

**Note:** The provided solution reflects a consensus from the discussion. There is no overt disagreement about the approach, although alternative solutions aren't presented.
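A hedged PowerShell sketch of the two steps, using the names from the question; the NIC name, VM size, and the choice to place the new VM in RG2 are assumptions:

```powershell
# Step 1: delete VM1. Removing the VM resource does not delete its managed disk.
Remove-AzVM -ResourceGroupName "RG1" -Name "VM1" -Force

# Step 2: recreate the VM in VNet2, attaching the existing OS disk.
$disk   = Get-AzDisk -ResourceGroupName "RG1" -DiskName "Disk1"
$subnet = (Get-AzVirtualNetwork -ResourceGroupName "RG2" -Name "VNet2").Subnets[0]
$nic    = New-AzNetworkInterface -ResourceGroupName "RG2" -Name "vm1-nic2" `
              -Location $disk.Location -SubnetId $subnet.Id

$vmConfig = New-AzVMConfig -VMName "VM1" -VMSize "Standard_D2s_v3"
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -ManagedDiskId $disk.Id -CreateOption Attach -Windows
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
New-AzVM -ResourceGroupName "RG2" -Location $disk.Location -VM $vmConfig
```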
222
[View Question](https://www.examtopics.com/discussions/databricks/view/38267-exam-az-104-topic-4-question-71-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does this meet the goal? A. Yes B. No
A. Yes

This solution correctly addresses the problem. The process outlined involves:

1. **Creating a Log Analytics workspace:** This provides the central location for collecting and analyzing logs.
2. **Installing the Microsoft Monitoring Agent (MMA) on VM1:** This agent collects logs from VM1 and sends them to the Log Analytics workspace. It's crucial to note that this refers to installing the *agent*, not the *extension*; the discussion highlights this distinction as a point of confusion.
3. **Creating an alert in Azure Monitor:** This uses the data in the Log Analytics workspace to trigger an alert when the specified condition (more than two error events within an hour) is met.

The process effectively monitors the System event log on VM1 and generates an alert as required.

Why other options are incorrect:

* B. No: Incorrect because the described solution fulfills the requirements of creating an alert based on the specified event log conditions.

Note: The discussion shows some disagreement on the precise method of installing the MMA (agent vs. extension); however, the core concept and final outcome are the same: a working monitoring and alerting solution.
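For reference, a hedged sketch of the Log Analytics query such an alert rule could use, written here as a PowerShell here-string; the Event table and its columns are the standard schema for data collected by the Microsoft Monitoring Agent:

```powershell
# KQL for the alert condition: error events in the System log. In the alert
# rule, set the evaluation window to 1 hour and trigger when the number of
# results is greater than 2.
$alertQuery = @"
Event
| where EventLog == "System" and EventLevelName == "Error"
"@
```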
223
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38268-exam-az-104-topic-4-question-70-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the data settings. You add the Microsoft Monitoring Agent VM extension to VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does this meet the goal? A. Yes B. No **
** B. No

**Explanation:** The proposed solution is incorrect because it states "You add the Microsoft Monitoring Agent VM extension to VM1". The discussion highlights that this is wrong. The correct approach involves installing the Microsoft Monitoring Agent (MMA), more accurately called the Log Analytics agent, directly onto VM1, not adding it as a VM extension. The Log Analytics agent collects the logs, which are then sent to the Log Analytics workspace, and Azure Monitor uses the data in the workspace to generate the alert. The confusion stems from terminology: MMA is sometimes referred to as the Log Analytics agent and is different from the newer Azure Monitor Agent (AMA). The solution incorrectly uses the term "extension", implying a different installation method than the one needed to achieve the goal.

**Why other options are incorrect:**

* **A. Yes:** Incorrect because adding the Microsoft Monitoring Agent as a VM extension is not the correct way to achieve the objective of monitoring the System event log and triggering an alert on error event counts. The correct method is installing the agent directly on the VM.

**Note:** The discussion shows some disagreement on the precise terminology (Microsoft Monitoring Agent vs. Log Analytics agent), but the core issue remains: the solution suggests adding the agent as an extension rather than installing it directly on the VM.
224
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38269-exam-az-104-topic-4-question-60-discussion/) HOTSPOT - You have an Azure subscription named Subscription1. Subscription1 contains two Azure virtual machines VM1 and VM2. VM1 and VM2 run Windows Server 2016. VM1 is backed up daily by Azure Backup without using the Azure Backup agent. VM1 is affected by ransomware that encrypts data. You need to restore the latest backup of VM1. To which location can you restore the backup? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0031100001.jpg) *(Image shows a hotspot question with two boxes needing answers)* **
** * **Box 1:** Any Windows computer with internet connectivity. Azure Backup allows restoring individual files from a VM backup to any Windows machine with internet access by downloading and running a recovery tool; the user can then map a network drive to access the recovered files. While the discussion mentions compatibility issues and suggests limiting the answer to VM1 and VM2, the documentation indicates a broader possibility.
* **Box 2:** VM1 or a new Azure virtual machine. Azure Backup can restore a complete VM backup either by replacing the existing VM (VM1) or by creating a new VM.

**Explanation:** The answer is based on the provided discussion and the referenced Microsoft documentation, which highlight the flexibility of restoring individual files to various Windows machines and the options for restoring the entire VM to either the original or a new VM.

**Why other options are incorrect:** The discussion acknowledges potential compatibility issues, but the core functionality, as documented by Microsoft, is not limited to only VM1 and VM2. Restoring to a non-Windows machine or a machine without internet access is not supported.

**Note:** There is slight disagreement in the discussion regarding the scope of possible restore locations for the files. The most accurate answer considers the broader capability described in the referenced documentation while noting the compatibility concerns raised in user comments.
225
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38270-exam-az-104-topic-4-question-59-discussion/) You have an Azure web app named App1. App1 has the deployment slots shown in the following table: [Image](https://www.examtopics.com/assets/media/exam-media/04223/0030900002.png) In webapp1-test, you test several changes to App1. You back up App1. You swap webapp1-test for webapp1-prod and discover that App1 is experiencing performance issues. You need to revert to the previous version of App1 as quickly as possible. What should you do? A. Redeploy App1 B. Swap the slots C. Clone App1 D. Restore the backup of App1 **
** B. Swap the slots

Swapping the slots reverses the previous swap operation, quickly restoring App1 to its previous state (webapp1-prod) before the changes were introduced from webapp1-test. This is the fastest method because it avoids the time-consuming processes of redeployment or restoration.

**Why other options are incorrect:**

* **A. Redeploy App1:** A much slower process that requires rebuilding the application from scratch or from a backup, taking considerable time and resources.
* **C. Clone App1:** Cloning creates a copy of the application, which is unnecessary and time-consuming when the previous version already exists in a deployment slot.
* **D. Restore the backup of App1:** This would work, but it is significantly slower than swapping the deployment slots. Restoring a backup involves downtime and is less efficient than the immediate reversal offered by a swap.

**Note:** The provided solution and explanation align with the discussion's suggested answer. There is no conflicting information presented in the discussion.
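A minimal sketch of the swap in Azure PowerShell, assuming webapp1-prod is the production slot and the app lives in a resource group named RG1 (neither detail is stated in the question):

```powershell
# Swapping the slots again immediately puts the previous version back into
# production; no redeployment or restore is needed.
Swap-AzWebAppSlot -ResourceGroupName "RG1" -Name "App1" `
    -SourceSlotName "webapp1-test" -DestinationSlotName "production"
```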
226
[View Question](https://www.examtopics.com/discussions/databricks/view/38271-exam-az-104-topic-4-question-58-discussion/) You have an Azure subscription named Subscription1 that is used by several departments at your company. Subscription1 contains the resources in the following table:

| Resource Group | Resource Type | Resource Name |
|---|---|---|
| RG1 | Virtual Machine | VM1 |
| RG1 | Storage Account | storage2 |
| RG2 | Container Instance | container1 |

Another administrator deploys a virtual machine named VM1 and an Azure Storage account named storage2 by using a single Azure Resource Manager template. You need to view the template used for the deployment. From which blade can you view the template that was used for the deployment?
A. VM1
B. RG1
C. storage2
D. container1
B. RG1

The Azure Resource Manager (ARM) template is associated with the resource group (RG1) that contains both the virtual machine (VM1) and the storage account (storage2). To view the template, navigate to the RG1 resource group in the Azure portal; the deployment history within the resource group shows the template used.

Why other options are incorrect:

* **A. VM1:** The VM1 blade shows information about the virtual machine itself, not the deployment template.
* **C. storage2:** The storage2 blade shows information about the storage account, not the deployment template.
* **D. container1:** This resource is in a different resource group (RG2) and was not part of the deployment in question.

Note: The discussion shows overwhelming agreement on the correct answer, B.
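The same deployment history can be read with Azure PowerShell; a small sketch, where the deployment name passed to the export cmdlet is a placeholder:

```powershell
# List the deployments recorded for RG1, matching the portal's Deployments blade.
Get-AzResourceGroupDeployment -ResourceGroupName "RG1" |
    Select-Object DeploymentName, Timestamp, ProvisioningState

# Save the template used by one deployment to a local JSON file.
Save-AzResourceGroupDeploymentTemplate -ResourceGroupName "RG1" -DeploymentName "<deployment-name>"
```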
227
[View Question](https://www.examtopics.com/discussions/databricks/view/38272-exam-az-104-topic-4-question-57-discussion/) You have an Azure subscription that contains a virtual machine scale set. The scale set contains four instances that have the following configurations:
✑ Operating system: Windows Server 2016
✑ Size: Standard_D1_v2
You run the `Get-AzVmss` cmdlet. Based on the information presented in the graphic, answer the following questions:
**Box 1:** If the administrator changes the size of the virtual machines in the scale set, how many virtual machines will be resized?
**Box 2:** If automatic OS upgrades are enabled, how many virtual machines will be updated at once?
**Box 1:** 4

If the administrator changes the size of the VMs in the scale set, all four VMs will be resized simultaneously. This is confirmed by multiple users in the discussion.

**Box 2:** 1

Automatic OS image upgrades are rolled out in batches of up to 20% of the scale set at a time, with a minimum batch size of one VM. In a scale set of 4 VMs, 20% is 0.8, so the minimum batch size of 1 VM applies. There is some discussion about whether the answer would be 0 if automatic upgrades were turned off, but the question assumes they are enabled.

There is some disagreement in the discussion regarding the interpretation of "changing the size" versus scaling out or in. However, the consensus and the provided answer lean toward all VMs being resized simultaneously when the administrator changes the VM size.
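A small sketch of inspecting these settings with `Get-AzVmss`; the resource group and scale set names are placeholders:

```powershell
# Read the scale set's current capacity, size, and upgrade policy.
$vmss = Get-AzVmss -ResourceGroupName "RG1" -VMScaleSetName "vmss1"

$vmss.Sku.Capacity         # current instance count (4 in this question)
$vmss.Sku.Name             # current size (Standard_D1_v2)
$vmss.UpgradePolicy.Mode   # Automatic, Rolling, or Manual
```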
228
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38273-exam-az-104-topic-4-question-56-discussion/) You have an Azure subscription. You have an on-premises virtual machine named VM1. The settings for VM1 are shown in the exhibit. (Click the Exhibit tab.) You need to ensure that you can use the disks attached to VM1 as a template for Azure virtual machines. What should you modify on VM1? **
** C. the hard drive

The correct answer is C because, to use the disks attached to VM1 as a template for Azure virtual machines, the virtual hard disk must be in a format Azure accepts. Azure requires a fixed-size VHD for uploaded disks, so a VHDX disk must be converted. Converting the hard drive from VHDX to VHD (if necessary) and ensuring it is a fixed-size disk are the steps needed for compatibility and performance when creating a template.

**Why other options are incorrect:**

* **A. the memory:** Memory size affects the VM's performance but does not affect its usability as a template.
* **B. the network adapters:** Network configuration matters for the VM's connectivity, not for using its disks as a template.
* **D. the processor:** The processor type is relevant to performance but is not a factor when creating a template.
* **E. Integration Services:** Integration Services are unrelated to creating a template from an on-premises VM.

**Note:** The discussion shows some disagreement on the exact VHD/VHDX support details and the maximum disk sizes Azure has allowed over time. The answer reflects the most up-to-date information available in the discussion, acknowledging that Azure's support has evolved.
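If the disk is VHDX, the Hyper-V PowerShell module can convert it; a hedged sketch with placeholder paths:

```powershell
# Convert a VHDX into the fixed-size VHD format that Azure expects for
# uploaded disks (requires the Hyper-V PowerShell module).
Convert-VHD -Path "D:\VMs\VM1\disk0.vhdx" `
    -DestinationPath "D:\VMs\VM1\disk0.vhd" -VHDType Fixed
```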
229
[View Question](https://www.examtopics.com/discussions/databricks/view/38274-exam-az-104-topic-4-question-51-discussion/) You have an Azure subscription that contains an Azure virtual machine named VM1. VM1 runs a financial reporting app named App1 that does not support multiple active instances. At the end of each month, CPU usage for VM1 peaks when App1 runs. You need to create a scheduled runbook to increase the processor performance of VM1 at the end of each month. What task should you include in the runbook? A. Add the Azure Performance Diagnostics agent to VM1. B. Modify the VM size property of VM1. C. Add VM1 to a scale set. D. Increase the vCPU quota for the subscription. E. Add a Desired State Configuration (DSC) extension to VM1.
The correct answer is **B. Modify the VM size property of VM1.**

This solution directly addresses the problem. The runbook would change the VM size to a larger size with more vCPUs, increasing processor performance. This action is directly controllable from a scheduled runbook.

Why other options are incorrect:

* **A. Add the Azure Performance Diagnostics agent to VM1:** This agent monitors performance; it doesn't improve it. The question asks for a task that *increases* performance.
* **C. Add VM1 to a scale set:** Scale sets manage multiple instances of a VM, and the problem explicitly states that the application does not support multiple active instances.
* **D. Increase the vCPU quota for the subscription:** This raises the *overall* quota but doesn't automatically allocate more vCPUs to VM1; the runbook would still need to resize VM1 specifically.
* **E. Add a Desired State Configuration (DSC) extension to VM1:** DSC manages the configuration of the VM, not its resources (such as vCPU allocation).

Note: The provided discussion shows a consensus on answer B.
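A hedged sketch of the resize step such a runbook could run; the resource group name and target size are assumptions, and the runbook would authenticate with its own identity (for example, a managed identity) before these lines:

```powershell
# Resize VM1 to a larger SKU at month's end; Update-AzVM applies the change
# (the VM restarts as part of the resize).
$vm = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"
$vm.HardwareProfile.VmSize = "Standard_D4s_v3"
Update-AzVM -ResourceGroupName "RG1" -VM $vm
```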
230
[View Question](https://www.examtopics.com/discussions/databricks/view/38276-exam-az-104-topic-4-question-50-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2019. You save VM1 as a template named Template1 to the Azure Resource Manager library. You plan to deploy a virtual machine named VM2 from Template1. What can you configure during the deployment of VM2? A. operating system B. administrator username C. virtual machine size D. resource group
D. resource group

Explanation: Based on the discussion, the only setting that needs manual configuration when deploying VM2 from Template1 is the resource group. The operating system, administrator username, and virtual machine size are inherited from the template (Template1). The administrator password *may* need to be changed manually, but the resource group is explicitly stated as requiring manual configuration.

Why other options are incorrect:

* **A. operating system:** The operating system is defined in the template and is not configurable during deployment from the template.
* **B. administrator username:** The username is likely inherited from the template; it is the *password* that may need to be changed, not the username.
* **C. virtual machine size:** The virtual machine size is also inherited from the template and cannot be changed during the deployment of VM2.

Note: There is some ambiguity in the discussion regarding the administrator password. One user claims the password must be updated manually, while another indicates the username/password are pulled from the template. The answer prioritizes the explicit mention of the resource group requiring manual configuration.
231
[View Question](https://www.examtopics.com/discussions/databricks/view/38278-exam-az-104-topic-4-question-47-discussion/) You have an Azure subscription named Subscription1. You deploy a Linux virtual machine named VM1 to Subscription1. You need to monitor the metrics and the logs of VM1. What should you use? A. Azure HDInsight B. Linux Diagnostic Extension (LAD) 3.0 C. the AzurePerformanceDiagnostics extension D. Azure Analysis Services
B. Linux Diagnostic Extension (LAD) 3.0

The Linux Diagnostic Extension is designed specifically for monitoring metrics and logs on Linux VMs within Azure. It can collect metrics, syslog data, and files, fulfilling the requirements outlined in the question.

Why other options are incorrect:

* **A. Azure HDInsight:** A big data analytics service, not a monitoring tool for individual VMs.
* **C. the AzurePerformanceDiagnostics extension:** This extension is for Windows VMs, not Linux VMs.
* **D. Azure Analysis Services:** A business intelligence engine; it is not relevant to VM monitoring.
232
[View Question](https://www.examtopics.com/discussions/databricks/view/38279-exam-az-104-topic-4-question-46-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the RG1 blade, you click Deployments. Does this meet the goal? A. Yes B. No
A. Yes

Explanation: Clicking "Deployments" in the RG1 blade in the Azure portal shows a history of deployments for that resource group, including the date and time of each deployment. This directly addresses the need to view the creation date and time of the resources.

Why other options are incorrect:

* B. No: Incorrect because the "Deployments" view within the resource group blade provides the required information about deployment timestamps.

Note: There is disagreement among users regarding the correct answer. Some believe clicking "Deployments" directly provides the needed information, while others argue it is not sufficient. The explanation above supports the "Yes" answer while acknowledging the conflicting opinions.
233
[View Question](https://www.examtopics.com/discussions/databricks/view/38280-exam-az-104-topic-4-question-45-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the RG1 blade, you click Automation script. Does this meet the goal? A. Yes B. No
B. No

To view the date and time when resources were created in RG1, you should click "Deployments" from the RG1 blade, not "Automation script". The Deployments section shows a history of deployments for the resource group, including timestamps. Clicking "Automation script" shows an exported ARM template of the resource group's current resources, not the creation timestamps of the resources themselves.

The provided solution is incorrect because it suggests an action (clicking "Automation script") that does not provide the required information (creation date and time of resources). The discussion agrees that the correct approach is to use the "Deployments" option.
234
[View Question](https://www.examtopics.com/discussions/databricks/view/38284-exam-az-104-topic-4-question-43-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0028600001.png) You need to configure a proximity placement group for VMSS1. Which proximity placement groups should you use? A. Proximity2 only B. Proximity1, Proximity2, and Proximity3 C. Proximity1 only D. Proximity1 and Proximity3 only
A. Proximity2 only

The correct answer is A because VMSS1 is located in the East US region, and only Proximity2 is in the same region (East US). Proximity placement groups work within a specific Azure region to ensure low-latency communication between VMs. Since Proximity1 and Proximity3 are in different regions, they cannot be used for VMSS1.

Why other options are incorrect:

* **B. Proximity1, Proximity2, and Proximity3:** Incorrect because Proximity1 and Proximity3 are in different regions than VMSS1.
* **C. Proximity1 only:** Incorrect because Proximity1 is in a different region than VMSS1.
* **D. Proximity1 and Proximity3 only:** Incorrect because both Proximity1 and Proximity3 are in different regions than VMSS1.

Note: The provided discussion indicates a consensus on answer A.
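A hedged sketch of creating a proximity placement group in the scale set's region and referencing it in a new scale set configuration; the resource group name and the use of `New-AzVmssConfig` here are assumptions:

```powershell
# A proximity placement group is regional, so it must be created in the same
# region as the scale set (East US in this question).
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "RG1" -Name "Proximity2" `
    -Location "eastus" -ProximityPlacementGroupType Standard

# Reference the group when building a scale set configuration.
$vmssConfig = New-AzVmssConfig -Location "eastus" -ProximityPlacementGroupId $ppg.Id
```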
235
[View Question](https://www.examtopics.com/discussions/databricks/view/38287-exam-az-104-topic-5-question-3-discussion/) HOTSPOT - You plan to deploy five virtual machines to a virtual network subnet. Each virtual machine will have a public IP address and a private IP address. Each virtual machine requires the same inbound and outbound security rules. What is the minimum number of network interfaces and network security groups that you require? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035000001.jpg) *(Image contains a hotspot question with boxes for "Number of Network Interfaces" and "Number of Network Security Groups")* Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035100001.jpg) *(Image shows the suggested answer as 5 Network Interfaces and 1 Network Security Group)*
The correct answer is 5 Network Interfaces and 1 Network Security Group.

Each virtual machine needs a network interface to connect to the virtual network, so five virtual machines require five network interfaces. A single network interface can have both a public and a private IP address assigned to it. Because every virtual machine requires the same inbound and outbound security rules, a single Network Security Group (NSG) can be applied to all five interfaces.

Why other options are incorrect: The discussion does not analyze other combinations; the consensus among users is that the provided answer is correct.

Note: While the provided solution is the accepted answer in the discussion, there may be differing interpretations or nuances of network configuration not fully explored in the context.
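A hedged sketch of sharing one NSG across five NICs; the resource group, location, and NIC names are placeholders:

```powershell
# One NSG holds the common rules; each of the five NICs references it.
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName "RG1" -Location "eastus" -Name "NSG1"

1..5 | ForEach-Object {
    $nic = Get-AzNetworkInterface -ResourceGroupName "RG1" -Name "nic$_"
    $nic.NetworkSecurityGroup = $nsg
    Set-AzNetworkInterface -NetworkInterface $nic   # apply the association
}
```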
236
[View Question](https://www.examtopics.com/discussions/databricks/view/38288-exam-az-104-topic-4-question-6-discussion/) You download an Azure Resource Manager template based on an existing virtual machine. The template will be used to deploy 100 virtual machines. You need to modify the template to reference an administrative password. You must prevent the password from being stored in plain text. What should you create to store the password? A. an Azure Key Vault and an access policy B. an Azure Storage account and an access policy C. a Recovery Services vault and a backup policy D. Azure Active Directory (AD) Identity Protection and an Azure policy
The correct answer is **A. an Azure Key Vault and an access policy**.

Azure Key Vault is designed to securely store and manage sensitive information such as passwords, certificates, and encryption keys. Using Key Vault prevents the password from being stored directly in the ARM template, improving security. The access policy controls which users or services can access the stored password, further enhancing security.

Why other options are incorrect:

* **B. an Azure Storage account and an access policy:** Azure Storage is designed for storing data, not for securing sensitive information like passwords. Storing passwords directly in storage accounts is highly insecure.
* **C. a Recovery Services vault and a backup policy:** Recovery Services vaults are used for backup and disaster recovery, not for secure password storage.
* **D. Azure Active Directory (AD) Identity Protection and an Azure policy:** Azure AD Identity Protection is crucial for identity security, but it doesn't store secrets like passwords. Azure policies manage resource configurations, not the secure storage of secrets.

There is a consensus in the discussion that option A is the correct answer.
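A hedged sketch of preparing such a vault; the vault name, secret name, and password value are placeholders. The `-EnabledForTemplateDeployment` switch lets Resource Manager read the secret during deployment, and the template's parameter file would then reference the secret by vault ID and secret name:

```powershell
# Create a vault that ARM template deployments are allowed to read from.
New-AzKeyVault -Name "kv-deploy-demo" -ResourceGroupName "RG1" `
    -Location "eastus" -EnabledForTemplateDeployment

# Store the administrative password as a secret.
$secret = ConvertTo-SecureString "P@ssw0rd-placeholder" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "kv-deploy-demo" -Name "adminPassword" -SecretValue $secret
```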
237
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38292-exam-az-104-topic-4-question-7-discussion/) You have the App Service plans shown in the following table.

| App Service Plan | Location | OS | Pricing Tier |
|---|---|---|---|
| ASP1 | West US | Windows | Standard |
| ASP2 | Central US | Windows | Standard |
| ASP3 | West US | Linux | Standard |

You plan to create the Azure web apps shown in the following table.

| Web App | Runtime Stack |
|---|---|
| WebApp1 | ASP.NET Core |
| WebApp2 | ASP.NET |

You need to identify which App Service plans can be used for the web apps. What should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** **Box 1: ASP1 and ASP3**

WebApp1 uses ASP.NET Core, which is cross-platform and can run on both Windows and Linux. It can therefore be deployed to ASP1 (Windows) or ASP3 (Linux), both of which are in the same region (West US) as WebApp1.

**Box 2: ASP1**

WebApp2 uses ASP.NET, which is only compatible with Windows. It can only be deployed to ASP1, the only Windows-based App Service plan in the same region (West US) as WebApp2.

**Why other options are incorrect:**

* **ASP2:** Incorrect for both WebApp1 and WebApp2 because it is located in Central US, a different region from both web apps. Region matching is crucial for App Service plans.

**Note on disagreement:** There is some discussion in the original post regarding the cross-platform capabilities of ASP.NET vs. ASP.NET Core. The generally accepted answer, supported by Microsoft documentation, is that ASP.NET Core is cross-platform (Windows and Linux) while ASP.NET is Windows-only. Some comments in the discussion incorrectly claim ASP.NET is cross-platform; the answer above reflects the correct understanding based on current documentation.
238
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38293-exam-az-104-topic-5-question-13-discussion/) You have an Azure subscription that contains the resource groups shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0036600001.png) RG1 contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0036600002.png) VM1 is running and connects to NIC1 and Disk1. NIC1 connects to VNET1. RG2 contains a public IP address named IP2 that is in the East US location. IP2 is not assigned to a virtual machine. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0036700001.jpg) **
** Yes, Yes, No

* **Statement 1 (Disk1): Yes.** The storage account (Disk1) can be moved from RG1 to RG2; the region remains unchanged (West US). Multiple users confirmed this through testing in a lab environment.
* **Statement 2 (NIC1): Yes.** The network interface card (NIC1), even while attached to a running VM (VM1), can be moved from RG1 to RG2. Again, the region remains unchanged (West US). This was verified by several users through lab testing.
* **Statement 3 (IP2): No.** The public IP address (IP2) can be moved from RG2 to RG1, but its location (East US) remains unchanged. This was confirmed by user testing.

**Why other options are incorrect:** The provided suggested answer and user testing consistently support the Yes, Yes, No response. There is no evidence in the provided context supporting any other combination of answers; the discussion shows a consensus among multiple users who performed lab tests to verify the behavior.
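A hedged sketch of such a move with Azure PowerShell, using NIC1 as the example; the move keeps the resource's region, and the names come from the question:

```powershell
# Look up NIC1 and move it from RG1 to RG2. Moving between resource groups
# never changes the resource's region.
$nic = Get-AzResource -ResourceGroupName "RG1" -Name "NIC1"
Move-AzResource -DestinationResourceGroupName "RG2" -ResourceId $nic.ResourceId
```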
239
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38296-exam-az-104-topic-3-question-5-discussion/) HOTSPOT - You need to create an Azure Storage account that meets the following requirements: ✑ Minimizes costs ✑ Supports hot, cool, and archive blob tiers ✑ Provides fault tolerance if a disaster affects the Azure region where the account resides How should you complete the command? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014700004.jpg) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014800001.jpg) **
** Box 1: StorageV2
Box 2: Standard_GRS

**Explanation:**

* **Box 1: StorageV2:** General-purpose v2 (GPv2) storage accounts are selected because they are the only type that supports hot, cool, and archive blob tiers. GPv1 accounts do not support tiering, and the question requires all three tiers.
* **Box 2: Standard_GRS:** Geo-redundant storage (GRS) provides fault tolerance by replicating data to a secondary Azure region. This ensures data availability even if a disaster impacts a single region.

**Why other options are incorrect:**

* **Locally-redundant storage (LRS):** LRS replicates data within a single region and does not meet the requirement for fault tolerance against regional disasters.
* **Read-access geo-redundant storage (RA-GRS):** RA-GRS offers geo-replication plus read access to the secondary region but is more expensive than Standard_GRS. The question specifies minimizing costs, making RA-GRS less suitable.

There is no evidence of disagreement in the provided discussion; the suggested answer is unanimously agreed upon.
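The completed command would look roughly like the following; the account and resource group names are placeholders:

```powershell
# GPv2 account with geo-redundant replication: supports hot/cool/archive tiers
# and survives a regional disaster at the lowest cost among the geo options.
New-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1demo" `
    -Location "eastus" -Kind StorageV2 -SkuName Standard_GRS
```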
240
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38302-exam-az-104-topic-3-question-13-discussion/) You have several Azure virtual machines on a virtual network named VNet1. You configure an Azure Storage account as shown in the following exhibit. (Image shows a storage account configuration with a virtual network and firewall rules). Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. (Hot Area image shows two dropdowns, Box 1 and Box 2, each with options "Always", "Never", and "Sometimes"). **
** Box 1: Never. The VMs in the 10.2.9.0/24 subnet are outside the allowed subnet (10.2.0.0/24) of VNet1. They will therefore never be able to access the storage account unless access is explicitly granted through other means (e.g., adding the 10.2.9.0/24 subnet to the storage account's allowed networks).

Box 2: Never. The image shows that "Allow trusted Microsoft services to access this storage account" is not checked. Azure Backup, even though it is a trusted Microsoft service, requires this option to be enabled to access a storage account restricted by virtual network and firewall rules.

**Why other options are incorrect:**

* **"Always" and "Sometimes" for Box 1:** The provided configuration explicitly restricts access to the configured virtual network subnet (10.2.0.0/24). The 10.2.9.0/24 subnet is outside this range, so access from it is never granted unless explicitly configured otherwise.
* **"Always" and "Sometimes" for Box 2:** The Azure Backup service needs explicit permission to access storage accounts protected by virtual networks and firewalls. Because the trusted Microsoft services option isn't checked, access is never granted unless the setting is changed.

**Note:** The discussion highlights a potential area of disagreement about the relevance of the VNet's address space versus the actual subnet assigned to the VMs. The provided solution correctly focuses on the subnet assignment of the VMs to determine access.
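For reference, a hedged sketch of the setting Box 2 hinges on, toggled with Azure PowerShell; the account and resource group names are placeholders:

```powershell
# Allow trusted Microsoft services (such as Azure Backup) to bypass the
# storage account's network rules.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "RG1" `
    -Name "storage1" -Bypass AzureServices
```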
241
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38305-exam-az-104-topic-4-question-8-discussion/) You create a virtual machine scale set named Scale1. Scale1 is configured as shown in the following exhibit. (Image shows a configuration with "Initial instance count: 4", "Scale out: Increase by 2 if CPU > 80%", "Scale in: Decrease by 4 if CPU < 30%", "Minimum instance count: 2") Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. (Image shows two dropdowns, "Box 1:" and "Box 2:") Box 1: [Dropdown Menu] Box 2: [Dropdown Menu] **
** Box 1: 6 virtual machines
Box 2: 2 virtual machines

**Explanation:**

* **Box 1:** The initial instance count is 4. The scale-out rule adds 2 VMs when CPU usage exceeds 80%, so the count rises to 4 + 2 = 6.
* **Box 2:** The scale-in rule removes 4 VMs when CPU usage falls below 30%, but the minimum instance count is 2. Starting from 4 instances, removing 4 would leave 0, which is below the minimum, so the scale-in action only reduces the count to 2.

**Why other options are incorrect:** No other options are explicitly provided in the text. The question is a fill-in-the-blank style with dropdown menus, and the answer follows directly from the stated configuration of the virtual machine scale set; any other value would contradict that configuration.

**Note:** The discussion section shows unanimous agreement on this answer.
242
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38313-exam-az-104-topic-3-question-18-discussion/) HOTSPOT - You plan to create an Azure Storage account in the Azure region of East US 2. You need to create a storage account that meets the following requirements: ✑ Replicates synchronously. ✑ Remains available if a single data center in the region fails. How should you configure the storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0017400003.png) *(Image shows two boxes with blanks to be filled)* **
** Box 1: **Zone-redundant storage (ZRS)**
Box 2: **StorageV2 (general purpose V2)**

ZRS replicates data synchronously across three storage clusters within a single Azure region. This ensures high availability even if a single data center fails. Only General Purpose V2 storage accounts support ZRS.

**Why other options are incorrect:**

* **LRS (Locally Redundant Storage):** LRS does not protect against data center failure within a region. If the data center containing the storage account fails, the data becomes unavailable.
* **GRS (Geo-Redundant Storage) and RA-GRS (Read-access Geo-Redundant Storage):** These options replicate asynchronously to a secondary region. They offer redundancy across regions but do not provide the required synchronous replication; they offer regional redundancy, not data center redundancy within a single region.
* **Other storage account types besides General Purpose V2:** There is some discussion about ZRS supporting other storage types (Premium Block Blobs and Premium File Shares), but the provided solution and most upvoted answer state that only GPv2 supports ZRS, making that the most likely and reliable answer based on the provided information.

**Note:** There is a minor disagreement in the discussion about whether ZRS supports storage types beyond General Purpose V2. The answer prioritizes the most upvoted response and the suggested answer while acknowledging the conflicting information.
243
[View Question](https://www.examtopics.com/discussions/databricks/view/38321-exam-az-104-topic-2-question-36-discussion/) You have an Azure subscription that contains a user named User1. You need to ensure that User1 can deploy virtual machines and manage virtual networks. The solution must use the principle of least privilege. Which role-based access control (RBAC) role should you assign to User1? A. Owner B. Virtual Machine Contributor C. Contributor D. Virtual Machine Administrator Login
C. Contributor

Explanation: The question specifies that User1 needs to deploy virtual machines *and* manage virtual networks. The principle of least privilege dictates assigning only the necessary permissions. Option B (Virtual Machine Contributor) allows VM management but does *not* grant the permissions needed to manage virtual networks. Option A (Owner) provides excessive permissions. Option D (Virtual Machine Administrator Login) only grants the right to sign in to VMs as an administrator; it does not allow deploying VMs or managing networks. The Contributor role (C) is therefore the most appropriate choice, as it grants the required permissions for both VMs and virtual networks while staying closest to least privilege among the listed options.

Note: There is significant disagreement in the discussion regarding the correct answer. Many commenters initially favored option B, but the consensus after further discussion and referencing Microsoft documentation leans toward option C due to the requirement to manage both VMs and VNets. The ambiguity highlights the importance of careful consideration of role definitions when applying the principle of least privilege.
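A minimal sketch of the assignment; the user principal name and subscription ID are placeholders:

```powershell
# Assign Contributor to User1 at subscription scope so the user can deploy
# VMs and manage virtual networks.
New-AzRoleAssignment -SignInName "user1@contoso.com" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```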
244
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38336-exam-az-104-topic-4-question-37-discussion/) You have Azure subscriptions named Subscription1 and Subscription2. Subscription1 has the following resource groups: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027600001.png) RG1 includes a web app named App1 in the West Europe location. Subscription2 contains the following resource groups: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027600002.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027700001.jpg) **
** Yes, Yes, Yes

The correct answer is Yes, Yes, Yes, per the suggested answer image. The discussion shows some disagreement: one user suggests Yes, Yes, Yes, while another suggests No, Yes, Yes.

**Explanation:** The question assesses understanding of Azure resource locks and their impact on moving resources. Resource locks affect the *resource itself*, not the resource group: a resource can be moved to a resource group with a read-only or delete lock as long as the resource itself doesn't have a lock preventing the move. Given the suggested answer, all three statements are true, provided the resources themselves carry no blocking locks.

**Why other options are incorrect (based on the provided information and the suggested answer):** The discussion includes a conflicting opinion of No, Yes, Yes. Without access to the three statements being evaluated, it is impossible to determine definitively why that alternative was proposed. Given the suggested answer image indicating Yes, Yes, Yes, that answer is assumed correct and the dissenting viewpoint incorrect.
245
[View Question](https://www.examtopics.com/discussions/databricks/view/38353-exam-az-104-topic-3-question-6-discussion/) You have an Azure subscription that contains the resources in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014900001.png) Store1 contains a file share named data. Data contains 5,000 files. You need to synchronize the files in the file share named data to an on-premises server named Server1. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Create a container instance B. Register Server1 C. Install the Azure File Sync agent on Server1 D. Download an automation script E. Create a sync group
The correct answer is B, C, and E.

To synchronize files from an Azure file share to an on-premises server using Azure File Sync, these steps are necessary:

* **C. Install the Azure File Sync agent on Server1:** The agent is required on the on-premises server to enable synchronization with the Azure file share. Without it, the server cannot participate in the synchronization process.
* **B. Register Server1:** The on-premises server must be registered with the Azure Storage Sync Service. Registration establishes the necessary trust relationship and allows the server to communicate with the service for synchronization.
* **E. Create a sync group:** A sync group defines the synchronization relationship between the Azure file share (cloud endpoint) and the on-premises server (server endpoint). This step is crucial to define *what* files are being synced and *how*.

Why other options are incorrect:

* **A. Create a container instance:** Azure container instances are not relevant to synchronizing files between an Azure file share and an on-premises server; Azure File Sync uses a different mechanism.
* **D. Download an automation script:** Automation scripts *could* manage aspects of the process (such as registration), but they are not a core requirement for the synchronization itself. The core functionality is provided by the agent and the sync group.

Note: The provided solution is based on the user insights and is considered the correct approach to the problem presented. There is no conflicting information presented in the discussion.
246
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38358-exam-az-104-topic-2-question-28-discussion/) HOTSPOT - You have an Azure Active Directory (Azure AD) tenant. You need to create a conditional access policy that requires all users to use multi-factor authentication when they access the Azure portal. Which three settings should you configure? To answer, select the appropriate settings in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0007300001.png) **
** To enforce multi-factor authentication (MFA) for all users accessing the Azure portal, configure the following three settings within the Azure Conditional Access policy:

1. **Users and groups:** Select "All users" to include every user in your Azure AD tenant.
2. **Cloud apps or actions:** Select "Azure portal" to specify that this policy applies only when users access the Azure portal.
3. **Grant controls:** Choose "Grant access" and then select "Require multi-factor authentication" to enforce MFA for all access attempts.

**Explanation:** The question requires a conditional access policy that applies MFA to *all users* accessing the *Azure portal*. The three selected settings directly address these requirements: "Users and groups" defines who the policy applies to, "Cloud apps or actions" specifies the target application, and "Grant controls" determines the access requirements.

**Why other options are incorrect:** The provided images and discussion don't offer alternative options to evaluate. However, any option that excluded users, didn't target the Azure portal specifically, or failed to enforce MFA would be incorrect. The discussion notes that including exclusions (like emergency accounts) *is possible* but not required to answer the core question. Choosing "All cloud apps" (as mentioned in the discussion) would be too broad for this specific scenario.

**Note on disagreement:** While the provided solution is generally accepted, the discussion shows minor variations in implementation details, particularly regarding the inclusion of exclusions for emergency accounts. The core elements remain consistent.
247
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38360-exam-az-104-topic-3-question-3-discussion/) You have an Azure subscription that includes data in the following locations: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014300004.png) You plan to export data by using an Azure import/export job named Export1. You need to identify the data that can be exported by using Export1. Which data should you identify? A. DB1 B. container1 C. share1 D. Table1 **
** B. container1

The correct answer is B because the Azure Import/Export service primarily supports exporting data from Azure Blob storage. `container1` represents a blob container, which is a valid storage type for export. The discussion clearly indicates that while Azure Files (which `share1` likely represents) supports import, it does *not* support export. The other options, DB1 and Table1, are not storage types directly supported by the Azure Import/Export service. Some users in the discussion report Azure Files not working for export, further solidifying blob storage as the reliable option.

**Why other options are incorrect:**

* **A. DB1:** This likely refers to a database (e.g., SQL Database). Databases are not directly exported via the Azure Import/Export service; data would need to be extracted and then uploaded to blob storage first.
* **C. share1:** This likely refers to an Azure file share. The discussion shows conflicting evidence regarding Azure file share export functionality, but the majority consensus and Microsoft documentation confirm that Azure Import/Export does *not* support exporting from Azure file shares.
* **D. Table1:** This could refer to a table in a database or another data store. As with DB1, direct export is not supported; the data would need to be staged in blob storage first.
248
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38361-exam-az-104-topic-3-question-17-discussion/) You have an on-premises file server named Server1 that runs Windows Server 2016. You have an Azure subscription that contains an Azure file share. You deploy an Azure File Sync Storage Sync Service, and you create a sync group. You need to synchronize files from Server1 to Azure. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0017200001.png) **
** The correct order of actions to synchronize files from Server1 to Azure using Azure File Sync is:

1. **Install the Azure File Sync agent on Server1:** The agent is necessary for Server1 to communicate with and synchronize with the Azure File Sync service. Without it, synchronization is impossible.
2. **Register Server1 with the Storage Sync Service:** This establishes the trust relationship between your on-premises server and the Azure service, allowing the service to manage and monitor the synchronization.
3. **Add a server endpoint:** This step creates the endpoint on Server1 that will be included in the sync group. A cloud endpoint representing the Azure file share is also created within the same sync group, establishing the source (Server1) and destination (Azure file share) for the synchronization.

The image in the question shows these steps as options, possibly with slightly different phrasing; the sequence and core meaning of the steps are the key to the correct response.

**Why other options are incorrect:** The order of these steps is crucial. You cannot register the server or add a server endpoint before the agent is installed, because the server cannot communicate with the Azure File Sync service until then. Attempting to perform these steps out of sequence will result in errors. There is no disagreement in the provided discussion regarding the sequence of these steps.
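A hedged sketch of steps 2 and 3 with the Az.StorageSync module, run on Server1 after the agent MSI is installed; the service, sync group, endpoint, and path names are assumptions:

```powershell
# Step 2: register this server with the Storage Sync Service.
$server = Register-AzStorageSyncServer -ResourceGroupName "RG1" `
    -StorageSyncServiceName "SyncService1"

# Step 3: add the server endpoint (local path to sync) to the sync group.
New-AzStorageSyncServerEndpoint -ResourceGroupName "RG1" `
    -StorageSyncServiceName "SyncService1" -SyncGroupName "SyncGroup1" `
    -Name "Server1Endpoint" -ServerResourceId $server.ResourceId `
    -ServerLocalPath "D:\Data"
```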
249
[View Question](https://www.examtopics.com/discussions/databricks/view/38362-exam-az-104-topic-3-question-21-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table.

| Resource | Location | Type |
|---|---|---|
| VM1 | East US | Virtual Machine |
| File1 | West Europe | File Share |
| SQL1 | West US | SQL Database |
| Storage1 | West US | Storage Account |

In storage1, you create a blob container named blob1 and a file share named share1. Which resources can be backed up to Vault1 (East US) and Vault2 (West US)? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
**Box 1 (Vault1 - East US): VM1 only**

Only VM1 can be backed up to Vault1. To back up a VM, the vault must be in the same region as the VM; VM1 is in East US, matching Vault1's location. File1 and SQL1 are in different regions and therefore cannot be backed up to Vault1. The provided text also explicitly states that blobs cannot be backed up to Recovery Services vaults.

**Box 2 (Vault2 - West US): Share1 only**

Only Share1 can be backed up to Vault2. Storage1 (containing Share1) and Vault2 are both in the West US region; a storage account must be in the same region as the vault for its resources to be backed up. The provided text also notes that only VMs and file shares are eligible for backup in this context.

There is some disagreement in the discussion regarding exactly which resources can be backed up. The answer above focuses on regional limitations and the backup-eligible resource types, while the discussion suggests other limitations not explicitly covered in the provided material. This answer reflects the most supported interpretation based on the question, image, and documentation links.
250
[View Question](https://www.examtopics.com/discussions/databricks/view/38363-exam-az-104-topic-3-question-22-discussion/) You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data? A. a virtual machine B. an Azure Cosmos DB database C. Azure File Storage D. the Azure File Sync Storage Sync Service
The correct answer is **C. Azure File Storage**. The Azure Import/Export service specifically supports importing large amounts of data to Azure Blob storage and Azure Files. The discussion notes that the question may appear in multiple versions and that Azure Blob Storage is also a valid answer. Given that the question stem specifically refers to 5 TB of data and that an Azure Files share's maximum size is 5 TB by default, Azure File Storage is a strong choice. Why other options are incorrect: * **A. a virtual machine:** Azure Import/Export doesn't import data directly into a VM. Data would need to be transferred *to* the VM after being imported into a storage service (such as Azure Files or Blob storage). * **B. an Azure Cosmos DB database:** Azure Cosmos DB is a NoSQL database; the Import/Export service isn't designed to populate it directly. Data would first be imported into a storage service, then processed and loaded into Cosmos DB. * **D. the Azure File Sync Storage Sync Service:** This service synchronizes files between servers and Azure; it does not ingest large datasets shipped on physical drives. Note: The discussion highlights that Azure Blob Storage is also a valid answer in some versions of this question. The best answer depends on the exact phrasing of the question in the exam.
251
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38367-exam-az-104-topic-4-question-33-discussion/) HOTSPOT - You have a pay-as-you-go Azure subscription that contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027000001.png) You create the budget shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027100001.jpg) The AG1 action group contains a user named [email protected] only. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027200001.jpg) **
** **Box 1:** VM1 and VM2 continue to run. **Box 2:** One email notification will be sent each month. **Explanation:** * **Box 1:** The budget is scoped to Resource Group RG1, which only contains VM1. While the budget alert triggers when the 50% threshold (500 Euros) is reached, this only generates a notification. It does *not* automatically stop or affect VM1 or any other resources. Therefore, both VM1 and VM2 continue to run. * **Box 2:** VM1 costs 20 Euros/day. The 50% threshold of the 1000 Euro budget (500 Euros) will be reached in 25 days (500 Euros / 20 Euros/day = 25 days). Since this is within a month, one email notification will be sent to the user specified in the AG1 action group. The 70% and 100% thresholds will not be reached within a month and do not trigger email actions according to the provided information. **Why other options are incorrect:** The question is a hotspot, and the provided answer is the only correct option based on the provided information. There's no mention of actions that would stop VMs or trigger additional notifications beyond what is detailed in the explanation. The question's focus is on interpreting the budget alert settings and their consequences, not the broader Azure VM management capabilities. **Note:** The provided discussion shows consensus on the answer.
252
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38369-exam-az-104-topic-4-question-36-discussion/) You have an Azure Active Directory (Azure AD) tenant named adatum.com that contains the users shown in the following table. | User | Role | |-------|--------------------------| | User1 | Global Administrator | | User2 | Device Administrator | | User3 | User | | User4 | User | Adatum.com has the following configurations: * Users may join devices to Azure AD is set to User1. * Additional local administrators on Azure AD joined devices is set to None. You deploy Windows 10 to a computer named Computer1. User1 joins Computer1 to adatum.com. You need to identify the local Administrator group membership on Computer1. Which users are members of the local Administrators group? A. User1 only B. User2 only C. User1 and User2 only D. User1, User2, and User3 only E. User1, User2, User3, and User4 **
C. User1 and User2 only. **Explanation:** When a user joins a Windows 10 device to Azure AD, the following accounts are automatically added to the local Administrators group: * The user who performed the Azure AD join (User1 in this case). * Users assigned the Azure AD Global Administrator role (User1). * Users assigned the Azure AD Device Administrator role (User2). Since "Additional local administrators on Azure AD joined devices" is set to None, no other users are added. Therefore, only User1 and User2 are members of the local Administrators group on Computer1. **Why other options are incorrect:** * **A. User1 only:** Incorrect because User2, who holds the Device Administrator role, is also added automatically. * **B. User2 only:** Incorrect because User1, who performed the join and is a Global Administrator, is also added automatically. * **D. User1, User2, and User3 only:** Incorrect because User3 has no role that grants local admin access. * **E. User1, User2, User3, and User4:** Incorrect because neither User3 nor User4 has a role that grants local admin access. **Note:** There is some disagreement in the discussion; some participants incorrectly believe only the user performing the join is added to the local Administrators group. The answer above is based on the documented behavior of Azure AD join and the given configuration.
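To verify the result on Computer1 itself, you can list the members of the local group from an elevated command prompt (directory-role members on Azure AD-joined devices typically appear as SIDs rather than friendly names):

```
net localgroup administrators
```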
253
[View Question](https://www.examtopics.com/discussions/databricks/view/38372-exam-az-104-topic-5-question-37-discussion/) You have the Azure virtual machines shown in the following table. (Image contains a table of VMs, but the content is not provided in the text. This is important missing context). A DNS service is installed on VM1. You configure the DNS servers settings for each virtual network as shown in the following exhibit. (Image shows a table depicting VNET configurations, but the content is not provided in the text. This is important missing context). You need to ensure that all the virtual machines can resolve DNS names by using the DNS service on VM1. What should you do? A. Configure a conditional forwarder on VM1 B. Add service endpoints on VNET1 C. Add service endpoints on VNET2 and VNET3 D. Configure peering between VNET1, VNET2, and VNET3
D. Configure peering between VNET1, VNET2, and VNET3. Virtual network peering allows VMs in different virtual networks to communicate as if they were on the same network. Since VM1 (hosting the DNS service) is in VNET1, and the other VMs are in VNET2 and VNET3, peering between all three VNETs enables all VMs to reach VM1 and resolve DNS names. Why other options are incorrect: * **A. Configure a conditional forwarder on VM1:** A conditional forwarder is used when resolving names in a different DNS domain. This problem involves VMs within the same Azure environment, needing to reach a DNS server within their environment. A conditional forwarder is not needed in this scenario. * **B. Add service endpoints on VNET1:** Service endpoints enhance security but don't directly solve the network connectivity issue required for the VMs to reach the DNS server on VM1 in a different VNET. * **C. Add service endpoints on VNET2 and VNET3:** Similar to option B, service endpoints do not address the fundamental networking challenge of VMs on different VNETs needing to reach each other. Note: The provided text lacks the crucial visual information (the tables showing VM and VNET configurations). The answer relies on the general understanding of Azure VNET peering and DNS functionality. The accuracy of the answer depends entirely on the assumed content of the missing image data. Without the images, a definitive answer cannot be given.
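For reference, hub-style peering from VNET1 to the other networks could be created with the Azure CLI roughly as follows; peering must be created in both directions, and the resource group name here is a placeholder:

```
az network vnet peering create --resource-group RG1 --vnet-name VNET1 \
  --name VNET1-to-VNET2 --remote-vnet VNET2 --allow-vnet-access
az network vnet peering create --resource-group RG1 --vnet-name VNET2 \
  --name VNET2-to-VNET1 --remote-vnet VNET1 --allow-vnet-access
# Repeat the same pair of commands for VNET1 <-> VNET3
```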
254
[View Question](https://www.examtopics.com/discussions/databricks/view/38402-exam-az-104-topic-4-question-12-discussion/) You have an Azure Resource Manager template named Template1 that is used to deploy an Azure virtual machine. Template1 contains the following text: ``` { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "myVMName": { "type": "string", "metadata": { "description": "Name of the Virtual Machine" } } }, "variables": { "location": "westeurope" }, "resources": [ { "apiVersion": "2021-03-01", "name": "[parameters('myVMName')]", "type": "Microsoft.Compute/virtualMachines", "location": "[variables('location')]", "properties": { "hardwareProfile": { "vmSize": "Standard_D2_v2" }, } } ] } ``` The variables section in Template1 contains the following text: `"location": "westeurope"` The resources section in Template1 contains the following text: (The image shows the `resources` section provided above.) You need to deploy the virtual machine to the West US location by using Template1. What should you do? A. Modify the location in the resources section to westus B. Select West US during the deployment C. Modify the location in the variables section to westus
The correct answer is **A. Modify the location in the resources section to westus**. The Azure Resource Manager (ARM) template's `resources` section defines the location of the virtual machine. Changing `"location": "[variables('location')]"` to `"location": "westus"` in the `resources` section places the VM in the desired region. Why other options are incorrect: * **B. Select West US during the deployment:** The template hard-codes the location via a variable rather than exposing it as a parameter, so selecting a region at deployment time does not change what the template deploys; the question asks what should be done to the template itself. * **C. Modify the location in the variables section to westus:** Because the `resources` section references `[variables('location')]`, changing the variable's value to `westus` would in fact also deploy the VM to West US; the accepted answer nonetheless favors editing the `resources` section directly, and the discussion disputes this option for exactly that reason. Note: The discussion shows some disagreement on the best practice of using parameters vs. variables for location. The question focuses on the immediate change needed to deploy to West US given the existing template structure; the optimal practice is to parameterize the location for greater flexibility.
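As a concrete illustration of answer A, the edited `resources` entry would look like the snippet below (this also drops the stray trailing comma from the quoted template). The deployment command that follows assumes a hypothetical resource group named RG1:

```
"resources": [
  {
    "apiVersion": "2021-03-01",
    "name": "[parameters('myVMName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "location": "westus",
    "properties": {
      "hardwareProfile": { "vmSize": "Standard_D2_v2" }
    }
  }
]
```

```
az deployment group create --resource-group RG1 --template-file Template1.json --parameters myVMName=VM1
```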
255
[View Question](https://www.examtopics.com/discussions/databricks/view/38403-exam-az-104-topic-4-question-11-discussion/) DRAG DROP - You onboard 10 Azure virtual machines to Azure Automation State Configuration. You need to use Azure Automation State Configuration to manage the ongoing consistency of the virtual machine configurations. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0024500001.jpg) ![Image](https://img.examtopics.com/az-104/image916.png)
The correct sequence of actions is: 1. **Upload a configuration to Azure Automation State Configuration:** This involves creating a Desired State Configuration (DSC) configuration file that defines the desired state for your VMs and uploading it to your Azure Automation account. 2. **Compile a configuration into a node configuration:** Azure Automation compiles the DSC configuration into a node configuration, making it ready to be applied to your target VMs. 3. **Check the compliance status of the node:** After applying the node configuration to your VMs, you need to check their compliance status to verify if they meet the defined desired state. **Why other steps are not included in the initial three:** While steps like registering VMs, specifying configuration modes, and assigning node configurations are crucial parts of the overall process, they are not necessarily the *first three* steps you need to take to start managing the ongoing consistency of your VM configurations. The question explicitly asks for the first three actions to manage ongoing consistency; therefore, the initial setup actions are secondary. **Note:** There is some disagreement in the discussion regarding the exact order and the necessity of certain steps. Some users suggest additional steps as being necessary initially. However, the provided answer reflects a common and valid approach based on the understanding that the question requires focusing on immediately establishing consistency.
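A minimal Az PowerShell sketch of those three actions, assuming a hypothetical Automation account `Automation1` in resource group `RG1` and a configuration script `WebServerConfig.ps1`:

```
# 1. Upload (import) the DSC configuration to Azure Automation State Configuration
Import-AzAutomationDscConfiguration -ResourceGroupName "RG1" -AutomationAccountName "Automation1" `
    -SourcePath "C:\dsc\WebServerConfig.ps1" -Published

# 2. Compile the configuration into a node configuration
Start-AzAutomationDscCompilationJob -ResourceGroupName "RG1" -AutomationAccountName "Automation1" `
    -ConfigurationName "WebServerConfig"

# 3. Check the compliance status of the onboarded nodes
Get-AzAutomationDscNode -ResourceGroupName "RG1" -AutomationAccountName "Automation1"
```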
256
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38424-exam-az-104-topic-2-question-32-discussion/) You have a hybrid deployment of Azure Active Directory (Azure AD) that contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0008100001.png) You need to modify the JobTitle and UsageLocation attributes for the users. For which users can you modify the attributes from Azure AD? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The correct answer is nuanced and reflects some disagreement in the discussion. * **JobTitle:** You can only modify the JobTitle attribute for User1 and User3 from Azure AD. User 2's source of authority is Windows Server Active Directory, therefore, changes to JobTitle (and other identity/contact/job info) must be made in Windows Server Active Directory. * **UsageLocation:** You can modify the UsageLocation attribute for all users (User1, User2, and User3) from Azure AD. This is because UsageLocation is an Azure AD property, regardless of the user's source of authority. The discussion shows some disagreement regarding the UsageLocation attribute. One commenter suggests that in specific hybrid configurations (with hybrid Exchange and appropriate on-premises attributes), UsageLocation might be automatically populated, implying that direct modification might not always be necessary. Another commenter believes UsageLocation can only be modified from Azure AD for User1. However, the prevailing and generally accepted view aligns with the answer provided above. **Why other options are incorrect:** Options that only include User1 for UsageLocation are incorrect because UsageLocation is an Azure AD property and modifiable for all users within Azure AD, regardless of whether their account originates from on-premises Active Directory. Options that include User2 for JobTitle modification are also incorrect because User2's account originates from on-premises Active Directory; changes to JobTitle require modification within the on-premises system.
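As an illustration of the split, UsageLocation can be set from Azure AD for any of the users, while a synced user's JobTitle must be changed at the on-premises source. A hedged sketch using the legacy AzureAD and ActiveDirectory PowerShell modules, with hypothetical identities:

```
# Azure AD: UsageLocation is an Azure AD property, settable for all three users
Set-AzureADUser -ObjectId "user2@contoso.com" -UsageLocation "US"

# On-premises AD: JobTitle for a synced user (User2) must be changed at the source
Set-ADUser -Identity "user2" -Title "Marketing Manager"
```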
257
[View Question](https://www.examtopics.com/discussions/databricks/view/38429-exam-az-104-topic-5-question-7-discussion/) You have the Azure virtual networks shown in the following table. | Virtual Network | Address Space | |---|---| | VNet1 | 10.11.0.0/16 | | VNet2 | 10.11.0.0/17 | | VNet3 | 10.10.0.0/22 | | VNet4 | 192.168.16.0/22 | To which virtual networks can you establish a peering connection from VNet1? A. VNet2 and VNet3 only B. VNet2 only C. VNet3 and VNet4 only D. VNet2, VNet3, and VNet4
C. VNet3 and VNet4 only Explanation: Azure virtual network peering requires that the address spaces of the peered networks do not overlap. VNet1 (10.11.0.0/16) and VNet2 (10.11.0.0/17) have overlapping address spaces. Therefore, peering between VNet1 and VNet2 is not possible. VNet1 and VNet3 (10.10.0.0/22) and VNet1 and VNet4 (192.168.16.0/22) do not have overlapping address spaces, allowing peering. Why other options are incorrect: * **A. VNet2 and VNet3 only:** Incorrect because VNet1 and VNet2 have overlapping IP address spaces, preventing peering. * **B. VNet2 only:** Incorrect because VNet1 and VNet2 have overlapping IP address spaces. * **D. VNet2, VNet3, and VNet4:** Incorrect because VNet1 and VNet2 have overlapping IP address spaces. Note: The discussion section shows a consensus on the correct answer.
258
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38431-exam-az-104-topic-5-question-17-discussion/) You have an Azure subscription that contains the following resources: ✑ A virtual network that has a subnet named Subnet1 ✑ Two network security groups (NSGs) named NSG-VM1 and NSG-Subnet1 ✑ A virtual machine named VM1 that has the required Windows Server configurations to allow Remote Desktop connections NSG-Subnet1 has the default inbound security rules only. NSG-VM1 has the default inbound security rules and the following custom inbound security rule: ✑ Priority: 100 ✑ Source: Any ✑ Source port range: * ✑ Destination: * ✑ Destination port range: 3389 ✑ Protocol: UDP ✑ Action: Allow VM1 has a public IP address and is connected to Subnet1. NSG-VM1 is associated to the network interface of VM1. NSG-Subnet1 is associated to Subnet1. You need to be able to establish Remote Desktop connections from the internet to VM1. Solution: You add an inbound security rule to NSG-Subnet1 that allows connections from the Any source to the *destination for port range 3389 and uses the TCP protocol. You remove NSG-VM1 from the network interface of VM1. Does this meet the goal? A. Yes B. No **
** A. Yes The solution meets the goal. Remote Desktop uses TCP port 3389. By adding an inbound security rule to NSG-Subnet1 that allows TCP traffic on port 3389 from any source, and removing NSG-VM1 (which only allowed UDP traffic on port 3389), incoming RDP connections will be permitted. The network traffic will be processed by the subnet's NSG, allowing the connection. **Why other options are incorrect:** B. No is incorrect because the proposed solution correctly configures the network security to allow RDP connections. The removal of the conflicting NSG-VM1 and the addition of the correct rule to NSG-Subnet1 directly addresses the issue. **Note:** The discussion shows some initial disagreement regarding the correct answer. However, the consensus ultimately supports "Yes" as the correct answer based on the understanding that the correct TCP port and NSG configuration are implemented and that only one NSG should apply. There's some confusion introduced by the existence of potentially other, similar questions with different, potentially correct answers, highlighting the importance of carefully considering the specific scenario detailed in each question.
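A rough CLI equivalent of the solution — the resource group and NIC names are placeholders, and `--remove networkSecurityGroup` is the generic update syntax for clearing the NIC's NSG association:

```
# Allow RDP (TCP 3389) from the internet in the subnet-level NSG
az network nsg rule create --resource-group RG1 --nsg-name NSG-Subnet1 \
  --name Allow-RDP --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes '*' --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges 3389

# Remove NSG-VM1 from VM1's network interface
az network nic update --resource-group RG1 --name VM1-nic --remove networkSecurityGroup
```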
259
[View Question](https://www.examtopics.com/discussions/databricks/view/38432-exam-az-104-topic-5-question-20-discussion/) You have a virtual network named VNet1 that has the configuration shown in the following exhibit. (Image shows a VNet with address space 10.2.0.0/16 and a subnet named 'default' with address range 10.2.0.0/24). Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. To assign the IP address 10.2.1.0/24 to a VM in VNet1, what two actions are needed? Box 1: [Dropdown options not provided in text, but implied to be "Add an address space" and "Add a subnet"] Box 2: [Dropdown options not provided in text, but implied to be "Add an address space" and "Add a subnet"]
Box 1: **Add a subnet** Box 2: **Add a subnet** Explanation: The existing VNet1 already has an address space (10.2.0.0/16) that encompasses the desired IP range (10.2.1.0/24). However, the 10.2.1.0/24 range is not part of the existing 'default' subnet (10.2.0.0/24). To use 10.2.1.0/24, a new subnet needs to be created within the existing address space. Adding another address space is unnecessary. There is some disagreement in the discussion regarding this; some users initially thought an additional address space was needed, but later corrected themselves to recognize the need for a new subnet. Why other options are incorrect: Adding an address space is not necessary because the IP address range 10.2.1.0/24 falls within the already existing address space 10.2.0.0/16. Only adding a new subnet will allow assignment of the IP address range.
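The corresponding CLI command for the required action — adding a subnet that carves 10.2.1.0/24 out of the existing 10.2.0.0/16 address space — might look like this (the resource group and subnet name are placeholders):

```
az network vnet subnet create --resource-group RG1 --vnet-name VNet1 \
  --name Subnet2 --address-prefixes 10.2.1.0/24
```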
260
[View Question](https://www.examtopics.com/discussions/databricks/view/38433-exam-az-104-topic-5-question-21-discussion/) **** You have an Azure subscription that contains a virtual network named VNET1. VNET1 contains the subnets shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0037800001.png) Each virtual machine uses a static IP address. You need to create network security groups (NSGs) to meet following requirements: ✑ Allow web requests from the internet to VM3, VM4, VM5, and VM6. ✑ Allow all connections between VM1 and VM2. ✑ Allow Remote Desktop connections to VM1. ✑ Prevent all other network traffic to VNET1. What is the minimum number of NSGs you should create? A. 1 B. 3 C. 4 D. 12 **
** A. 1 You only need to create one NSG. NSGs can be associated with subnets, and a single NSG can be applied to multiple subnets. The requirements can be met by configuring rules within a single NSG: * **Allow web requests from the internet to VM3, VM4, VM5, and VM6:** Add inbound rules to the NSG allowing TCP port 80 (HTTP) traffic to the specific static IP addresses of VM3, VM4, VM5, and VM6. * **Allow all connections between VM1 and VM2:** This is allowed by default within the same VNet without requiring any NSG configuration. Communication within the same VNet is permitted unless explicitly blocked by an NSG rule. * **Allow Remote Desktop connections to VM1:** Add an inbound rule to the NSG allowing TCP port 3389 (RDP) to the static IP address of VM1. * **Prevent all other network traffic to VNET1:** This is implicitly handled by the default deny-all rule inherent in NSGs. Unless explicitly allowed, all other traffic will be blocked. **Why other options are incorrect:** * **B, C, D:** Creating multiple NSGs is unnecessary and inefficient for this scenario. A single NSG can effectively manage all the required rules. **Note:** The discussion section reveals disagreement on the correct answer. Some commenters believe that multiple NSGs are necessary to prevent unintended access, while the highest-voted answer asserts that a single NSG is sufficient. The answer provided above reflects the most efficient and commonly accepted approach using a single NSG with properly configured rules.
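Sketching the single-NSG approach in the CLI — the resource group and the VMs' static IP addresses below are invented for illustration, since the question's address table is only in an image:

```
az network nsg create --resource-group RG1 --name NSG1

# Web traffic from the internet to VM3-VM6 (hypothetical IPs)
az network nsg rule create --resource-group RG1 --nsg-name NSG1 --name Allow-Web \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --destination-address-prefixes 10.0.1.7 10.0.1.8 10.0.1.9 10.0.1.10 \
  --destination-port-ranges 80 443

# RDP to VM1 only (hypothetical IP)
az network nsg rule create --resource-group RG1 --nsg-name NSG1 --name Allow-RDP-VM1 \
  --priority 110 --direction Inbound --access Allow --protocol Tcp \
  --destination-address-prefixes 10.0.1.5 --destination-port-ranges 3389
```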
261
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38434-exam-az-104-topic-2-question-29-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.onmicrosoft.com. The User administrator role is assigned to a user named Admin1. An external partner has a Microsoft account that uses the [email protected] sign in. Admin1 attempts to invite the external partner to sign in to the Azure AD tenant and receives the following error message: `Unable to invite user [email protected] "Generic authorization exception.` You need to ensure that Admin1 can invite the external partner to sign in to the Azure AD tenant. What should you do? A. From the Users settings blade, modify the External collaboration settings. B. From the Custom domain names blade, add a custom domain. C. From the Organizational relationships blade, add an identity provider. D. From the Roles and administrators blade, assign the Security administrator role to Admin1. **
** A. From the Users settings blade, modify the External collaboration settings. The error message "Unable to invite user… Generic authorization exception" indicates a problem with the permissions or settings related to inviting external users. Modifying the External collaboration settings within the Users settings blade allows you to configure who can invite guest users and what their access rights will be. This directly addresses the issue preventing Admin1 from inviting the external partner. **Why other options are incorrect:** * **B. From the Custom domain names blade, add a custom domain:** Adding a custom domain affects the tenant's branding and email addresses, but it doesn't directly solve the problem of inviting external users. * **C. From the Organizational relationships blade, add an identity provider:** This option is related to federated identities and is not relevant to inviting a guest user with a Microsoft account. * **D. From the Roles and administrators blade, assign the Security administrator role to Admin1:** While the Security Administrator role has broader permissions, the User Administrator role should already have sufficient privileges to invite guest users. The issue lies in the configuration of external collaboration settings, not the Admin1's role. **Note:** There is some disagreement in the discussion regarding the correct answer. While the overwhelming majority support option A, one user suggests option D. The provided explanation prioritizes the most commonly supported and logically sound solution based on the error message and common Azure AD functionality.
262
[View Question](https://www.examtopics.com/discussions/databricks/view/38446-exam-az-104-topic-5-question-42-discussion/) HOTSPOT - You have peering configured as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0041300001.jpg) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0041300002.jpg) Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0041400001.jpg)
**Box 1:** vNET6 only **Box 2:** delete peering1 **Explanation:** The image shows that peering between VNET6 and VNET1/VNET2 is in a disconnected state. Therefore, only communication within VNET6 itself is possible. To resolve the disconnected peering with VNET1, deleting `peering1` is necessary. The provided discussion explains that you cannot modify address spaces in peered VNets; you must delete the peering, make the address space changes, and then recreate the peering connection. **Why other options are incorrect:** The question is a hotspot requiring specific selections. There are no other options explicitly given in the provided text; the correct answers are derived from interpreting the diagram and the suggested answer. Note: The discussion shows some consensus on the answer. There is no significant conflicting opinion presented.
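The disconnected peering would be removed, and later recreated after the address-space change, roughly as follows (the resource group name is a placeholder):

```
az network vnet peering delete --resource-group RG1 --vnet-name VNET6 --name peering1

# After modifying the address space, recreate the peering
az network vnet peering create --resource-group RG1 --vnet-name VNET6 \
  --name peering1 --remote-vnet VNET1 --allow-vnet-access
```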
263
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38454-exam-az-104-topic-5-question-96-discussion/) You have a virtual network named VNET1 that contains the subnets shown in the following table: |Subnet Name|Subnet Address| |---|---| |Subnet1|10.0.1.0/24| |Subnet2|10.0.2.0/24| You have Azure virtual machines that have the network configurations shown in the following table: |VM Name|NIC Name|Subnet|NSG|IP Address| |---|---|---|---|---| |VM1|NIC1|Subnet1|NSG2|10.0.1.4| |VM2|NIC2|Subnet2|NSG1|10.0.2.4| For NSG1, you create the inbound security rule shown in the following table: |Name|Priority|Protocol|Source|Source Port|Destination|Destination Port|Action| |---|---|---|---|---|---|---|---| |Rule1|100|TCP|10.0.2.0/24|3389|10.0.1.0/24|3389|Allow| For NSG2, you create the inbound security rule shown in the following table: |Name|Priority|Protocol|Source|Source Port|Destination|Destination Port|Action| |---|---|---|---|---|---|---|---| |Rule2|110|TCP|10.0.1.0/24|3389|10.0.1.0/24|3389|Deny| For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. VM1 can access VM2 on port 3389. 2. VM2 can access VM1 on port 3389. 3. VM1 can access itself on port 3389.
No, No, No. VM1 (10.0.1.4) is on Subnet1 and has NSG2 applied to its NIC. NSG2's Rule2 denies inbound TCP traffic on port 3389 from within Subnet1 (10.0.1.0/24). This means VM1 cannot access itself on port 3389, thus statement 3 is No. VM2 (10.0.2.4) is on Subnet2 and has NSG1 applied to the subnet. NSG1's Rule1 allows inbound TCP traffic on port 3389 from Subnet2 (10.0.2.0/24) to Subnet1 (10.0.1.0/24). However, even though the traffic passes NSG1, it is then blocked by NSG2 on VM1's NIC. Therefore, VM2 cannot access VM1 on port 3389, making statement 2 No. Regarding VM1 accessing VM2, even though NSG2 on VM1's NIC does not explicitly block outbound traffic, the communication attempt is blocked because VM2's NSG1 only allows inbound traffic from its own subnet. VM1 is not in that subnet. Therefore, statement 1 is No. The discussion shows disagreement on the interpretation of how Network Security Groups (NSGs) interact when one is applied to a subnet and the other to a NIC. Some users believe NSGs are evaluated independently; other users believe that subnet NSGs have precedence over NIC NSGs. The answer provided reflects the consensus that the NSGs are evaluated independently and in the order of the path of the connection.
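When reasoning about stacked subnet-level and NIC-level NSGs, it can help to query the effective security rules Azure actually applies to a NIC; the VM must be running for this to return results, and the resource group name here is assumed:

```
az network nic list-effective-nsg --resource-group RG1 --name NIC1 --output table
```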
264
[View Question](https://www.examtopics.com/discussions/databricks/view/38457-exam-az-104-topic-2-question-21-discussion/) DRAG DROP - You have an Azure subscription that is used by four departments in your company. The subscription contains 10 resource groups. Each department uses resources in several resource groups. You need to send a report to the finance department. The report must detail the costs for each department. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0006100001.png)
The correct sequence of actions is: 1. **Assign a tag to each resource:** Each resource (or resource group) needs to be tagged with the corresponding department. This allows for grouping and filtering based on department in the cost analysis. Tags are key-value pairs; the key might be "Department" and the value could be "Sales," "Marketing," etc. 2. **From the Cost analysis blade, filter the view by tag:** After tagging, you use the Cost Analysis blade in the Azure portal to filter costs based on the department tags assigned in the previous step. This isolates the cost for each department. 3. **Download the usage report:** Once the cost view is filtered by department, download the report (likely as a CSV file) for submission to the finance department. Other Options (Why they are incorrect or not in the right order): The order is crucial. You must tag resources *before* you can filter by those tags. Downloading the report before filtering would provide an unfiltered, unusable report. Note: The provided solution is based on the text and image from the provided URL. There's no explicit disagreement in the discussion, only the suggested answer.
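Step 1 can be scripted; for example, tagging an individual resource with a Department key via the CLI (the resource ID and tag value are placeholders — note that `az resource tag` replaces any existing tags on the resource unless incremental options are used):

```
az resource tag --ids "<resource-id>" --tags Department=Sales
```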
265
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38548-exam-az-104-topic-4-question-10-discussion/) HOTSPOT - You have an Azure Kubernetes Service (AKS) cluster named AKS1 and a computer named Computer1 that runs Windows 10. Computer1 that has the Azure CLI installed. You need to install the kubectl client on Computer1. Which command should you run? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The command to install the kubectl client on Computer1 is `az aks install-cli`. The Azure CLI command `az aks install-cli` specifically installs the kubectl command-line tool. The discussion consistently points to this as the correct solution. Using `Install-Module` (a PowerShell command) is incorrect in this context because the question specifies the use of the Azure CLI. **Why other options are incorrect:** The discussion doesn't provide alternative options, but implicitly indicates that any command not starting with `az` (the Azure CLI prefix) and not specifically designed for installing kubectl within the Azure CLI framework would be incorrect. `Install-Module` is explicitly mentioned as incorrect because it's a PowerShell command, not an Azure CLI command. **Note:** While the discussion overwhelmingly supports `az aks install-cli`, one comment mentions that Kubernetes is no longer part of the AZ-104 exam at the time of their comment. This suggests that the relevance of this question may depend on the exam version.
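Typical follow-on usage after installing the client — fetch credentials for AKS1, then confirm connectivity (the resource group name is an assumption):

```
az aks install-cli
az aks get-credentials --resource-group RG1 --name AKS1
kubectl get nodes
```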
266
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38599-exam-az-104-topic-5-question-5-discussion/) HOTSPOT - You have Azure virtual machines that run Windows Server 2019 and are configured as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035300001.png) *(Image shows a table with two VMs, VM1 and VM2, each with a private IP address and a different DNS suffix. VM1 has a DNS suffix of adatum.com, VM2 has a different DNS suffix.)* You create a private Azure DNS zone named adatum.com. You configure the adatum.com zone to allow auto-registration from VNET1. Which A records will be added to the adatum.com zone for each virtual machine? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035400001.jpg) *(Image shows a hotspot answer area with two boxes, one for VM1 and one for VM2.)* **
** The A records added to the adatum.com zone for both VM1 and VM2 will be their respective private IP addresses. For VM1 this is 10.1.0.4, and for VM2 this is 10.1.0.5. **Explanation:** Auto-registration in a private Azure DNS zone uses the private IP addresses of virtual machines within linked virtual networks. The DNS suffix configured on the VMs themselves (adatum.com for VM1, a different suffix for VM2) does not affect this. Because both VMs are in VNET1, and VNET1 is linked to the adatum.com private DNS zone, both VMs will register their private IPs as A records in the adatum.com zone. **Why other options are incorrect:** There is no indication in the provided material that anything other than the private IP address would be registered. The discussion does contain some slightly ambiguous statements, but the consensus and supported documentation point to the private IP address as the only value added. **Note:** While the discussion shows a general consensus, there's some initial confusion in the discussion about the effect of the differing DNS suffixes. However, the final resolution is clear: only the private IP addresses are used for registration in this scenario.
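The auto-registration behavior described above comes from linking the zone to VNET1 with registration enabled, e.g. (resource group and link names are placeholders):

```
az network private-dns zone create --resource-group RG1 --name adatum.com
az network private-dns link vnet create --resource-group RG1 --zone-name adatum.com \
  --name VNET1-link --virtual-network VNET1 --registration-enabled true
```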
267
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38632-exam-az-104-topic-3-question-12-discussion/) You have an Azure File sync group that has the endpoints shown in the following table. | Endpoint | Server | Type | Cloud Tiering | |---|---|---|---| | Endpoint1 | Server1 | Cloud | No | | Endpoint2 | Server2 | Server | No | | Endpoint3 | Server3 | Server | Yes | Cloud tiering is enabled for Endpoint3. You add a file named File1 to Endpoint1 and a file named File2 to Endpoint2. On which endpoints will File1 and File2 be available within 24 hours of adding the files? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** * **File1:** Endpoint1 only. Since File1 is added to a cloud endpoint (Endpoint1) and cloud tiering is not enabled, the file will only be available on Endpoint1 within 24 hours. Azure File Sync's synchronization isn't instantaneous; there's a scan/sync process. For cloud endpoints, this takes up to 24 hours. * **File2:** Endpoint1, Endpoint2, and Endpoint3. File2, added to an on-premises server endpoint (Endpoint2), will be synchronized to other endpoints in the sync group more quickly than from a cloud endpoint. The on-premises servers are automatically synced. Therefore, within 24 hours, File2 will be available on all three endpoints. **Why other options are incorrect:** The discussion highlights a potential discrepancy between the original exam question and the version presented. The user "mlantonis" suggests that the exam question was modified to ask for availability "after 24 hours" instead of "within 24 hours." If the question was changed to "after 24 hours," the answer would be that File1 and File2 would be available on all endpoints (Endpoint1, Endpoint2, and Endpoint3). My answer reflects the original phrasing of "within 24 hours," acknowledging the conflicting information provided.
268
[View Question](https://www.examtopics.com/discussions/databricks/view/38671-exam-az-104-topic-4-question-22-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0025600001.png) VM1 connects to VNET1. You need to connect VM1 to VNET2. Solution: You turn off VM1, and then you add a new network interface to VM1. Does this meet the goal? A. Yes B. No
B. No This solution does not meet the goal. While you can add multiple network interfaces to a VM, a VM is inherently tied to a single VNet during creation. Adding a new network interface to VM1, even after turning it off, will not move it to VNET2. The VM remains associated with VNET1. To connect VM1 to VNET2, you must delete VM1 and recreate it within VNET2, specifying the desired network interface during the recreation process. The suggested answer reflects the consensus in the discussion. The discussion highlights that changing the VNet of an existing VM is not possible; you must recreate the VM in the target VNet.
269
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38672-exam-az-104-topic-4-question-21-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0025400001.png) VM1 connects to VNET1. You need to connect VM1 to VNET2. Solution: You delete VM1. You recreate VM1, and then you create a new network interface for VM1 and connect it to VNET2. Does this meet the goal? A. Yes B. No **
** A. Yes The suggested solution correctly addresses the need to connect VM1 to VNET2. Because you cannot directly change the VNET associated with an Azure VM after creation, deleting and recreating the VM with a new network interface attached to VNET2 is the necessary approach. **Why other options are incorrect:** B. No: This is incorrect because the proposed solution effectively moves VM1 to VNET2. The method, while disruptive, is a valid and often necessary solution within Azure's architecture limitations. **Note:** While the provided solution is correct based on the limitations described in Azure's VM networking, there might be alternative approaches depending on specific Azure services and configurations not mentioned in the provided text. The discussion highlights that changing the VNET of an existing VM is not directly supported.
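A sketch of the delete-and-recreate flow while preserving the OS disk — the disk, subnet, and resource group names are placeholders:

```
# Delete the VM; with the default delete options, the managed OS disk is retained
az vm delete --resource-group RG1 --name VM1 --yes

# Recreate VM1 from the retained OS disk, attached to VNET2
az vm create --resource-group RG1 --name VM1 \
  --attach-os-disk VM1_OsDisk --os-type windows \
  --vnet-name VNET2 --subnet default
```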
270
[View Question](https://www.examtopics.com/discussions/databricks/view/38674-exam-az-104-topic-4-question-20-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0025300001.png) VM1 connects to VNET1. You need to connect VM1 to VNET2. Solution: You move VM1 to RG2, and then you add a new network interface to VM1. Does this meet the goal? A. Yes B. No
B. No Explanation: The provided solution of moving VM1 to RG2 and adding a new network interface does *not* connect VM1 to VNET2. You cannot change the VNet of an existing VM; it's a fundamental limitation. To connect VM1 to VNET2, you must delete VM1 and recreate it within RG2, associating it with a new network interface connected to VNET2 during the recreation process. The discussion highlights this as the only solution. Why other options are incorrect: A. Yes is incorrect because it's impossible to change a VM's associated VNet after creation. Adding a new network interface does not change the primary VNet association. Note: There is some disagreement in the discussion regarding the interpretation of the question's goal ("connect" vs. "move"). However, the consensus and the underlying Azure functionality confirm that changing the VNet requires VM deletion and recreation.
271
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38679-exam-az-104-topic-5-question-40-discussion/) You have an Azure subscription that contains the resources in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0041100001.png) Subnet1 is associated to VNet1. NIC1 attaches VM1 to Subnet1. You need to apply ASG1 to VM1. What should you do? A. Associate NIC1 to ASG1 B. Modify the properties of ASG1 C. Modify the properties of NSG1 **
** A. Associate NIC1 to ASG1 To apply an Application Security Group (ASG) to a virtual machine (VM), you need to associate the VM's Network Interface Card (NIC) with the ASG. The ASG acts as a logical grouping of VMs, and associating the NIC adds the VM to that group. Network Security Groups (NSGs) then use these ASGs as a criteria in their rules. Modifying the properties of the ASG or NSG won't directly apply the ASG to the VM. **Why other options are incorrect:** * **B. Modify the properties of ASG1:** Modifying the ASG's properties only changes its configuration, such as adding or removing rules, but doesn't link it to a specific VM. The VM still needs to be explicitly associated. * **C. Modify the properties of NSG1:** The NSG's rules are where you specify what traffic is allowed, but it needs to reference the ASG (which the VM needs to be a member of) to work as intended. Changing NSG rules does not directly associate the ASG with the VM. **Note:** The provided discussion shows unanimous agreement on the correct answer.
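The association is made on the NIC's IP configuration; a CLI sketch with an assumed default IP configuration name of `ipconfig1`:

```
az network nic ip-config update --resource-group RG1 --nic-name NIC1 \
  --name ipconfig1 --application-security-groups ASG1
```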
272
[View Question](https://www.examtopics.com/discussions/databricks/view/38684-exam-az-104-topic-4-question-18-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Update management blade, you click Enable. Does this meet the goal? A. Yes B. No
B. No. Enabling updates from the Update Management blade does not immediately move a VM to a different host. To move VM1 to a different host immediately, you need to redeploy it; the consensus among the discussion participants is that redeployment is the necessary action. Why other options are incorrect: A. Yes is incorrect because enabling updates in the Update Management blade only manages updates and doesn't initiate a redeployment to a new host. This is explicitly stated in multiple responses within the discussion thread, which cite the need to redeploy the VM as the correct solution.
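The action that does meet the goal is a redeploy, which moves the VM to a new host; for example (resource group name assumed):

```
az vm redeploy --resource-group RG1 --name VM1
```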
273
[View Question](https://www.examtopics.com/discussions/databricks/view/38697-exam-az-104-topic-4-question-19-discussion/) You have an Azure subscription that contains a web app named webapp1. You need to add a custom domain named www.contoso.com to webapp1. What should you do first? A. Create a DNS record B. Add a connection string C. Upload a certificate D. Stop webapp1
A. Create a DNS record. Before you can associate a custom domain (www.contoso.com) with your Azure web app (webapp1), you must first create a corresponding DNS record with your domain registrar. This record maps your custom domain to your Azure web app's IP address or a CNAME alias. The specific type of DNS record (A record or CNAME record) depends on whether it's a root domain or subdomain; for www.contoso.com, a CNAME record is generally preferred. Why other options are incorrect: * **B. Add a connection string:** Connection strings are used to configure database access for your web app; this is unrelated to adding a custom domain. * **C. Upload a certificate:** While you will likely need an SSL certificate later to secure the connection to your custom domain, creating the DNS record comes first. The DNS record is essential to direct traffic to your web app. * **D. Stop webapp1:** Stopping the web app is unnecessary and not a prerequisite for adding a custom domain. Note: The discussion shows a consensus agreement that creating a DNS record is the first step.
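If contoso.com were hosted in an Azure DNS zone, the two steps might look like this (the resource group name is assumed; other registrars expose equivalent CNAME settings):

```
# 1. Create the DNS record pointing www at the web app's default hostname
az network dns record-set cname set-record --resource-group RG1 \
  --zone-name contoso.com --record-set-name www --cname webapp1.azurewebsites.net

# 2. Bind the custom hostname to the web app
az webapp config hostname add --resource-group RG1 \
  --webapp-name webapp1 --hostname www.contoso.com
```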
274
**** [View Question](https://www.examtopics.com/discussions/databricks/view/38698-exam-az-104-topic-4-question-24-discussion/) You have an Azure subscription that contains an Azure Availability Set named WEBPROD-AS-USE2. You add 14 virtual machines to WEBPROD-AS-USE2. Based on the information provided, what is the maximum number of VMs that will be offline during an update (Box 1)? What is the maximum number of VMs that will be offline due to a single fault domain failure (Box 2)? (Note: Images depicting the Availability Set and a multiple-choice answer box are omitted here but were present in the original question.) **
Box 1: 2 Box 2: 7 **Explanation:** * **Box 1:** Azure Availability Sets distribute VMs across update domains. Only one update domain is updated at a time. With 14 VMs and 10 update domains, the VMs are distributed unevenly across the update domains, and the largest number of VMs in any single update domain determines the maximum number offline during an update. The most even distribution is 4 update domains with 2 VMs and 6 update domains with 1 VM. Therefore, the maximum number of VMs offline during an update is 2. * **Box 2:** Azure Availability Sets also distribute VMs across fault domains to protect against hardware failures. A fault domain typically represents a physical rack in a datacenter. With 14 VMs and 2 fault domains, there are 7 VMs per fault domain (14 VMs / 2 fault domains = 7 VMs per fault domain). A failure in a single fault domain (rack) therefore takes all 7 VMs in that domain offline. **Why other options are incorrect:** The question and answer only allow for the single correct pair stated above; any other number for Box 1 or Box 2 would misrepresent the distribution of VMs across update and fault domains, respectively. The user insights section confirms this answer and its reasoning; there is no disagreement reflected in the discussion provided.
275
[View Question](https://www.examtopics.com/discussions/databricks/view/38700-exam-az-104-topic-4-question-23-discussion/) You have an Azure subscription named Subscription1 that contains the quotas shown in the following table. (Image shows a table with a quota of 20 Total regional vCPUs). You deploy virtual machines to Subscription1 as shown in the following table. (Image shows VM1 with 2 vCPUs and VM20 with 16 vCPUs). You plan to deploy the virtual machines shown in the following table. (Image shows VM3 with 1 vCPU, VM4 with 4 vCPUs, and VM5 with 16 vCPUs). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Can you add the following virtual machines without exceeding the quota? * VM3 (1 vCPU) * VM4 (4 vCPUs) * VM5 (16 vCPUs)
The correct answer is Yes, No, No. * **VM3 (1 vCPU): Yes.** Adding VM3 (1 vCPU) to the existing VMs (VM1: 2 vCPUs + VM20: 16 vCPUs) results in a total of 19 vCPUs, which is within the 20 vCPU quota. * **VM4 (4 vCPUs): No.** Adding VM4 (4 vCPUs) would result in a total of 22 vCPUs (2 + 16 + 4), exceeding the 20 vCPU quota. * **VM5 (16 vCPUs): No.** Adding VM5 (16 vCPUs) would result in a total of 34 vCPUs (2 + 16 + 16), significantly exceeding the 20 vCPU quota. The quota includes both allocated and deallocated VMs. Therefore, the 16 vCPUs from the deallocated VM20 are considered in the calculation. There is agreement among users in the discussion that this is the correct answer and reasoning. One user even verified this in a lab environment. The discussion also clarifies that the quota applies to total vCPUs regardless of VM family (B-series or D-series).
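You can check current vCPU consumption against the regional quota before deploying; the region below is an assumption, since the question's region appears only in an image:

```
az vm list-usage --location westus --output table
```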
276
[View Question](https://www.examtopics.com/discussions/databricks/view/38733-exam-az-104-topic-2-question-24-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains the resource groups in the following table. | Resource Group | Location | Policy | |---|---|---| | RG1 | West Europe | Policy1 | | RG2 | North Europe | Policy2 | RG1 has a web app named WebApp1. WebApp1 is located in West Europe. You move WebApp1 to RG2. What is the effect of the move? A. The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1. B. The App Service plan for WebApp1 moves to North Europe. Policy2 applies to WebApp1. C. The App Service plan for WebApp1 remains in West Europe. Policy1 applies to WebApp1. D. The App Service plan for WebApp1 moves to North Europe. Policy1 applies to WebApp1.
A. The App Service plan for WebApp1 remains in West Europe. Policy2 applies to WebApp1. Moving a resource between resource groups within the same Azure subscription does not change its location. The web app will remain in West Europe. However, the resource group's policies apply to the resources within it. Since WebApp1 is moved to RG2, Policy2 will now govern it. Other Options: * **B and D:** These options are incorrect because moving a resource between resource groups does *not* automatically change its location (region). The web app's location remains consistent. * **C:** This option is incorrect because the resource group's policy changes when the resource is moved. WebApp1 will be subject to the policies of its new resource group (RG2), which is Policy2. Note: There is a disagreement in the discussion regarding the process of moving a web app between regions. The provided answer reflects the effect of moving within the same subscription, focusing only on the information given in the question. The discussion suggests a more complex process for moving between regions, which is outside the scope of the original question.
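The move itself can be done from the CLI; note that App Service resources have additional move restrictions (the app generally moves together with its App Service plan). A sketch using the resource group names from the question:

```
webapp_id=$(az webapp show --resource-group RG1 --name WebApp1 --query id --output tsv)
az resource move --destination-group RG2 --ids "$webapp_id"
```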
277
[View Question](https://www.examtopics.com/discussions/databricks/view/38740-exam-az-104-topic-5-question-90-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table: | VM Name | OS | Subnet | NSG | Public IP | |---------|------------|---------|---------|------------| | VM1 | Windows | Subnet1 | NSG1 | Yes | | VM2 | Windows | Subnet2 | NSG2 | Yes | VM1 and VM2 use public IP addresses. From Windows Server 2019 on VM1 and VM2, you allow inbound Remote Desktop connections. Subnet1 and Subnet2 are in a virtual network named VNET1. The subscription contains two network security groups (NSGs) named NSG1 and NSG2. NSG1 uses only the default rules. NSG2 uses the default rules and the following custom incoming rule: Priority: 100 Name: Rule1 Port: 3389 Protocol: TCP Source: Any Destination: Any Action: Allow NSG1 is associated to Subnet1. NSG2 is associated to the network interface of VM2. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. You can establish a Remote Desktop connection to VM1 from the internet. 2. You can establish a Remote Desktop connection to VM2 from the internet. 3. VM1 and VM2 can communicate with each other.
No, Yes, Yes. 1. **No:** VM1 is associated with NSG1, which only uses default rules. Default NSG rules generally deny inbound traffic unless explicitly allowed. Since there's no rule explicitly allowing RDP (port 3389) inbound, a remote desktop connection from the internet will fail. 2. **Yes:** VM2 is associated with NSG2, which includes a custom rule (Rule1) explicitly allowing inbound TCP traffic on port 3389 (RDP) from any source. Therefore, a remote desktop connection from the internet is allowed. 3. **Yes:** VM1 and VM2 are in the same virtual network (VNET1). By default, virtual machines within the same VNET can communicate with each other without needing additional network configuration, unless explicitly blocked by NSGs or other network configurations. Since neither NSG is blocking inter-VM communication within VNET1, they can communicate. Note: The discussion shows a disagreement on the answer. While the provided answer is generally accepted and logically sound based on standard Azure NSG behavior, there might be specific configurations or unusual scenarios not presented that could lead to different results. The explanation provided assumes the standard default behavior of Azure NSGs.
278
[View Question](https://www.examtopics.com/discussions/databricks/view/38767-exam-az-104-topic-4-question-42-discussion/) You create the following resources in an Azure subscription: ✑ An Azure Container Registry instance named Registry1 ✑ An Azure Kubernetes Service (AKS) cluster named Cluster1 You create a container image named App1 on your administrative workstation. You need to deploy App1 to Cluster1. What should you do first? A. Run the docker push command. B. Create an App Service plan. C. Run the az acr build command. D. Run the az aks create command.
The best answer is **A. Run the docker push command.** However, there is disagreement in the discussion regarding the correct answer. Some argue for option C. **Explanation:** To deploy the container image `App1` to `Cluster1`, you first need to push the image to a container registry that AKS can access. Option A directly addresses this by pushing the image using the `docker push` command. Before pushing, you would need to tag the image appropriately for your registry. Assuming Registry1 is properly configured and integrated with AKS, pushing the image is the initial step. **Why other options are incorrect:** * **B. Create an App Service plan:** App Service is a different Azure service for hosting web apps and is not directly related to deploying to an AKS cluster. * **C. Run the az acr build command:** While `az acr build` is a valid command to build and push an image to Azure Container Registry, it's not necessarily the *first* step. You could have already built the image (as stated in the question). The question specifies the image `App1` already exists. Using `az acr build` implies building the image, which is redundant in this context. * **D. Run the az aks create command:** This command creates the AKS cluster. The question states the cluster `Cluster1` already exists. **Note:** The discussion highlights disagreement. Some users advocate for option C, arguing that using `az acr build` automatically pushes the image, eliminating the need for a separate `docker push` command. However, the question's premise is that `App1` is *already* created. Therefore, option A is the most direct and efficient first step in the given scenario.
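Assuming Registry1's login server is `registry1.azurecr.io` (the default naming pattern) and a hypothetical image tag, the first step would look like this:

```
az acr login --name Registry1
docker tag app1 registry1.azurecr.io/app1:v1
docker push registry1.azurecr.io/app1:v1
```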
279
[View Question](https://www.examtopics.com/discussions/databricks/view/38792-exam-az-104-topic-3-question-8-discussion/) You have an Azure subscription that contains a storage account. You have an on-premises server named Server1 that runs Windows Server 2016. Server1 has 2 TB of data. You need to transfer the data to the storage account by using the Azure Import/Export service. In which order should you perform the actions? Move all actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select. Select and Place: (Image displays a list of actions including: Attach an external disk to Server1 and then run waimportexport.exe; Create an import job in your target storage account in Azure portal; Upload the drive journal files; Detach the external disks from Server1 and ship the disks to an Azure data center; Provide the return address and carrier account number for shipping the drives back to you; Ship the disk drives to the shipping address provided during job creation; From the Azure portal, update the import job; Update the delivery tracking number in the import job details and submit the import job; The drives are received and processed at the Azure data center; The drives are shipped using your carrier account to the return address provided in the import job.)
The correct order of actions to import data to Azure Storage using the Azure Import/Export service is: 1. **Prepare the drives:** Attach external disks to Server1 and use the `waimportexport.exe` tool to copy the data onto them. This involves determining the required number of drives and the destination blob location in Azure storage. Encryption of the drives (e.g., with BitLocker) is also a crucial step in this phase. 2. **Create the import job:** In the Azure portal, create an import job within the target storage account. This involves providing necessary information about the shipment. 3. **Ship the drives:** Detach the external drives from Server1 and ship them to the Azure data center using the shipping address provided during job creation. Include the return address and carrier account number. 4. **Update the job with tracking information:** Once the drives are shipped, update the import job in the Azure portal with the delivery tracking number. Submit the job. The Azure data center will receive, process, and return the drives. **Why other options are incorrect:** Any order deviating from this sequence will be incorrect because the Azure Import/Export service requires the sequential steps of data preparation, job creation, shipping, and tracking update. For example, you can't create an import job before the data is prepared on the external disks. **Note:** The provided suggested answer and discussion suggest more than one order may be correct, which is possible depending on how the steps are grouped or the level of granularity. The answer above represents a logical sequence of high-level tasks. Smaller tasks within each step can have some flexibility in their exact ordering.
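A rough sketch of the drive-preparation step with the WAImportExport tool; the journal, session, CSV, and key values here are placeholders, not values from the question:

```powershell
# /j names the journal file (uploaded when you create the import job in the portal);
# /InitialDriveSet lists the attached drives and /DataSet maps source data to the
# destination in the storage account. The key value is a placeholder.
$storageKey = "<storage-account-key>"
.\WAImportExport.exe PrepImport /j:ImportJob.jrn /id:session#1 /sk:$storageKey `
    /InitialDriveSet:driveset.csv /DataSet:dataset.csv /logdir:C:\logs
```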
280
[View Question](https://www.examtopics.com/discussions/databricks/view/38803-exam-az-104-topic-4-question-25-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster named Cluster1 that uses the IP addresses shown in the following table.

| IP Address   | Role          |
| ------------ | ------------- |
| 131.107.2.1  | Load Balancer |
| 10.0.10.11   | Node          |
| 172.17.7.1   | Node          |
| 192.168.10.2 | Node          |

You need to provide internet users with access to the applications that run in Cluster1. Which IP address should you include in the DNS record for Cluster1? A. 131.107.2.1 B. 10.0.10.11 C. 172.17.7.1 D. 192.168.10.2
A. 131.107.2.1 The correct answer is A because it's the public IP address of the load balancer. To make applications running in the AKS cluster accessible from the internet, you need a public IP address. The load balancer's public IP (131.107.2.1) is the entry point; internet users will access the applications through this IP address. The other options (10.0.10.11, 172.17.7.1, and 192.168.10.2) are private IP addresses within the Kubernetes cluster's network and are not directly routable from the internet. **Why other options are incorrect:** * **B, C, and D:** These are all private IP addresses (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 ranges, respectively). Private IP addresses are not globally routable and are used for internal communication within a network. They cannot be used as the public DNS entry point for internet access. There is a consensus among users in the discussion that option A is the correct answer.
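A sketch of creating the corresponding A record with the Azure CLI; the resource group, zone, and record names are assumptions — only the load balancer's IP comes from the question:

```powershell
# Add an A record pointing at the load balancer's public IP.
az network dns record-set a add-record `
    --resource-group rg1 `
    --zone-name contoso.com `
    --record-set-name www `
    --ipv4-address 131.107.2.1
```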
281
[View Question](https://www.examtopics.com/discussions/databricks/view/38805-exam-az-104-topic-3-question-14-discussion/) HOTSPOT - You have a sync group named Sync1 that has a cloud endpoint. The cloud endpoint includes a file named File1.txt. Your on-premises network contains servers that run Windows Server 2016. The servers are configured as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0016700001.png) You add Share1 as an endpoint for Sync1. One hour later, you add Share2 as an endpoint for Sync1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0016700002.jpg)
The correct answer is No, No, Yes. The question tests how Azure File Sync handles synchronization when multiple endpoints are added to a sync group. Based on the consistent agreement in the discussion and the suggested answer (shown in the image), the behavior is as follows: * **No:** File1.txt is *not* deleted from Share1 when Share2 is added. Azure File Sync avoids overwriting files; instead, it manages conflicts by appending numbers to filenames (e.g., File1(1).txt). * **No:** File1.txt is *not* overwritten on Share1 with the file from Share2. Again, it uses versioning to avoid conflicts, not overwriting. * **Yes:** File1.txt will be *synced* to Share2. This is the expected behavior; all endpoints in a sync group eventually synchronize their files. Multiple users, including some who tested the scenario in a lab, support the "No, No, Yes" answer, indicating a strong consensus. Other options are incorrect because they contradict the observed behavior of Azure File Sync as described in the discussion and illustrated in the suggested answer image. Azure File Sync is designed to prevent data loss from accidental overwrites, hence the versioning behavior.
282
[View Question](https://www.examtopics.com/discussions/databricks/view/38806-exam-az-104-topic-2-question-25-discussion/) HOTSPOT - You have an Azure subscription named Subscription1 that has a subscription ID of c276fc76-9cd4-44c9-99a7-4fd71546436e. You need to create a custom RBAC role named CR1 that meets the following requirements: ✑ Can be assigned only to the resource groups in Subscription1 ✑ Prevents the management of the access permissions for the resource groups ✑ Allows the viewing, creating, modifying, and deleting of resources within the resource groups What should you specify in the assignable scopes and the permission elements of the definition of CR1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The assignable scope should be `/subscriptions/c276fc76-9cd4-44c9-99a7-4fd71546436e/resourceGroups/*`. The permissions should include actions that allow viewing, creating, modifying, and deleting resources, but *exclude* actions related to managing RBAC permissions (i.e., those involving `Microsoft.Authorization/*`). The exact permissions will vary depending on the resource types involved, but should cover actions like `Microsoft.Compute/virtualMachines/*` (for VMs), `Microsoft.Storage/storageAccounts/*` (for storage accounts), and so on for all relevant resource types within the resource groups. Because this is a hotspot question, the definitive selections depend on the choices shown in the original image, which the suggested answer identifies. Why other options are incorrect: Options that include scopes broader than the specified subscription or that include permissions related to managing access control (e.g., assigning roles) would be incorrect as they violate the stated requirements. Similarly, options missing necessary permissions to create, modify, or delete resources within the resource groups would also be incorrect.
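A minimal sketch of what such a role definition could look like, assuming the common pattern of granting all actions and excluding `Microsoft.Authorization/*` through NotActions; the file name is arbitrary, and note that wildcard assignable scopes are restricted in newer Azure RBAC API versions:

```powershell
# Save a candidate definition for CR1, then create it (requires the Az.Resources module).
@'
{
  "Name": "CR1",
  "IsCustom": true,
  "Description": "Manage resources in resource groups without managing access",
  "Actions": [ "*" ],
  "NotActions": [ "Microsoft.Authorization/*" ],
  "AssignableScopes": [
    "/subscriptions/c276fc76-9cd4-44c9-99a7-4fd71546436e/resourceGroups/*"
  ]
}
'@ | Set-Content -Path cr1.json

New-AzRoleDefinition -InputFile cr1.json
```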
283
[View Question](https://www.examtopics.com/discussions/databricks/view/38840-exam-az-104-topic-4-question-35-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0027400001.png) VM1 connects to VNET1. You need to connect VM1 to VNET2. Solution: You create a new network interface, and then you add the network interface to VM1. Does this meet the goal? A. Yes B. No
B. No The suggested solution of creating a new network interface and adding it to VM1 does *not* successfully move VM1 to VNET2. According to the discussion, you cannot change the VNET a VM is associated with after creation. The only way to move a VM to a different VNET is to delete the VM and recreate it within the desired VNET (VNET2 in this case). The discussion overwhelmingly supports this as the correct solution. **Why other options are incorrect:** A. Yes - This is incorrect because, as explained above, you cannot simply add a network interface from a different VNET to an existing VM and expect it to function within that new VNET. The VM's core association remains with the original VNET. There is a consensus among the users in the discussion that option B is correct.
284
[View Question](https://www.examtopics.com/discussions/databricks/view/38859-exam-az-104-topic-2-question-19-discussion/) You have an Azure Directory (Azure AD) tenant named Adatum and an Azure Subscription named Subscription1. Adatum contains a group named Developers. Subscription1 contains a resource group named Dev. You need to provide the Developers group with the ability to create Azure logic apps in the Dev resource group. Solution: On Subscription1, you assign the Logic App Operator role to the Developers group. Does this meet the goal? A. Yes B. No
B. No. Assigning the Logic App Operator role does not allow the Developers group to *create* logic apps. The Logic App Operator role only permits reading, enabling, and disabling existing logic apps; it does not grant creation or management permissions. To achieve the goal, the Logic App Contributor role is required, as it allows the creation and management of logic apps. **Why other options are incorrect:** A. Yes - Incorrect. The Logic App Operator role does not provide the necessary permissions to create logic apps. The provided discussion clearly shows that the Logic App Operator role only allows for reading, enabling, and disabling, but not creating or updating logic apps. **Note:** The discussion unanimously agrees that option B is the correct answer.
285
[View Question](https://www.examtopics.com/discussions/databricks/view/38879-exam-az-104-topic-2-question-20-discussion/) You have an Azure Directory (Azure AD) tenant named Adatum and an Azure Subscription named Subscription1. Adatum contains a group named Developers. Subscription1 contains a resource group named Dev. You need to provide the Developers group with the ability to create Azure logic apps in the Dev resource group. Solution: On Dev, you assign the Contributor role to the Developers group. Does this meet the goal? A. Yes B. No
A. Yes The Contributor role provides the Developers group with the necessary permissions to create Azure Logic Apps within the Dev resource group. The Contributor role allows for the management of all resources within a resource group, including the creation of new resources like Logic Apps. Why other options are incorrect: B. No - This is incorrect because the Contributor role's permissions are sufficient to create Azure Logic Apps. Note: While the discussion mentions the "Logic App Contributor" role as an alternative, the question specifically asks if assigning the Contributor role meets the goal. The consensus is that it does.
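A sketch of the assignment with Az PowerShell, using the group and resource group names from the question; looking the group up by display name is illustrative:

```powershell
# Resolve the Developers group, then grant Contributor scoped to the Dev resource group.
$group = Get-AzADGroup -DisplayName "Developers"
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "Dev"
```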
286
[View Question](https://www.examtopics.com/discussions/databricks/view/38882-exam-az-104-topic-3-question-2-discussion/) You have Azure Storage accounts as shown in the following exhibit. (Image of Azure Storage accounts would be shown here, but is not reproducible in this text format). Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. **Box 1:** Which storage accounts support all of the latest features for blobs, files, queues, and tables? **Box 2:** Which storage accounts support all the same block blob features as GPv2, but are limited to supporting only block blobs?
**Box 1:** storageaccount1 and storageaccount2 only **Box 2:** All the storage accounts **Explanation:** General-purpose v2 (GPv2) accounts support all the latest features for blobs, files, queues, and tables; the exhibit (not reproduced here) identifies which accounts are GPv2, and only those accounts belong in Box 1. Blob storage accounts support all the same block blob features as GPv2 but are limited to block blobs only. Because every account type shown in the exhibit supports block blobs at the GPv2 feature level, "All the storage accounts" is the correct selection for Box 2. **Why other options are incorrect:** Without the exhibit and the full set of drop-down choices, the alternatives can't be enumerated; any selection that includes non-GPv2 accounts in Box 1, or that excludes accounts from Box 2, would contradict the feature support described above. **Note:** The discussion section mentions a "correct answer" but doesn't provide the reasoning behind it. This response is constructed from the question, the account-type descriptions, and logical inference.
287
[View Question](https://www.examtopics.com/discussions/databricks/view/38883-exam-az-104-topic-3-question-4-discussion/) HOTSPOT - You have an Azure Storage account named storage1. You have an Azure App Service app named App1 and an app named App2 that runs in an Azure container instance. Each app uses a managed identity. You need to ensure that App1 and App2 can read blobs from storage1. The solution must meet the following requirements: ✑ Minimize the number of secrets used. ✑ Ensure that App2 can only read from storage1 for the next 30 days. What should you configure in storage1 for each app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014500001.jpg) *(Image shows a table with two rows, one for App1 and one for App2, and two columns, one for "Method to grant access" and one for "Details")*
App1: **Access Control (IAM)**. App2: **Shared access signatures (SAS)**. **Explanation:** The question requires minimizing secrets and providing temporary access for App2. Using Azure's Identity and Access Management (IAM) with managed identities for App1 eliminates the need for secrets. App1's managed identity can be granted the necessary permissions (read access to storage1) directly within IAM, thus adhering to the requirement for minimal secrets. For App2, a Shared Access Signature (SAS) is the best option. SAS tokens provide time-limited access to Azure Storage resources. A SAS token with a 30-day expiration can be generated and provided to App2, meeting the requirement of temporary access without requiring ongoing secret management. Using SAS avoids the requirement to manage long-lived credentials that would violate the "minimize the number of secrets" constraint. **Why other options are incorrect:** The question explicitly states that both apps utilize managed identities. Using access keys would directly contradict the requirement to minimize secrets as it requires managing and securely storing access keys. **Note:** The discussion shows overwhelming agreement on this solution. There is no significant disagreement within the provided discussion.
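A sketch of generating a 30-day read-only SAS for App2 with Az PowerShell; the resource group and container names are assumptions, and a key-based context is used here purely to mint the token:

```powershell
# Build a storage context from an account key, then issue a read-only container SAS
# that expires in 30 days.
$key = (Get-AzStorageAccountKey -ResourceGroupName "rg1" -Name "storage1")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "storage1" -StorageAccountKey $key
New-AzStorageContainerSASToken -Context $ctx -Name "appdata" `
    -Permission r -ExpiryTime (Get-Date).AddDays(30)
```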
288
[View Question](https://www.examtopics.com/discussions/databricks/view/38884-exam-az-104-topic-3-question-10-discussion/) You have an Azure subscription named Subscription1. You create an Azure Storage account named contosostorage, and then you create a file share named data. Which UNC path should you include in a script that references files from the data file share? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once or not at all. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0015700001.jpg)
The correct UNC path is `\\contosostorage.file.core.windows.net\data`. This combines the storage account name (`contosostorage`), the standard Azure Files endpoint (`file.core.windows.net`), and the file share name (`data`) into a correctly formatted UNC path. The double backslashes (`\\`) at the beginning are crucial for specifying a UNC path. The provided example image and suggested answer within the question also confirm this structure. Why other options are incorrect: The question doesn't provide any other explicit options; the discussion shows some minor variations in formatting (e.g., using forward slashes instead of backslashes), but the core structure remains the same. The variations are likely due to different ways of representing the path; the UNC path requires backslashes. The disagreement in the discussion mainly revolves around slight formatting differences, not the fundamental components of the correct UNC path.
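A sketch of referencing the share from a Windows script; the drive letter is arbitrary and the account key is a placeholder:

```powershell
# Azure Files accepts the storage account name as the username and an account key
# as the password when mapping the share by its UNC path.
$storageKey = "<storage-account-key>"
net use Z: \\contosostorage.file.core.windows.net\data /user:Azure\contosostorage $storageKey
```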
289
[View Question](https://www.examtopics.com/discussions/databricks/view/38889-exam-az-104-topic-5-question-36-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a virtual machine named VM1. You have a computer named Computer1 that runs Windows 10. Computer1 is connected to the Internet. You add a network interface named vm1173 to VM1 as shown in the exhibit. (Click the Exhibit tab.) From Computer1, you attempt to connect to VM1 by using Remote Desktop, but the connection fails. You need to establish a Remote Desktop connection to VM1. What should you do first? A. Change the priority of the RDP rule B. Attach a network interface C. Delete the DenyAllInBound rule D. Start VM1
D. Start VM1 The Remote Desktop connection fails because the virtual machine (VM1) is likely not running. Before any network configurations or rule changes can take effect, the VM must be powered on. Starting the VM is the first and most fundamental step to troubleshooting this problem. **Why other options are incorrect:** * **A. Change the priority of the RDP rule:** This is a network configuration change. The VM must be running before any network rules can be applied or prioritized. * **B. Attach a network interface:** A network interface is already attached, as stated in the question. * **C. Delete the DenyAllInBound rule:** While a DenyAllInbound rule *could* be blocking the connection, the VM needs to be running first for this rule to even be relevant. **Note:** The discussion section contains many comments expressing opinions that the question is poorly worded or trivial. This reflects a disagreement about the question's quality, but does not change the correct answer based on the provided information.
290
[View Question](https://www.examtopics.com/discussions/databricks/view/38890-exam-az-104-topic-5-question-89-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Floating IP (direct server return) to Enabled B. Floating IP (direct server return) to Disabled C. a health probe D. Session persistence to Client IP and Protocol
The correct answer is **D. Session persistence to Client IP and Protocol**. Session persistence, also known as sticky sessions, ensures that a client's requests are always directed to the same server within the load balancer's pool. By configuring session persistence to Client IP and Protocol, the load balancer uses the client's IP address and protocol to identify the session and direct subsequent requests from that client to the same server. This maintains the session state and ensures consistency for the user. Options A, B, and C are incorrect: * **A. Floating IP (direct server return) to Enabled:** Floating IPs provide a single IP address for multiple servers, but they don't inherently maintain session persistence. The load balancer would still distribute requests across servers without regard to the client's previous interactions. * **B. Floating IP (direct server return) to Disabled:** This is irrelevant to maintaining session persistence; disabling floating IP doesn't solve the problem. * **C. a health probe:** Health probes are essential for ensuring that only healthy servers receive traffic. They don't, however, guarantee that a client's requests stay with a single server. Note: The discussion shows overwhelming agreement on the correct answer.
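A sketch of setting this on an existing load-balancing rule with the Azure CLI; the resource group and rule names are assumptions:

```powershell
# Switch the rule's distribution mode to source-IP-and-protocol affinity
# (sticky sessions).
az network lb rule update `
    --resource-group rg1 `
    --lb-name LB1 `
    --name HttpRule `
    --load-distribution SourceIPProtocol
```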
291
[View Question](https://www.examtopics.com/discussions/databricks/view/38892-exam-az-104-topic-6-question-9-discussion/) You have an Azure Active Directory (Azure AD) tenant named contoso.onmicrosoft.com that contains the users shown in the following table. (A table showing user information would be included here, similar to the description in the original content.) You enable password reset for contoso.onmicrosoft.com. You configure the authentication methods for password reset. For each of the following statements, select Yes if the statement is true. Otherwise, select No. (A hotspot question would follow here with statements about User1 and User2's ability to reset their passwords based on the configuration).
The provided text does not give the exact statements to be evaluated in the hotspot question. However, based on the discussion, the correct answers would likely be NO for both User1 and User2. * **User1:** The discussion indicates that User1 is likely not part of the necessary group (SSPR Group1) to utilize self-service password reset (SSPR), making the statement about their ability to reset their password false. * **User2:** The discussion highlights that security questions alone are insufficient for password reset; at least two authentication methods are required. Since User2 may only have security questions configured, the statement about their password reset capabilities would also be false. The suggested answer in the original post is disputed by the discussion, with multiple users indicating it is incorrect. The core of the disagreement centers around the roles needed to configure security questions (Global Administrator) and the insufficient authentication methods for User2. The User Administrator role's limitations in managing MFA are also discussed. Therefore, the definitive "correct" answer is conditional on the exact statements in the missing hotspot section. Note: Without the exact text of the statements in the hotspot question, this answer is based on inferences from the discussion and general Azure AD password reset functionality. The "correct" answer depends entirely on the content of the missing image.
292
[View Question](https://www.examtopics.com/discussions/databricks/view/38893-exam-az-104-topic-6-question-6-discussion/) HOTSPOT - You have an Azure virtual machine named VM1 and a Recovery Services vault named Vault1. You create a backup policy named Policy1 as shown in the exhibit. (Click the Exhibit tab.) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0053600001.jpg) *(Image shows a backup policy with the following settings: Daily backup, 5 daily backups retained; Weekly backup; Monthly backup.)* You configure the backup of VM1 to use Policy1 on Thursday, January 1 at 1:00 AM. You need to identify the number of available recovery points for VM1. How many recovery points are available on January 8 and January 15? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0053700001.jpg) *(Image is a Hotspot area for selecting the number of recovery points)*
* **January 8:** 6 recovery points * **January 15:** 8 recovery points **Explanation:** The backup policy retains 5 daily backups plus weekly and monthly backups, and the first backup runs on Thursday, January 1. * **January 8:** The five most recent daily backups (January 4–8) are retained, and the January 1 backup is still held under the policy's weekly/monthly retention, for a total of 6 recovery points. * **January 15:** The five most recent daily backups (January 11–15) are retained, together with two weekly recovery points and the monthly recovery point from January 1, for a total of 8 recovery points (5 daily + 2 weekly + 1 monthly). **Why other options are incorrect:** The discussion shows some confusion about how the rolling 5-day daily retention window interacts with the weekly and monthly retention, and about the exact start date for counting recovery points. The totals above apply a rolling 5-day daily retention alongside the weekly and monthly retention specified in the policy exhibit; differing interpretations appear in the discussion.
293
[View Question](https://www.examtopics.com/discussions/databricks/view/38900-exam-az-104-topic-4-question-40-discussion/) You have an Azure subscription named Subscription1 that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0028300005.png) You create virtual machines in Subscription1 as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0028400001.png) You plan to use Vault1 for the backup of as many virtual machines as possible. Which virtual machines can be backed up to Vault1? A. VM1 only B. VM3 and VMC only C. VM1, VM2, VM3, VMA, VMB, and VMC D. VM1, VM3, VMA, and VMC only E. VM1 and VM3 only
D. VM1, VM3, VMA, and VMC only **Explanation:** The key constraint for backing up VMs to a Recovery Services vault is that both the vault and the VMs must reside in the same Azure region. Vault1 is located in West Europe. Examining the VM locations, only VM1, VM3, VMA, and VMC are in the West Europe region. Therefore, only these VMs can be backed up to Vault1. **Why other options are incorrect:** * **A. VM1 only:** This is incorrect because it excludes other VMs in the West Europe region that are eligible for backup. * **B. VM3 and VMC only:** This is incorrect as it omits VM1 and VMA, which are also in West Europe. * **C. VM1, VM2, VM3, VMA, VMB, and VMC:** This is incorrect because VM2 and VMB are located in North Europe and thus cannot be backed up to Vault1 (West Europe). * **E. VM1 and VM3 only:** This is incorrect because it excludes VMA and VMC, which are also eligible for backup. **Note:** There is a slight disagreement in the discussion regarding the accuracy of option D. One commenter highlights that VMs must be in the same region as the vault, pointing out that some VMs are in a different region (North Europe). However, the prevailing consensus and the suggested answer support option D, assuming a strict interpretation of the same-region requirement. The discussion shows that there could be more nuanced interpretations of the constraints depending on additional, unstated configurations.
294
[View Question](https://www.examtopics.com/discussions/databricks/view/38904-exam-az-104-topic-4-question-41-discussion/) You have an Azure Kubernetes Service (AKS) cluster named AKS1. You need to configure cluster autoscaler for AKS1. Which two tools should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. the kubectl command B. the az aks command C. the Set-AzVm cmdlet D. the Azure portal E. the Set-AzAks cmdlet
The correct answer is B and D. * **B. the az aks command:** The Azure CLI manages AKS clusters directly, including the cluster autoscaler: `az aks create` and `az aks update` accept the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters. * **D. the Azure portal:** The Azure portal provides a graphical user interface (GUI) for managing AKS clusters. Within the portal, you can configure the cluster autoscaler directly within the node pool settings. Options A, C, and E are incorrect: * **A. the kubectl command:** Kubectl is the command-line tool for managing Kubernetes resources *within* a cluster, such as pods and deployments. It doesn't manage the cluster's node-level autoscaling configuration; that is handled by the cluster autoscaler itself. * **C. the Set-AzVm cmdlet:** This cmdlet is used for managing individual Azure Virtual Machines (VMs), not AKS clusters or their autoscaling configurations. AKS nodes are VMs, but managing the nodes through this cmdlet is not the recommended or typical approach for cluster autoscaling. * **E. the Set-AzAks cmdlet:** This cmdlet (or more accurately, `Set-AzAksCluster`) is used for creating and updating AKS clusters, but it doesn't manage the cluster autoscaler settings. Note: There is some discussion in the provided text suggesting that AKS is no longer part of the AZ-104 exam. This should be verified with the official exam guide. The answer provided above is based solely on the content of the question and the provided discussion.
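A sketch of enabling the autoscaler on an existing cluster; the resource group name and node counts are illustrative:

```powershell
# Enable the cluster autoscaler on AKS1's default node pool with a 1-5 node range.
az aks update `
    --resource-group rg1 `
    --name AKS1 `
    --enable-cluster-autoscaler `
    --min-count 1 `
    --max-count 5
```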
295
[View Question](https://www.examtopics.com/discussions/databricks/view/38912-exam-az-104-topic-3-question-20-discussion/) You have a Recovery Service vault that you use to test backups. The test backups contain two protected virtual machines. You need to delete the Recovery Services vault. What should you do first? A. From the Recovery Service vault, delete the backup data. B. Modify the disaster recovery properties of each virtual machine. C. Modify the locks of each virtual machine. D. From the Recovery Service vault, stop the backup of each backup item.
D. From the Recovery Service vault, stop the backup of each backup item. Before deleting a Recovery Services vault, you must first stop any ongoing backups. This prevents errors during the deletion process and ensures a clean removal of the vault. Deleting the backup data (A) might be a subsequent step, but stopping the backups is the crucial first action. Modifying disaster recovery properties (B) or locks (C) of the virtual machines is irrelevant to deleting the vault itself. **Why other options are incorrect:** * **A. From the Recovery Service vault, delete the backup data:** While eventually you'll need to deal with the backup data, this is not the first step. Stopping active backups is the prerequisite. The discussion highlights some ambiguity around this point, with some suggesting deleting data is sufficient in *some* scenarios, while others highlight the necessity of stopping the backup process first. * **B. Modify the disaster recovery properties of each virtual machine:** This action is unrelated to deleting the Recovery Services vault. * **C. Modify the locks of each virtual machine:** This is also unrelated to deleting the vault. **Note:** The discussion shows some disagreement on the precise steps required, with some suggesting that deleting the backup data might suffice in certain circumstances. However, the consensus and the most reliable approach is to stop the ongoing backups first.
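A sketch of stopping protection for every Azure VM backup item in the vault, using Az PowerShell; `-RemoveRecoveryPoints` also deletes the backup data, which a vault deletion ultimately requires. The vault name comes from a generic "Vault1" assumption:

```powershell
# Stop backups (and delete recovery points) for all Azure VM items in the vault.
$vault = Get-AzRecoveryServicesVault -Name "Vault1"
$containers = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -VaultId $vault.ID
foreach ($container in $containers) {
    $item = Get-AzRecoveryServicesBackupItem -Container $container `
        -WorkloadType AzureVM -VaultId $vault.ID
    Disable-AzRecoveryServicesBackupProtection -Item $item -VaultId $vault.ID `
        -RemoveRecoveryPoints -Force
}
```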
296
[View Question](https://www.examtopics.com/discussions/databricks/view/38927-exam-az-104-topic-5-question-63-discussion/) You have an Azure subscription that contains 10 virtual networks. The virtual networks are hosted in separate resource groups. Another administrator plans to create several network security groups (NSGs) in the subscription. You need to ensure that when an NSG is created, it automatically blocks TCP port 8080 between the virtual networks. Solution: You configure a custom policy definition, and then you assign the policy to the subscription. Does this meet the goal? A. Yes B. No
A. Yes A custom Azure policy definition can be created to enforce the rule that all new NSGs must include a rule blocking TCP port 8080 between virtual networks. The policy doesn't automatically *create* the NSGs, but it *enforces* a rule on their creation, ensuring that the required security rule is present. This meets the goal of automatically blocking TCP port 8080 between virtual networks upon NSG creation. **Why other options are incorrect:** B. No: This is incorrect because a custom Azure policy *can* be used to achieve the stated goal. While the policy itself doesn't create the NSGs, it mandates a specific configuration (blocking port 8080) for any newly created NSG, effectively achieving the desired outcome. **Note:** There is some disagreement in the discussion regarding whether the solution is truly "automatic." Some argue that while the policy enforces the rule, it doesn't automatically create the NSGs. However, the question focuses on ensuring the rule is in place *upon creation*, and the policy achieves this. Therefore, option A is still considered the correct answer.
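A workflow sketch only, with hypothetical names: the rule file (contents omitted) would use the `Microsoft.Network/networkSecurityGroups` aliases to require a rule denying TCP 8080 between the virtual networks:

```powershell
# Create the custom definition from a rules file, then assign it at subscription scope.
az policy definition create `
    --name "require-deny-tcp-8080" `
    --rules "nsg-rules.json" `
    --mode All
az policy assignment create `
    --policy "require-deny-tcp-8080" `
    --scope "/subscriptions/<subscription-id>"
```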
297
[View Question](https://www.examtopics.com/discussions/databricks/view/38930-exam-az-104-topic-4-question-44-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a resource group named RG1. RG1 contains resources that were deployed by using templates. You need to view the date and time when the resources were created in RG1. Solution: From the Subscriptions blade, you select the subscription, and then click Resource providers. Does this meet the goal? A. Yes B. No
B. No The suggested solution of selecting the subscription and then clicking "Resource providers" does not provide the date and time resources were created within RG1. To view the creation date and time of resources in RG1, you should navigate to the RG1 blade and then click "Deployments". This shows a history of deployments for that resource group, including timestamps. The overwhelming consensus in the provided discussion confirms this. **Why other options are incorrect:** A. Yes is incorrect because the proposed method (checking Resource Providers) does not fulfill the requirement of viewing the creation date and time of resources within RG1. The "Resource Providers" section shows information about available resource providers, not the deployment history of resources within a specific resource group.
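The working alternative, sketched with Az PowerShell — a resource group's deployment history includes a timestamp for each deployment:

```powershell
# List RG1's deployment history with creation timestamps.
Get-AzResourceGroupDeployment -ResourceGroupName "RG1" |
    Select-Object DeploymentName, Timestamp, ProvisioningState
```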
298
[View Question](https://www.examtopics.com/discussions/databricks/view/38955-exam-az-104-topic-5-question-1-discussion/) HOTSPOT - You have an Azure subscription named Sub1. You plan to deploy a multi-tiered application that will contain the tiers shown in the following table.

| Tier           | Description                                | Number of VMs |
| -------------- | ------------------------------------------ | ------------- |
| Web Servers    | Public-facing web servers                   | 3             |
| Business Logic | Contains the application logic              | 2             |
| Database       | Contains the database for the application   | 1             |

You need to recommend a networking solution to meet the following requirements: ✑ Ensure that communication between the web servers and the business logic tier spreads equally across the virtual machines. ✑ Protect the web servers from SQL injection attacks. Which Azure resource should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
* **Box 1:** An internal load balancer. An Azure Internal Load Balancer (ILB) distributes incoming traffic across multiple virtual machines. In this scenario, it would evenly distribute requests from the web servers to the business logic tier VMs. * **Box 2:** An application gateway with the WAF tier. Azure Application Gateway with Web Application Firewall (WAF) protects web applications from common attacks, including SQL injection. **Why other options are incorrect:** The question specifically asks for the best Azure resource for each requirement. No other options are provided in the original content to be considered incorrect. The provided answer directly addresses the two stated requirements using the most appropriate Azure services. **Note:** The provided solution is based on the information and suggested answer within the given context. There might be other valid approaches depending on specific application needs and architectural considerations not detailed in the question.
299
[View Question](https://www.examtopics.com/discussions/databricks/view/38963-exam-az-104-topic-3-question-9-discussion/) You have an Azure subscription that includes the following Azure file shares: (Image shows a list of Azure file shares) You have the following on-premises servers: (Image shows a list of on-premises servers). You create a Storage Sync Service named Sync1 and an Azure File Sync group named Group1. Group1 uses share1 as a cloud endpoint. You register Server1 and Server2 in Sync1. You add D:\Folder1 on Server1 as a server endpoint of Group1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image shows a table with three statements to evaluate as Yes or No)
The correct answer is NNY (No, No, Yes). * **Statement 1 (No):** A sync group can only have one cloud endpoint (Azure file share). Adding more than one would violate the fundamental design of Azure File Sync groups. * **Statement 2 (No):** Azure File Sync does not support multiple server endpoints from the same server within the same sync group. This is a limitation to avoid conflicts and ensure data integrity. Using D:\Folder1 on Server1 as a server endpoint prevents adding another endpoint from Server1 to the same group. * **Statement 3 (Yes):** Multiple server endpoints *can* exist on the same volume, provided they do not overlap in namespaces (e.g., F:\sync1 and F:\sync2) and each endpoint syncs to a unique sync group. This allows for organization and separation of synced data on the server. The discussion shows a consensus on this answer. There's an initial response clearly outlining the reasoning, followed by a supporting comment.
300
[View Question](https://www.examtopics.com/discussions/databricks/view/38993-exam-az-104-topic-5-question-34-discussion/) You have a computer named Computer1 that has a point-to-site VPN connection to an Azure virtual network named VNet1. The point-to-site connection uses a self-signed certificate. From Azure, you download and install the VPN client configuration package on a computer named Computer2. You need to ensure that you can establish a point-to-site VPN connection to VNet1 from Computer2. Solution: You join Computer2 to Azure Active Directory (Azure AD). Does this meet the goal? A. Yes B. No
B. No. Joining Computer2 to Azure Active Directory (Azure AD) does not establish a point-to-site VPN connection. A point-to-site VPN connection that uses certificate authentication requires the client computer (Computer2) to have the correct client certificate installed. The correct approach is to export the client certificate from Computer1 and install it on Computer2. Why other options are incorrect: A. Is incorrect because joining Azure AD does not provide the necessary client certificate for the VPN connection. The VPN connection relies on certificate authentication, not Azure AD authentication, in this scenario. Note: The discussion clearly indicates a consensus that joining Azure AD is not the correct solution.
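A sketch of the certificate export/import that actually solves this; the thumbprint, file path, and password are placeholders:

```powershell
# On Computer1: export the point-to-site client certificate with its private key.
$pfxPassword = ConvertTo-SecureString -String "<export-password>" -AsPlainText -Force
Export-PfxCertificate -Cert "Cert:\CurrentUser\My\<thumbprint>" `
    -FilePath "C:\p2s-client.pfx" -Password $pfxPassword

# On Computer2: import the certificate, then connect with the VPN client package.
Import-PfxCertificate -FilePath "C:\p2s-client.pfx" `
    -CertStoreLocation "Cert:\CurrentUser\My" -Password $pfxPassword
```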
301
[View Question](https://www.examtopics.com/discussions/databricks/view/38995-exam-az-104-topic-5-question-38-discussion/) You have an Azure subscription that contains the Azure virtual machines shown in the following table. (Image depicting a table of VMs: VM1, VM2, Subnet1, Subnet2, IP addresses are shown). You add inbound security rules to a network security group (NSG) named NSG1 as shown in the following table. (Image depicting a table of NSG rules: Source, Destination, Protocol, Port, Action). You run Azure Network Watcher as shown in the following exhibits. (Images depicting Network Watcher results showing connectivity tests between VM1 and VM2 - one showing connection failure, the other showing the NSG rules applied to VM2). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image showing two boxes labeled "Box 1" and "Box 2", with Yes/No checkboxes next to each). **Box 1:** NSG1 is applied to VM1. **Box 2:** NSG1 is applied to VM2.
**Box 1: No** The Network Watcher results clearly show that the NSG is blocking traffic to VM2. Since the problem is with incoming traffic to VM2, and NSG1 is configured with rules restricting inbound traffic to Subnet2 where VM2 resides, NSG1 is applied to VM2, not VM1. The fact that traffic from VM1 to VM2 fails due to NSG rules confirms this. **Box 2: Yes** The Network Watcher output demonstrates that NSG1 is blocking traffic destined for VM2 on a specific TCP port. This directly indicates that NSG1 is applied to VM2's subnet (Subnet2), hence affecting its inbound traffic. The failure of the connectivity test due to NSG rules confirms NSG1's application to VM2. **Why other options are incorrect:** The provided images and analysis show that NSG1's rules directly affect VM2's inbound connectivity. There is no evidence to suggest NSG1 is applied to VM1. The discussion highlights that Network Watcher's output explicitly indicates the blocking of traffic due to NSG rules on VM2's subnet. Therefore, selecting "No" for Box 1 and "Yes" for Box 2 is the only correct interpretation of the given data. There is no indication of disagreement in the provided discussion; the suggested answer aligns with the reasoning provided.
302
[View Question](https://www.examtopics.com/discussions/databricks/view/39002-exam-az-104-topic-5-question-88-discussion/) You have an Azure subscription that contains a virtual network named VNet1. VNet1 uses an IP address space of 10.0.0.0/16 and contains the VPN Gateway and subnets in the following table: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0048500001.png) Subnet1 contains a virtual appliance named VM1 that operates as a router. You create a routing table named RT1. You need to route all inbound traffic from the VPN gateway to VNet1 through VM1. How should you configure RT1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0048600001.jpg)
* **Box 1: 10.0.0.0/16** (Address prefix): This represents the entire address space of VNet1, ensuring all inbound traffic destined for VNet1 is captured by this route. * **Box 2: Virtual appliance** (Next hop type): This specifies that the traffic should be routed through the virtual appliance VM1, which acts as a router. The IP address of VM1 would be specified during the next hop configuration. * **Box 3: GatewaySubnet** (Assigned to): The routing table (RT1) needs to be associated with the GatewaySubnet. This ensures that the VPN gateway uses this routing table to forward all inbound traffic according to the rules defined within RT1. The provided solution is consistent across multiple user responses in the discussion. There is no significant disagreement on the correct answer in the provided text.
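A sketch of the same configuration with the Azure CLI; the resource group name and VM1's private IP (10.0.1.4 here) are assumptions:

```powershell
# Create RT1, add a route sending VNet1-bound traffic through VM1, and attach the
# table to GatewaySubnet so the VPN gateway uses it.
az network route-table create --resource-group rg1 --name RT1
az network route-table route create `
    --resource-group rg1 `
    --route-table-name RT1 `
    --name ToVNet1ViaVM1 `
    --address-prefix 10.0.0.0/16 `
    --next-hop-type VirtualAppliance `
    --next-hop-ip-address 10.0.1.4
az network vnet subnet update `
    --resource-group rg1 `
    --vnet-name VNet1 `
    --name GatewaySubnet `
    --route-table RT1
```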
303
[View Question](https://www.examtopics.com/discussions/databricks/view/39007-exam-az-104-topic-5-question-99-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. the Publish-AzVMDscConfiguration cmdlet B. Azure Application Insights C. Azure Custom Script Extension D. the New-AzConfigurationAssignement cmdlet
C. Azure Custom Script Extension Azure Custom Script Extension allows you to run custom scripts during the VM deployment process. This is ideal for installing software like NGINX on all VMs in a scale set. The script would simply download and install NGINX. Why other options are incorrect: * **A. the Publish-AzVMDscConfiguration cmdlet:** While DSC (Desired State Configuration) can manage configurations, it's a more complex and potentially slower approach than a custom script for a simple installation task like this. This cmdlet is part of a broader DSC management solution. * **B. Azure Application Insights:** This is a monitoring service, not a deployment tool; it wouldn't install NGINX. * **D. the New-AzConfigurationAssignement cmdlet:** Similar to A, this also relates to DSC configuration management and is not directly suited for the simple installation of NGINX. Note: The discussion indicates that a Desired State Configuration (DSC) extension *could* also be a correct answer in similar exam questions. This highlights that the phrasing or context within the actual exam question may influence the most appropriate response. The provided answer prioritizes the most straightforward and efficient solution for this particular question context.
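A sketch of applying the extension to a scale set with the Azure CLI; the resource names and script URL are hypothetical, and the referenced script would perform the actual NGINX installation:

```powershell
# Attach the Windows Custom Script Extension to the scale set; it downloads and runs
# the install script on every instance (including new ones created by scaling).
$settings = '{"fileUris": ["https://example.com/install-nginx.ps1"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-nginx.ps1"}'
az vmss extension set `
    --resource-group rg1 `
    --vmss-name vmss1 `
    --name CustomScriptExtension `
    --publisher Microsoft.Compute `
    --settings $settings
```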
304
[View Question](https://www.examtopics.com/discussions/databricks/view/39015-exam-az-104-topic-5-question-6-discussion/) You have an Azure virtual network named VNet1 that connects to your on-premises network by using a site-to-site VPN. VNet1 contains one subnet named Subnet1. Subnet1 is associated to a network security group (NSG) named NSG1. Subnet1 contains a basic internal load balancer named ILB1. ILB1 has three Azure virtual machines in the backend pool. You need to collect data about the IP addresses that connect to ILB1. You must be able to run interactive queries from the Azure portal against the collected data. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035600001.png)
The correct answer is: * **Box 1:** An Azure Log Analytics workspace * **Box 2:** NSG1 **Explanation:** To collect and query IP address data connecting to the internal load balancer (ILB1), you need two components: 1. **Azure Log Analytics workspace:** This workspace provides the data repository and query capabilities needed to analyze the collected data. The data from NSG flow logs will be stored and accessed here. 2. **NSG1 (Network Security Group):** Enabling diagnostic settings on NSG1 will capture flow logs. These logs contain information about network traffic, including the source IP addresses connecting to resources within the subnet (and therefore, the ILB). This data is then sent to the Log Analytics workspace for analysis. **Why other options are incorrect:** The question specifically requires the ability to run *interactive queries* from the Azure portal. While other monitoring options might exist, they wouldn't necessarily provide the interactive query functionality offered by a Log Analytics workspace. There is no disagreement reflected in the provided discussion; the suggested answer is clearly supported.
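A sketch of the underlying configuration, assuming the older `az network watcher flow-log configure` syntax; the storage account and workspace values are placeholders:

```powershell
# Enable NSG flow logs on NSG1 and send them to Log Analytics via traffic analytics
# so they can be queried interactively from the portal.
az network watcher flow-log configure `
    --resource-group rg1 `
    --nsg NSG1 `
    --enabled true `
    --storage-account flowlogsa `
    --traffic-analytics true `
    --workspace "<log-analytics-workspace-id>"
```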
305
[View Question](https://www.examtopics.com/discussions/databricks/view/39017-exam-az-104-topic-5-question-10-discussion/) You have an Azure subscription. The subscription contains virtual machines that run Windows Server 2016 and are configured as shown in the following table. [Image of table showing VM names (VM1, VM2, VM3), VNET (VNET2 for all three), and Private IP addresses] You create a public Azure DNS zone named adatum.com and a private Azure DNS zone named contoso.com. You create a virtual network link for contoso.com as shown in the following exhibit. [Image showing virtual network link configuration for contoso.com linked to VNET2] For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. [Image showing three statements:] 1. VM1 will auto-register its host record to contoso.com. 2. VM2 will auto-register its host record to contoso.com. 3. VM3 will auto-register its host record to adatum.com.
1. **Yes:** VM1 will auto-register its host record to contoso.com. Auto-registration is enabled for the private DNS zone `contoso.com`, and VM1 resides in VNET2, which is linked to this private zone. 2. **Yes:** VM2 will auto-register its host record to contoso.com. Same reasoning as above; VM2 is in VNET2, linked to the private zone with auto-registration enabled. 3. **No:** VM3 will not auto-register its host record to adatum.com. `adatum.com` is a public DNS zone. Private IP addresses (like those assigned to the VMs) cannot be registered in public DNS zones. Furthermore, even if it were possible, the auto-registration feature only applies to private DNS zones. **Why other options are incorrect:** The discussion shows some disagreement on whether VNET1 or VNET2 is linked to the private DNS zone. However, the provided images clearly indicate VNET2 is linked. Therefore, the answer is based on the images provided, acknowledging that some differing opinions exist in the forum discussion.
306
[View Question](https://www.examtopics.com/discussions/databricks/view/39019-exam-az-104-topic-5-question-12-discussion/) DRAG DROP - You have an Azure subscription that contains two virtual networks named VNet1 and VNet2. Virtual machines connect to the virtual networks. The virtual networks have the address spaces and the subnets configured as shown in the following table. | Virtual Network | Address space | Subnet Name | Subnet Address | |---|---|---|---| | VNet1 | 10.11.0.0/16 | Subnet1 | 10.11.0.0/24 | | VNet2 | 10.22.0.0/16 | Subnet2 | 10.22.0.0/24 | You need to add the address space of 10.33.0.0/16 to VNet1. The solution must ensure that the hosts on VNet1 and VNet2 can communicate. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: **(Image depicting three actions: Remove peering between VNet1 and VNet2; Add the 10.33.0.0/16 address space to VNet1; Recreate peering between VNet1 and VNet2)**
The correct sequence of actions is: 1. **Remove peering between VNet1 and VNet2:** Adding or removing address spaces from a virtual network is not permitted while peering is active. Peering must be removed first. 2. **Add the 10.33.0.0/16 address space to VNet1:** This adds the required address space to VNet1. 3. **Recreate peering between VNet1 and VNet2:** After adding the address space, re-establish the peering connection between the two virtual networks to restore communication. **Why other options are incorrect:** There are no other options presented in the question's context; the provided answer is the only sequence that addresses the problem correctly. The order is crucial because the address space cannot be modified while peering exists. **Note:** There is a conflicting opinion in the discussion. While the provided answer and most comments suggest removing and recreating peering to add an address space, one comment mentions that since September 2022, this may no longer be necessary. This answer reflects the most commonly accepted and historically accurate method based on the majority of the discussion's content. However, users should refer to the current official Microsoft documentation for the most up-to-date information on VNet peering and address space management.
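The same three-step sequence sketched with the Azure CLI; the resource group and peering names are assumptions:

```powershell
# 1. Remove the peering in both directions.
az network vnet peering delete --resource-group rg1 --vnet-name VNet1 --name VNet1ToVNet2
az network vnet peering delete --resource-group rg1 --vnet-name VNet2 --name VNet2ToVNet1

# 2. Add the new address space to VNet1 (the full prefix list is re-specified).
az network vnet update --resource-group rg1 --name VNet1 `
    --address-prefixes 10.11.0.0/16 10.33.0.0/16

# 3. Recreate the peering in both directions.
az network vnet peering create --resource-group rg1 --vnet-name VNet1 `
    --name VNet1ToVNet2 --remote-vnet VNet2 --allow-vnet-access
az network vnet peering create --resource-group rg1 --vnet-name VNet2 `
    --name VNet2ToVNet1 --remote-vnet VNet1 --allow-vnet-access
```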
307
[View Question](https://www.examtopics.com/discussions/databricks/view/39032-exam-az-104-topic-5-question-18-discussion/) You have an Azure subscription that contains the following resources: ✑ A virtual network that has a subnet named Subnet1 ✑ Two network security groups (NSGs) named NSG-VM1 and NSG-Subnet1 ✑ A virtual machine named VM1 that has the required Windows Server configurations to allow Remote Desktop connections NSG-Subnet1 has the default inbound security rules only. NSG-VM1 has the default inbound security rules and the following custom inbound security rule: ✑ Priority: 100 ✑ Source: Any ✑ Source port range: * ✑ Destination: * ✑ Destination port range: 3389 Protocol: UDP - ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0037200009.png) ✑ Action: Allow VM1 has a public IP address and is connected to Subnet1. NSG-VM1 is associated to the network interface of VM1. NSG-Subnet1 is associated to Subnet1. You need to be able to establish Remote Desktop connections from the internet to VM1. Solution: You add an inbound security rule to NSG-Subnet1 that allows connections from the internet source to the VirtualNetwork destination for port range 3389 and uses the UDP protocol. Does this meet the goal? A. Yes B. No
B. No The solution does not meet the goal because Remote Desktop Protocol (RDP) uses TCP port 3389 by default, not UDP port 3389. The proposed solution adds a UDP rule, which will not allow RDP connections. While some modifications could be made to allow UDP, this is not the default and therefore the answer is no. The discussion highlights some disagreement. While the primary consensus is that the answer is 'B' because RDP uses TCP, one user points out that it's *possible* to configure RDP to use UDP, though this is not standard practice. However, since the question asks if the *provided solution* meets the goal without additional configuration, and the solution uses UDP, it fails.
308
[View Question](https://www.examtopics.com/discussions/databricks/view/39033-exam-az-104-topic-5-question-19-discussion/) You have an Azure subscription that contains the following resources: ✑ A virtual network that has a subnet named Subnet1 ✑ Two network security groups (NSGs) named NSG-VM1 and NSG-Subnet1 ✑ A virtual machine named VM1 that has the required Windows Server configurations to allow Remote Desktop connections NSG-Subnet1 has the default inbound security rules only. NSG-VM1 has the default inbound security rules and the following custom inbound security rule: ✑ Priority: 100 ✑ Source: Any ✑ Source port range: * ✑ Destination: * ✑ Destination port range: 3389 ✑ Protocol: UDP ✑ Action: Allow VM1 has a public IP address and is connected to Subnet1. NSG-VM1 is associated to the network interface of VM1. NSG-Subnet1 is associated to Subnet1. You need to be able to establish Remote Desktop connections from the internet to VM1. Solution: You add an inbound security rule to NSG-Subnet1 and NSG-VM1 that allows connections from the internet source to the VirtualNetwork destination for port range 3389 and uses the TCP protocol. Does this meet the goal? A. Yes B. No
A. Yes The suggested solution correctly addresses the problem. Remote Desktop uses TCP port 3389. The existing rule on NSG-VM1 allows only UDP traffic on port 3389, so RDP traffic currently has no matching allow rule. Adding a TCP rule for port 3389 to both NSGs (NSG-VM1 and NSG-Subnet1) allows the RDP connection: because the protocols differ, the existing UDP rule simply never matches TCP traffic, and the newly added TCP rules do. **Why other options are incorrect:** B. No - This is incorrect because the solution adds the necessary rule to allow TCP traffic on port 3389, which is required for RDP. The existing UDP rule doesn't conflict because it's a different protocol. **Note:** The discussion shows some debate on the exact behavior of NSG rule evaluation. However, the consensus points towards the solution's correctness based on how Azure NSGs handle different protocols; the UDP and TCP rules are treated as separate, independent rules.
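A sketch of the rule added to NSG-Subnet1 (repeated for NSG-VM1); the resource group name and priority are illustrative:

```powershell
# Allow inbound RDP (TCP 3389) from the internet to the virtual network.
az network nsg rule create `
    --resource-group rg1 `
    --nsg-name NSG-Subnet1 `
    --name AllowRdpTcp `
    --priority 110 `
    --direction Inbound `
    --access Allow `
    --protocol Tcp `
    --source-address-prefixes Internet `
    --destination-address-prefixes VirtualNetwork `
    --destination-port-ranges 3389
```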
309
[View Question](https://www.examtopics.com/discussions/databricks/view/39037-exam-az-104-topic-5-question-25-discussion/) You have an Azure subscription named Subscription1 that contains the virtual networks in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038100001.png) Subscription1 contains the virtual machines in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038200001.png) In Subscription1, you create a load balancer that has the following configurations: ✑ Name: LB1 ✑ SKU: Basic ✑ Type: Internal ✑ Subnet: Subnet12 ✑ Virtual network: VNET1 For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038200007.jpg)
The correct answers are: * **Box 1: Yes** - VM1 and VM2 are in the same Availability Set (AS1). A Basic Load Balancer can load balance VMs within a single Availability Set. * **Box 2: No** - VM3 and VM4 are not in the same Availability Set or Virtual Machine Scale Set. A Basic Load Balancer requires VMs to be in a single Availability Set or Virtual Machine Scale Set to function correctly. * **Box 3: No** - Same reasoning as Box 2; VM3 and VM4 are not in an Availability Set or Virtual Machine Scale Set. **Explanation:** The Azure Basic Load Balancer SKU has limitations. It only supports load balancing virtual machines within a *single* Availability Set or Virtual Machine Scale Set. It does *not* load balance across multiple Availability Sets or individual, unrelated VMs. The subnet association (Subnet12) is used for the load balancer's IP address, not for defining which VMs it balances. **Why other options are incorrect:** The discussion shows consensus that a Basic Load Balancer cannot handle individual, unrelated VMs or VMs across multiple Availability Sets. Therefore, selecting "Yes" for boxes 2 and 3 would be incorrect. **Note:** The provided images from the original question are missing, making the exact VM names and details unclear. However, the discussion clarifies that the VMs' placement within Availability Sets or Scale Sets determines the correctness of the answer.
310
[View Question](https://www.examtopics.com/discussions/databricks/view/39038-exam-az-104-topic-5-question-26-discussion/) You have an Azure virtual machine that runs Windows Server 2019 and has the following configurations: ✑ Name: VM1 ✑ Location: West US ✑ Connected to: VNET1 ✑ Private IP address: 10.1.0.4 ✑ Public IP addresses: 52.186.85.63 ✑ DNS suffix in Windows Server: Adatum.com You create the Azure DNS zones shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038400001.png) *(Image shows a table with "Name" and "Type" columns. The exact contents are not provided in the text, but it's clear the table lists various DNS zones with types "Private" and "Public".)* You need to identify which DNS zones you can link to VNET1 and the DNS zones to which VM1 can automatically register. Which zones should you identify?
Only **Private** DNS zones can be linked to VNET1, and only **Private** DNS zones can be used for automatic registration of VM1. **Explanation:** Azure Virtual Networks (VNETs) can only be linked to private DNS zones. Automatic registration of a VM within a VNET is also only possible with private DNS zones that have the auto-registration option enabled. Public DNS zones cannot be linked to VNETs and therefore VM1 cannot automatically register with them. **Why other options are incorrect:** The question implies a choice between private and public DNS zones. Any option including "Public" zones for either linking to the VNET or automatic registration would be incorrect based on Azure DNS functionality. **Note:** The provided discussion shows unanimous agreement on this answer.
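As a sketch of how this looks in practice, the Azure CLI commands below create a private zone and link it to VNET1 with auto-registration enabled; the resource group (RG1) and link name are assumptions, and VNET1 is assumed to be in the same resource group:

```bash
# Create the private DNS zone.
az network private-dns zone create \
  --resource-group RG1 \
  --name adatum.com

# Link the zone to VNET1; --registration-enabled true lets VMs in the
# network register their DNS records automatically.
az network private-dns link vnet create \
  --resource-group RG1 \
  --zone-name adatum.com \
  --name VNET1-link \
  --virtual-network VNET1 \
  --registration-enabled true
```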
311
[View Question](https://www.examtopics.com/discussions/databricks/view/39040-exam-az-104-topic-5-question-27-discussion/) DRAG DROP - You have an on-premises network that you plan to connect to Azure by using a site-to-site VPN. In Azure, you have an Azure virtual network named VNet1 that uses an address space of 10.0.0.0/16. VNet1 contains a subnet named Subnet1 that uses an address space of 10.0.0.0/24. You need to create a site-to-site VPN to Azure. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choice is correct. You will receive credit for any of the correct orders you select. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038600001.png) *(Image text not visible)*
The correct sequence of actions to create a site-to-site VPN, based on the provided information and suggested answer, involves these four steps (order may vary slightly, as noted in the question): 1. **Create a gateway subnet in VNet1:** This is the foundational step. A dedicated subnet is required to host the VPN gateway. Without this, the next steps are impossible. 2. **Create a VPN gateway:** This creates the VPN gateway resource in Azure. This gateway is then associated with the gateway subnet created in the previous step. 3. **Configure the on-premises VPN device:** This involves configuring the VPN device on your on-premises network with the necessary settings to connect to the Azure VPN gateway. Specific configurations (like pre-shared keys) depend on the VPN device type. 4. **Establish the VPN connection:** This final step connects the on-premises network to the Azure VPN gateway, making the VPN connection active. **Why other options are incorrect:** The provided text only shows the correct answer and doesn't list alternative options. The problem implies that multiple correct sequences of steps might exist, focusing on the dependency of creating Azure components before configuring the on-premises side. Any sequence that doesn't start by creating the Azure gateway subnet and gateway would be incorrect because those are prerequisites. **Note:** The discussion mentions that the order of steps may vary, emphasizing the importance of understanding the dependencies. The provided answer reflects the core steps and their dependencies for setting up a site-to-site VPN in this scenario. It's crucial to remember that the specific configuration details will vary based on the on-premises VPN device and chosen VPN type.
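A hedged Azure CLI sketch of the Azure-side steps follows; the resource group (RG1), gateway names, SKU, pre-shared key, and the on-premises address space and public IP are all illustrative placeholders:

```bash
# 1. Create the gateway subnet in VNet1 (prefix chosen from unused space).
az network vnet subnet create \
  --resource-group RG1 --vnet-name VNet1 \
  --name GatewaySubnet --address-prefixes 10.0.255.0/27

# 2. Create a public IP and the VPN gateway (provisioning can take a while).
az network public-ip create --resource-group RG1 --name VNet1GW-IP
az network vnet-gateway create \
  --resource-group RG1 --name VNet1GW \
  --vnet VNet1 --public-ip-address VNet1GW-IP \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# 3. Represent the on-premises VPN device as a local network gateway.
az network local-gateway create \
  --resource-group RG1 --name OnPremGW \
  --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 192.168.0.0/16

# 4. Create the site-to-site connection; the shared key must match the
#    on-premises device configuration.
az network vpn-connection create \
  --resource-group RG1 --name S2SConnection \
  --vnet-gateway1 VNet1GW --local-gateway2 OnPremGW \
  --shared-key 'ReplaceWithSharedKey'
```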
312
[View Question](https://www.examtopics.com/discussions/databricks/view/39042-exam-az-104-topic-5-question-28-discussion/) You have an Azure subscription that contains the resources in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038700002.png) VM1 and VM2 are deployed from the same template and host line-of-business applications. You configure the network security group (NSG) shown in the exhibit. (Click the Exhibit tab.) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0038800001.jpg) You need to prevent users of VM1 and VM2 from accessing websites on the Internet over TCP port 80. What should you do? A. Disassociate the NSG from a network interface B. Change the Port_80 inbound security rule. C. Associate the NSG to Subnet1. D. Change the DenyWebSites outbound security rule.
C. Associate the NSG to Subnet1. The outbound rule "DenyWebSites" is designed to block outbound internet traffic on port 80. However, the exhibit shows that the NSG is not currently associated with any subnets or NICs ("Associated with: 0 subnets, 0 NIC's"). To make the DenyWebSites rule effective and block VM1 and VM2 from accessing the internet on port 80, the NSG must be associated with Subnet1, where these VMs reside. Why other options are incorrect: * **A. Disassociate the NSG from a network interface:** The NSG is not associated with any network interface, so there is nothing to disassociate; removing an association would in any case stop the NSG's rules from applying, the opposite of the desired outcome. * **B. Change the Port_80 inbound security rule:** This rule controls inbound traffic to the VMs, not outbound traffic to the internet. The problem is with outbound traffic. * **D. Change the DenyWebSites outbound security rule:** The rule is already correctly configured to deny outbound traffic on port 80. The issue is that the rule isn't applied to the VMs because the NSG isn't associated with their subnet. Note: The provided solution and explanation are based solely on the information presented in the question and the discussion. There may be other valid approaches or considerations depending on a more complete understanding of the Azure environment.
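A minimal Azure CLI sketch of the association step; the resource group, virtual network, and NSG names are assumptions, since the question does not name them:

```bash
# Associate the NSG with Subnet1 so the DenyWebSites outbound rule
# applies to VM1 and VM2. RG1, VNet1, and NSG1 are placeholders.
az network vnet subnet update \
  --resource-group RG1 \
  --vnet-name VNet1 \
  --name Subnet1 \
  --network-security-group NSG1
```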
313
[View Question](https://www.examtopics.com/discussions/databricks/view/39044-exam-az-104-topic-6-question-1-discussion/) You have an Azure subscription that has a Recovery Services vault named Vault1. The subscription contains the virtual machines shown in the following table:

| VM Name | OS |
|---|---|
| VM1 | Windows Server 2019 |
| VM2 | Windows 10 |
| VM3 | Ubuntu Server 18.04 |
| VM4 | Windows Server 2016 |

You plan to schedule backups to occur every night at 23:00. Which virtual machines can you back up by using Azure Backup? A. VM1 and VM3 only B. VM1, VM2, VM3 and VM4 C. VM1 and VM2 only D. VM1 only
B. VM1, VM2, VM3, and VM4 Azure Backup supports backing up all the listed VMs. The discussion points out Azure Backup's support for 64-bit Windows Server OS (including 2016 and 2019), 64-bit Windows 10, and 64-bit Ubuntu Server (including version 18.04). Therefore, all four VMs (VM1, VM2, VM3, and VM4) meet the supported OS requirements. **Why other options are incorrect:** * **A. VM1 and VM3 only:** This is incorrect because it excludes VMs that are supported by Azure Backup (VM2 and VM4). * **C. VM1 and VM2 only:** This is incorrect because it excludes supported VMs (VM3 and VM4). * **D. VM1 only:** This is incorrect because it excludes several VMs supported by Azure Backup (VM2, VM3, and VM4). **Note:** The provided answer is based solely on the given text and discussion. There might be other factors or specific configurations that could affect Azure Backup compatibility in a real-world scenario. The discussion acknowledges Azure Backup's support for various operating systems, implying that additional unsupported OSes might exist outside the scope of this question.
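For illustration, enabling protection for one of the VMs might look like the following Azure CLI sketch; the resource group name and the use of the vault's default policy are assumptions, and the VM must be in the same region as the vault:

```bash
# Enable Azure Backup for VM3 using Vault1's default policy.
# RG1 and DefaultPolicy are illustrative placeholders.
az backup protection enable-for-vm \
  --resource-group RG1 \
  --vault-name Vault1 \
  --vm VM3 \
  --policy-name DefaultPolicy
```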
314
[View Question](https://www.examtopics.com/discussions/databricks/view/39045-exam-az-104-topic-6-question-4-discussion/) You have an Azure subscription that contains an Azure Storage account named storage1 and the users shown in the following table. [Image of User Table] *(Text from image is needed to fully answer the question but not provided in the prompt)* You plan to monitor storage1 and to configure email notifications for the signals shown in the following table. [Image of Signals Table] *(Text from image is needed to fully answer the question but not provided in the prompt)* You need to identify the minimum number of alert rules and action groups required for the planned monitoring. How many alert rules and action groups should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The question requires images to provide a full and accurate answer. The provided text describes a scenario where the number of alert rules and action groups needed depends on the specific signals and user notification requirements detailed in the missing image tables. Based on the discussion, the consensus is: * **Alert Rules:** 4 (One alert rule per signal: Ingress, Egress, Delete storage account, Restore blob ranges). * **Action Groups:** There is disagreement here. `mlantonis` and `Holydud` suggest 3 action groups, while `skydivex` suggests 2. The discrepancy likely stems from an interpretation of whether an existing action group (potentially "Group1" mentioned by `skydivex`) can be reused or if new groups need to be created for each unique user notification combination. A definitive answer needs the missing image data. **Why other options are incorrect (partially based on discussion):** Options with fewer than 4 alert rules are incorrect because the discussion indicates that each signal requires a separate alert rule. Options with more than 4 alert rules are unnecessarily complex. The number of action groups is contentious and cannot be definitively determined without the missing image data. **Note:** This answer is incomplete without the content of the missing images showing the users and signals tables. The provided discussion offers partial, potentially conflicting, answers.
315
[View Question](https://www.examtopics.com/discussions/databricks/view/39132-exam-az-104-topic-3-question-11-discussion/) HOTSPOT - You have an Azure subscription that contains an Azure Storage account. You plan to copy an on-premises virtual machine image to a container named vmimages. You need to create the container for the planned image. Which command should you run? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0016000001.png) *(Image shows a Hotspot question area, the exact content is not relevant to the answer)* ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0016100001.png) *(Image shows the suggested answer: `azcopy make 'https://mystorageaccount.blob.core.windows.net/vmimages'`)*
The correct command to create the container is `azcopy make 'https://mystorageaccount.blob.core.windows.net/vmimages'`. This command utilizes the AzCopy tool to create a container named "vmimages" within the specified Azure Storage account ("mystorageaccount"). AzCopy is a command-line utility for copying data to and from Azure Blob Storage. The `make` command is specifically designed for creating containers. Note that "mystorageaccount" should be replaced with the actual name of your storage account. There are no other options provided in the original context to evaluate. The discussion indicates a consensus that this is the correct answer.
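If you prefer the Azure CLI over AzCopy, an equivalent approach is sketched below; it assumes you are signed in with rights to the storage account:

```bash
# Create the vmimages container using the Azure CLI instead of AzCopy.
# Replace mystorageaccount with your storage account name.
az storage container create \
  --name vmimages \
  --account-name mystorageaccount \
  --auth-mode login
```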
316
[View Question](https://www.examtopics.com/discussions/databricks/view/39142-exam-az-104-topic-3-question-15-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table.

| Storage Account | Account Type | Replication Type |
|---|---|---|
| storage1 | General-purpose v2 | Locally Redundant Storage (LRS) |
| storage2 | General-purpose v2 | Locally Redundant Storage (LRS) |
| storage3 | General-purpose v1 | Geo-Redundant Storage (GRS) |
| storage4 | Blob storage | Read-access geo-redundant storage (RA-GRS) |

You need to identify which storage account can be converted to zone-redundant storage (ZRS) replication by requesting a live migration from Azure support. What should you identify? A. storage1 B. storage2 C. storage3 D. storage4
B. storage2 The correct answer is B because only storage accounts of type "General-purpose v2" using LRS or GRS replication are eligible for live migration to ZRS. Storage account "storage2" meets this criteria. Why other options are incorrect: * **A. storage1:** While storage1 is a General-purpose v2 account and uses LRS, the question specifies a *live migration* to ZRS. The provided discussion clarifies that live migration requirements might exclude certain combinations or configurations, although the precise details were not provided. * **C. storage3:** storage3 is a General-purpose v1 account. ZRS only supports General-purpose v2 accounts. * **D. storage4:** storage4 uses RA-GRS replication. The discussion mentions that a change to LRS or GRS is required before live migration is possible for RA-GRS accounts. Note: The discussion section expresses significant disagreement with the question's validity and relevance, arguing that the knowledge being tested is obscure and impractical. The answer provided here is based solely on the information given in the question and the minimal clarifying information in the discussion thread regarding live migration constraints.
317
[View Question](https://www.examtopics.com/discussions/databricks/view/39155-exam-az-104-topic-3-question-33-discussion/) You have an Azure subscription named Subscription1. In Subscription1, you create an Azure file share named share1. You create a shared access signature (SAS) named SAS1 as shown in the following exhibit: [Image of SAS details showing an IP address range restriction, likely 10.0.0.0-10.255.255.255] To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: [Image showing two scenarios: Scenario 1 uses IP address 193.77.134.1 to access share1; Scenario 2 uses "net use" command with a SAS token to access share1.]
Both Scenario 1 and Scenario 2 will have no access to share1. **Explanation:** * **Scenario 1:** The provided IP address (193.77.134.1) falls outside the IP address range specified in the SAS (likely 10.0.0.0-10.255.255.255, based on the image description). Therefore, access is denied because the request does not meet the SAS's IP restriction. * **Scenario 2:** The `net use` command utilizes the Server Message Block (SMB) protocol. SMB does not support SAS authentication. While a SAS token might be *included* in the command, the SMB protocol itself requires separate username/password authentication, which will likely fail if only the SAS is provided. **Why other options are incorrect:** The suggested answer in the original post is disputed. The discussion clearly indicates that the original "suggested answer" is incorrect, arguing that access would be denied in both scenarios. The reasoning provided is that the IP address is outside of the allowed range, and that SMB does not support SAS authentication directly. **Note:** There is a clear disagreement about the correct answer in the original discussion. The provided answer reflects the consensus among the commenters that neither scenario would have access.
318
[View Question](https://www.examtopics.com/discussions/databricks/view/39196-exam-az-104-topic-6-question-14-discussion/) You have an existing Azure subscription that contains 10 virtual machines. You need to monitor the latency between your on-premises network and the virtual machines. What should you use? A. Service Map B. Connection troubleshoot C. Network Performance Monitor D. Effective routes
C. Network Performance Monitor The discussion indicates that Network Performance Monitor is designed to monitor latency and network issues in hybrid environments, including on-premises networks and Azure VMs. This aligns perfectly with the question's requirement to monitor latency between the on-premises network and the Azure VMs. **Why other options are incorrect:** * **A. Service Map:** Service Map is not directly related to monitoring network latency between on-premises and Azure environments. * **B. Connection troubleshoot:** While part of Network Watcher, Connection troubleshoot performs one-time latency checks, not ongoing monitoring as required by the question. * **D. Effective routes:** Effective routes display routing information but don't directly monitor latency. **Note:** There is some disagreement in the discussion regarding whether Network Performance Monitor is the best or even current solution. Some comments suggest that "Connection Monitor" within Network Watcher might be a more appropriate choice, especially given potential deprecation of Network Performance Monitor. However, based solely on the provided text and the suggested answer, Network Performance Monitor is selected as the best answer.
319
[View Question](https://www.examtopics.com/discussions/databricks/view/39279-exam-az-104-topic-5-question-70-discussion/) You have an Azure subscription that contains the resources in the following table: (Image of table showing VM names and VNet association omitted as it is not directly relevant to the core question, which focuses on the DNS configuration.) In Azure, you create a private DNS zone named adatum.com. You set the registration virtual network to VNet2. The adatum.com zone is configured as shown in the following exhibit: (Image showing DNS zone configuration omitted as it is not directly relevant to the core question.) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. VM5 can register itself with the adatum.com zone. 2. VM5 can resolve DNS names in the adatum.com zone. 3. VM6 can resolve DNS names in the adatum.com zone.
No, No, Yes. VM5 is in VNet1, which is not the registration virtual network (VNet2) for the adatum.com zone. Therefore, VM5 cannot register itself (statement 1) or resolve DNS names (statement 2) within that zone. VM6, however, resides in VNet2, the registration virtual network. Since registration virtual networks also function as resolution virtual networks by default, VM6 can successfully resolve DNS names within the adatum.com zone (statement 3). The discussion shows a consensus on this answer.
320
[View Question](https://www.examtopics.com/discussions/databricks/view/39307-exam-az-104-topic-5-question-11-discussion/) You have an Azure subscription that contains the resources in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0036300001.png) To which subnets can you apply NSG1? A. the subnets on VNet1 only B. the subnets on VNet2 and VNet3 only C. the subnets on VNet2 only D. the subnets on VNet3 only E. the subnets on VNet1, VNet2, and VNet3
D. the subnets on VNet3 only **Explanation:** Network Security Groups (NSGs) in Azure can only be applied to subnets within the same Azure region. The image (though not provided here directly, its contents are referenced in the discussion) shows that NSG1 is located in the East US region, and only VNet3 is also located in East US. Therefore, NSG1 can only be applied to the subnets within VNet3. **Why other options are incorrect:** * **A, B, C, E:** These options include subnets from VNet1 and/or VNet2, which are located in different regions than NSG1 (based on the unseen image). Applying NSGs across regions is not possible. **Note:** The discussion shows unanimous agreement on answer D.
321
[View Question](https://www.examtopics.com/discussions/databricks/view/39309-exam-az-104-topic-5-question-15-discussion/) You create an Azure VM named VM1 that runs Windows Server 2019. VM1 is configured as shown in the exhibit. (Click the Exhibit tab.) You need to enable Desired State Configuration for VM1. What should you do first? A. Connect to VM1. B. Start VM1. C. Capture a snapshot of VM1. D. Configure a DNS name for VM1.
The correct answer is **B. Start VM1.** The VM's status is shown as "Stopped (Deallocated)". The DSC (Desired State Configuration) extension for Windows requires the VM to be running and able to communicate with Azure to function. Therefore, starting the VM is the necessary first step before attempting to enable DSC. Why other options are incorrect: * **A. Connect to VM1:** You cannot connect to a stopped VM. The VM must be started first. * **C. Capture a snapshot of VM1:** Taking a snapshot is irrelevant to enabling DSC. This action can be performed after the VM is started and DSC is configured. * **D. Configure a DNS name for VM1:** While a DNS name is helpful for management, it's not a prerequisite for enabling DSC. The VM only needs network connectivity to Azure, which is established once the VM is started. Note: The provided discussion shows a consensus on the correct answer.
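As a sketch, the first step (and the subsequent extension install) might look like this in the Azure CLI; RG1 is an assumed resource group name, and the DSC settings payload is omitted for brevity:

```bash
# Start the deallocated VM so it can communicate with Azure.
az vm start --resource-group RG1 --name VM1

# Then add the DSC extension; in practice you would also supply a
# --settings payload pointing at a DSC configuration package.
az vm extension set \
  --resource-group RG1 \
  --vm-name VM1 \
  --name DSC \
  --publisher Microsoft.Powershell
```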
322
[View Question](https://www.examtopics.com/discussions/databricks/view/39403-exam-az-104-topic-2-question-58-discussion/) You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0012100001.png) User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User2 to create the user accounts. Does that meet the goal? A. Yes B. No
B. No Explanation: User1, having created the `external.contoso.onmicrosoft.com` tenant, automatically becomes the owner and has the necessary permissions to create user accounts. User2, without explicit permission granted by User1, will not have access to the new tenant and therefore cannot create user accounts. The solution of instructing User2 fails to meet the goal. The discussion highlights that the creator of a tenant (User1 in this case) automatically becomes the owner, granting them sole initial administrative access. Why other options are incorrect: A. Yes - This is incorrect because User2 does not have the inherent permissions to create user accounts in a tenant they did not create. Only the tenant owner (User1 in this case) initially has those permissions. Note: The discussion shows some disagreement about the specifics of the global administrator role and access, but the core consensus remains that User2 cannot create accounts without explicit permission from User1.
323
[View Question](https://www.examtopics.com/discussions/databricks/view/39450-exam-az-104-topic-5-question-29-discussion/) You have two subscriptions named Subscription1 and Subscription2. Each subscription is associated with a different Azure AD tenant. Subscription1 contains a virtual network named VNet1. VNet1 contains an Azure virtual machine named VM1 and has an IP address space of 10.0.0.0/16. Subscription2 contains a virtual network named VNet2. VNet2 contains an Azure virtual machine named VM2 and has an IP address space of 10.10.0.0/24. You need to connect VNet1 to VNet2. What should you do first? A. Move VM1 to Subscription2. B. Move VNet1 to Subscription2. C. Modify the IP address space of VNet2. D. Provision virtual network gateways.
D. Provision virtual network gateways. To connect VNet1 and VNet2, you need to establish a connection between them. This is done using virtual network gateways. The gateways act as the transit points for traffic between the two virtual networks. Before any traffic can flow, these gateways must be created and configured. Moving VMs or modifying IP address spaces are not the first steps in establishing this connection. Why other options are incorrect: * **A. Move VM1 to Subscription2:** Moving the VM is unnecessary for connecting the VNets. The connection is at the VNet level. * **B. Move VNet1 to Subscription2:** While possible, this is not the *first* step. The VNets can be connected even if they remain in different subscriptions. Moving the VNet adds unnecessary complexity and isn't required for the connection to function. * **C. Modify the IP address space of VNet2:** The IP address spaces of VNet1 (10.0.0.0/16) and VNet2 (10.10.0.0/24) do not overlap. Overlapping IP addresses would prevent a connection, but in this case, modification is not necessary. The first step is still to provision the gateways. Note: The provided discussion mentions that there is no IP address overlap between the two VNets. This is crucial information to determine that modifying IP addresses is not required.
324
[View Question](https://www.examtopics.com/discussions/databricks/view/39579-exam-az-104-topic-10-question-3-discussion/) HOTSPOT - You need to identify the storage requirements for Contoso. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0022000001.jpg) *(Image content unavailable, but based on the discussion, the statements likely relate to using Azure Blob storage, Azure Tables, and Azure Files.)*
Box 1: **Yes**. Contoso is moving product blueprint files to Azure Blob storage, and using unmanaged standard storage (Page Blobs) for virtual machine hard disks. The provided discussion explicitly states this requirement. Box 2: **No**. Azure Tables are not required because Contoso is using a SQL database on a virtual machine. Azure Tables are a NoSQL database service, which is not needed in this scenario. Box 3: **No**. Azure Files are not needed. The discussion indicates that Azure Files are used for managed file shares accessed via NFS or SMB protocols, and archiving is not supported (although there is some debate in the discussion about the archiving capability of Azure Files). **Explanation of why other options are incorrect:** The discussion clearly explains why Azure Tables and Azure Files are unsuitable for Contoso's needs. The correct answer aligns with the stated requirements to use Blob storage for blueprint files and Page Blobs for VM hard disks. **Note:** There is a minor disagreement in the discussion regarding the archive capability of Azure Files. The suggested answer and the highly voted comment both indicate that Azure Files do not offer archiving, but a link to Azure feedback suggests a feature request to add cold and archive tiers to Azure Files. However, based on the provided information and the weight of opinion in the discussion, the answer provided reflects the prevailing understanding within the context of the question.
325
[View Question](https://www.examtopics.com/discussions/databricks/view/39690-exam-az-104-topic-6-question-3-discussion/) You have the Azure virtual machines shown in the following table:

| VM Name | Location |
|---|---|
| VM1 | East US |
| VM2 | East US |
| VM3 | West US |
| VM4 | West US |

You have a Recovery Services vault that protects VM1 and VM2. You need to protect VM3 and VM4 by using Recovery Services. What should you do first? A. Create a new Recovery Services vault B. Create a storage account C. Configure the extensions for VM3 and VM4 D. Create a new backup policy
A. Create a new Recovery Services vault. VM3 and VM4 are located in "West US," while the existing Recovery Services vault protects VMs in "East US." Recovery Services vaults are region-specific. To protect VMs in a different region, a new vault must be created within that region ("West US"). Therefore, creating a new Recovery Services vault in the West US region is the first necessary step. **Why other options are incorrect:** * **B. Create a storage account:** Azure automatically creates and manages the storage accounts needed by Recovery Services vaults. This is not the first step. * **C. Configure the extensions for VM3 and VM4:** While you'll eventually need to configure the Recovery Services agent on VM3 and VM4, you must first have a Recovery Services vault in their region to register them with. * **D. Create a new backup policy:** A backup policy defines *how* backups are performed, but a vault is required *where* to store those backups. The vault creation precedes policy creation. **Note:** The provided discussion and answer are in agreement.
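A minimal sketch of this first step with the Azure CLI; the resource group and vault names are assumptions:

```bash
# Create a new Recovery Services vault in West US for VM3 and VM4.
# RG-West and Vault2 are illustrative names.
az backup vault create \
  --resource-group RG-West \
  --name Vault2 \
  --location westus
```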
326
[View Question](https://www.examtopics.com/discussions/databricks/view/40101-exam-az-104-topic-14-question-1-discussion/) HOTSPOT - You need to recommend a solution for App1. The solution must meet the technical requirements. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **Scenario:** You have a public-facing application named App1. App1 is comprised of the following three tiers: ✑ A SQL database ✑ A web front end ✑ A processing middle tier Each tier is comprised of five virtual machines. Users access the web front end by using HTTPS only. **Technical requirements include:** ✑ Move all the virtual machines for App1 to Azure. ✑ Minimize the number of open ports between the App1 tiers.
The correct answer is to use 1 Virtual Network (VNET) and 3 Subnets. * **1 VNET:** Placing all VMs in a single VNET allows for easy communication between the tiers of App1, minimizing network complexity and reducing the need for complex routing. This aligns with the requirement to minimize open ports between tiers, as communication within the same VNET is more secure and controlled than across separate networks. * **3 Subnets:** Creating 3 subnets, one for each tier (Web, Application, Database), enables better security and isolation. Network Security Groups (NSGs) can be implemented on these subnets to further restrict network traffic, allowing only necessary communication between tiers and external access for the web tier via HTTPS only. **Why other options are incorrect:** The provided discussion does not suggest alternative options. The solution focuses on minimizing open ports by using a single VNET and separating the tiers into individual subnets for better control and security. This is the best approach given the constraints. **Note:** The provided solution is based on the information within the given context. There might be alternative valid solutions depending on specific network security policies or other constraints not present in the initial information.
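A hedged Azure CLI sketch of this layout; the VNet name, resource group, and address ranges are illustrative choices, not values from the scenario:

```bash
# One VNet with three subnets, one per App1 tier. NSGs can then be
# applied per subnet to minimize the open ports between tiers.
az network vnet create \
  --resource-group RG1 --name App1-VNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name Web --subnet-prefixes 10.1.1.0/24

az network vnet subnet create \
  --resource-group RG1 --vnet-name App1-VNet \
  --name Middle --address-prefixes 10.1.2.0/24

az network vnet subnet create \
  --resource-group RG1 --vnet-name App1-VNet \
  --name Data --address-prefixes 10.1.3.0/24
```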
327
[View Question](https://www.examtopics.com/discussions/databricks/view/40965-exam-az-104-topic-4-question-15-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an event subscription on VM1. You create an alert in Azure Monitor and specify VM1 as the source. Does this meet the goal? A. Yes B. No
B. No The proposed solution does not meet the goal. To create an alert based on the number of error events in the System event log within a specified timeframe, you need to use Azure Log Analytics. The solution incorrectly suggests using VM1 as the alert source in Azure Monitor. The correct approach involves: 1. Creating an Azure Log Analytics workspace. 2. Configuring the data settings in the Log Analytics workspace. 3. Installing the Microsoft Monitoring Agent (MMA) on VM1 to collect and send the event log data to the Log Analytics workspace. 4. Creating an alert in Azure Monitor that uses the Log Analytics workspace as the source, specifying the condition of more than two error events within an hour. The alert is triggered based on the data processed within the Log Analytics workspace, not directly from the VM's event logs. Specifying the VM as the source in Azure Monitor is insufficient for this requirement. The discussion confirms this is the correct approach and that option A is therefore incorrect.
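As a sketch of step 1, creating the workspace with the Azure CLI might look like this; the names and location are assumptions:

```bash
# Create the Log Analytics workspace that will receive VM1's System
# event log entries. RG1, Workspace1, and eastus are placeholders.
az monitor log-analytics workspace create \
  --resource-group RG1 \
  --workspace-name Workspace1 \
  --location eastus
```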
328
[View Question](https://www.examtopics.com/discussions/databricks/view/41214-exam-az-104-topic-2-question-38-discussion/) You have an Azure subscription named Subscription1 that contains an Azure virtual machine named VM1. VM1 is in a resource group named RG1. VM1 runs services that will be used to deploy resources to RG1. You need to ensure that a service running on VM1 can manage the resources in RG1 by using the identity of VM1. What should you do first? A. From the Azure portal, modify the Managed Identity settings of VM1 B. From the Azure portal, modify the Access control (IAM) settings of RG1 C. From the Azure portal, modify the Access control (IAM) settings of VM1 D. From the Azure portal, modify the Policies settings of RG1
A. From the Azure portal, modify the Managed Identity settings of VM1 Explanation: The question asks how to allow a service running on VM1 to manage resources in RG1 using VM1's identity. The most direct and efficient approach is to enable a managed identity for VM1. This provides VM1 with an automatically managed identity in Azure Active Directory, allowing it to authenticate to other Azure services without requiring credentials embedded in the service's code. Once the managed identity is enabled, you would then grant the identity appropriate permissions within the resource group RG1 (through IAM settings). Enabling the managed identity is the crucial first step. Why other options are incorrect: * **B. From the Azure portal, modify the Access control (IAM) settings of RG1:** This is a necessary step *after* enabling the managed identity on VM1. You need to grant the managed identity of VM1 the appropriate permissions within RG1 to perform the resource management actions. * **C. From the Azure portal, modify the Access control (IAM) settings of VM1:** This option is not directly relevant to the problem. Modifying VM1's IAM settings would only control access *to* the VM itself, not its ability to manage resources *in* RG1. * **D. From the Azure portal, modify the Policies settings of RG1:** This is a broader level of control than is necessary here. Policies manage resource compliance and governance at a higher level, while managed identities provide specific access permissions. Note: The provided discussion only presents one perspective (option A), and does not represent a consensus. While the explanation above explains why A is the most efficient and direct approach, alternative strategies involving IAM settings might technically achieve the same end goal after enabling the managed identity.
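A minimal sketch of the two-step flow with the Azure CLI (enable the identity first, then grant it access); the Contributor role is an illustrative choice:

```bash
# Step 1: enable a system-assigned managed identity on VM1 and capture
# the identity's object (principal) ID.
principalId=$(az vm identity assign \
  --resource-group RG1 --name VM1 \
  --query systemAssignedIdentity --output tsv)

# Step 2 (follow-up): grant the identity a role scoped to RG1 so the
# service on VM1 can manage resources there.
az role assignment create \
  --assignee "$principalId" \
  --role Contributor \
  --scope "$(az group show --name RG1 --query id --output tsv)"
```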
329
[View Question](https://www.examtopics.com/discussions/databricks/view/41443-exam-az-104-topic-4-question-52-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. Deployment Center in Azure App Service B. A Desired State Configuration (DSC) extension C. the New-AzConfigurationAssignment cmdlet D. a Microsoft Intune device configuration profile
B. A Desired State Configuration (DSC) extension. A Desired State Configuration (DSC) extension allows you to manage the configuration of your virtual machines in a virtual machine scale set. It ensures that the desired state (in this case, having NGINX installed) is maintained across all VMs. DSC allows for consistent and automated configuration management. Why other options are incorrect: * **A. Deployment Center in Azure App Service:** This is used for deploying web applications, not for configuring the base operating system of VMs. * **C. the New-AzConfigurationAssignment cmdlet:** This cmdlet is related to Azure Automation DSC, but it's not the direct method for ensuring NGINX installation on all VMs within a scale set during deployment. The DSC extension is integrated within the deployment process itself. * **D. a Microsoft Intune device configuration profile:** Intune is primarily used for managing devices in a corporate environment, often for mobile devices and laptops. It's not the appropriate tool for configuring server VMs during deployment via ARM templates. Note: The discussion highlights that another acceptable answer could be "Azure Custom Script Extension". This reflects some ambiguity or potential variations in the exam question itself. However, based on the provided information, a DSC extension is a more commonly cited and arguably more appropriate solution for this type of configuration management task.
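For illustration only, attaching the DSC extension to a scale set might look like the sketch below; the scale set name, resource group, and the settings file (a DSC configuration that installs NGINX) are all assumptions:

```bash
# Attach the DSC extension to the scale set; dsc-settings.json is a
# hypothetical file referencing a DSC configuration that installs NGINX.
az vmss extension set \
  --resource-group RG1 \
  --vmss-name VMSS1 \
  --name DSC \
  --publisher Microsoft.Powershell \
  --settings @dsc-settings.json
```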
330
[View Question](https://www.examtopics.com/discussions/databricks/view/41444-exam-az-104-topic-7-question-2-discussion/) You need to meet the user requirement for Admin1. What should you do? A. From the Azure Active Directory blade, modify the Groups B. From the Azure Active Directory blade, modify the Properties C. From the Subscriptions blade, select the subscription, and then modify the Access control (IAM) settings D. From the Subscriptions blade, select the subscription, and then modify the Properties
D. From the Subscriptions blade, select the subscription, and then modify the Properties **Explanation:** The user requirement is to designate "Admin1" as a service admin for the Azure subscription. Service administrator roles are managed at the subscription level, not within Azure Active Directory (Azure AD) groups or individual user properties. Therefore, accessing the subscription's properties blade is the correct approach to modify the service administrator assignment. Options A, B, and C are incorrect because they address different aspects of Azure management that are not relevant to assigning a service administrator at the subscription level. **Why other options are incorrect:** * **A. From the Azure Active Directory blade, modify the Groups:** Azure AD groups manage user access to resources, but service administrator roles are defined at the subscription level, not within groups. * **B. From the Azure Active Directory blade, modify the Properties:** This would modify the properties of the user account itself, not the service administrator role at the subscription level. * **C. From the Subscriptions blade, select the subscription, and then modify the Access control (IAM) settings:** While IAM settings manage access control, service administrator assignment is typically handled directly via subscription properties, not through granular IAM roles. Although IAM could be indirectly used, it is not the most direct and efficient method. There is no conflicting information or disagreement presented in the provided discussion.
331
[View Question](https://www.examtopics.com/discussions/databricks/view/41639-exam-az-104-topic-14-question-2-discussion/) You are planning the move of App1 to Azure. You create a network security group (NSG). You need to recommend a solution to provide users with access to App1. What should you recommend? A. Create an incoming security rule for port 443 from the Internet. Associate the NSG to the subnet that contains the web servers. B. Create an outgoing security rule for port 443 from the Internet. Associate the NSG to the subnet that contains the web servers. C. Create an incoming security rule for port 443 from the Internet. Associate the NSG to all the subnets. D. Create an outgoing security rule for port 443 from the Internet. Associate the NSG to all the subnets.
A. Create an incoming security rule for port 443 from the Internet. Associate the NSG to the subnet that contains the web servers. Explanation: To allow users to access App1 (presumably a web application using HTTPS), you need to allow incoming traffic on port 443 (the HTTPS port). Option A correctly specifies an incoming security rule for port 443 from the internet, targeting only the subnet containing the web servers. This is the most secure approach because it limits access to only the necessary resources. Why other options are incorrect: * **B:** Outgoing rules control traffic leaving the subnet, not traffic entering. Users need *incoming* access to App1. * **C:** Applying the NSG to all subnets is unnecessarily broad and poses a security risk. It opens port 443 to all subnets, not just those containing the web servers. * **D:** This option is incorrect for the same reasons as B and C. It's both an outgoing rule and overly broad in its scope. Note: The discussion includes several comments from users expressing varying levels of difficulty with the question and the exam in general, reflecting some disagreement on the question's clarity or suitability. However, the consensus points to option A as the correct answer.
332
[View Question](https://www.examtopics.com/discussions/databricks/view/41752-exam-az-104-topic-6-question-18-discussion/) You have an Azure virtual machine named VM1. Azure collects events from VM1. You are creating an alert rule in Azure Monitor to notify an administrator when an error is logged in the System event log of VM1. Which target resource should you monitor in the alert rule? A. virtual machine extension B. virtual machine C. metric alert D. Azure Log Analytics workspace
D. Azure Log Analytics workspace The correct answer is D because Azure Monitor collects logs from the virtual machine and sends them to a Log Analytics workspace. The alert rule is based on these logs, therefore the Log Analytics workspace is the target resource monitored for the alert. The alert is triggered from the log data within the workspace. Why other options are incorrect: * **A. virtual machine extension:** While the Azure Monitor agent (which is *not* a virtual machine extension, but often installed as one) is needed to collect the logs from the VM, the alert rule itself isn't triggered by monitoring the extension itself, but rather the data collected *by* the agent and stored in the workspace. * **B. virtual machine:** The VM is the source of the logs, not the target for the alert rule. The alert is triggered by the data *in* the workspace, not directly by the state of the VM. * **C. metric alert:** Metric alerts are for monitoring performance counters; this question is about monitoring event logs, which are not metrics. Note: The discussion shows some disagreement regarding the precise steps involved in setting up the monitoring (e.g., the role of the Azure Monitor agent and Data Collection Rules). However, the consensus points to the Log Analytics Workspace as the resource that should be monitored to trigger the alert rule based on the collected event logs.
333
[View Question](https://www.examtopics.com/discussions/databricks/view/43815-exam-az-104-topic-2-question-40-discussion/) You have an Azure DNS zone named adatum.com. You need to delegate a subdomain named research.adatum.com to a different DNS server in Azure. What should you do? A. Create an NS record named research in the adatum.com zone. B. Create a PTR record named research in the adatum.com zone. C. Modify the SOA record of adatum.com. D. Create an A record named *.research in the adatum.com zone.
The correct answer is **A. Create an NS record named research in the adatum.com zone.** To delegate a subdomain (research.adatum.com) to a different DNS server, you need to create a Name Server (NS) record within the parent zone (adatum.com). The NS record specifies the authoritative name servers for the subdomain. This tells other DNS servers where to look for information about the research.adatum.com subdomain. Why other options are incorrect: * **B. Create a PTR record named research in the adatum.com zone.** PTR records are used for reverse DNS lookups (IP address to hostname), not for delegating subdomains. * **C. Modify the SOA record of adatum.com.** The Start of Authority (SOA) record contains information about the zone itself (e.g., primary name server, serial number), not about delegating subdomains. Modifying this would affect the entire adatum.com zone, not just the research subdomain. * **D. Create an A record named *.research in the adatum.com zone.** A records map hostnames to IP addresses. This wouldn't delegate the subdomain to a different DNS server. The `*.research` wildcard would apply to all subdomains under `research.adatum.com`, but still wouldn't handle delegation. Note: While the discussion shows overwhelming agreement on answer A, there's a minor point raised about whether the NS record needs to be *created* or *edited*, implying it might already exist. The core concept of using an NS record for delegation remains correct.
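A sketch of the delegation step with the Azure CLI; the resource group and the name-server value are placeholders, and in practice you add one NS record for each of the child zone's assigned name servers:

```bash
# Add an NS record named 'research' in the parent zone, pointing at a
# name server of the zone that will host research.adatum.com.
az network dns record-set ns add-record \
  --resource-group RG1 \
  --zone-name adatum.com \
  --record-set-name research \
  --nsdname ns1-01.azure-dns.com
```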
334
[View Question](https://www.examtopics.com/discussions/databricks/view/43929-exam-az-104-topic-5-question-61-discussion/) You have an Azure subscription. You plan to deploy an Azure Kubernetes Service (AKS) cluster to support an app named App1. On-premises clients connect to App1 by using the IP address of the pod. For the AKS cluster, you need to choose a network type that will support App1. What should you choose? A. kubenet B. Azure Container Networking Interface (CNI) C. Hybrid Connection endpoints D. Azure Private Link
The correct answer is **B. Azure Container Networking Interface (CNI)**. Azure CNI provides pods with their own IP addresses, allowing on-premises clients to connect directly using the pod's IP address. This is because CNI gives each pod an IP address from a subnet in the virtual network that your AKS cluster is in. Other options are incorrect because: * **A. kubenet:** While kubenet is a networking solution for AKS, it doesn't directly provide pods with individually routable IP addresses in the same way CNI does. Kubenet uses the virtual network's IP address space less efficiently than CNI. Therefore, it would not easily support connecting directly to pods via their IP addresses. * **C. Hybrid Connection endpoints:** Hybrid Connections are used for connecting on-premises applications to Azure services, not for directly managing pod networking within an AKS cluster. * **D. Azure Private Link:** Azure Private Link provides private connectivity to Azure services, but is not relevant to connecting on-premises clients directly to pods within an AKS cluster. Note: While the overwhelming consensus points to B as the correct answer, one commenter mentions that directly connecting to pods by IP address is generally considered bad practice in Kubernetes. Best practices advocate for using Kubernetes Services to abstract away the underlying pod IPs. This should be considered in a real-world deployment scenario, even though B remains the technically correct answer to the provided question.
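As a hedged sketch, creating a cluster with Azure CNI might look like this; the resource group, cluster name, and subnet resource ID are placeholders:

```bash
# Create an AKS cluster using Azure CNI so each pod receives a routable
# IP address from the VNet subnet identified by --vnet-subnet-id.
az aks create \
  --resource-group RG1 \
  --name AKS1 \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNet1/subnets/AKS" \
  --generate-ssh-keys
```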
335
[View Question](https://www.examtopics.com/discussions/databricks/view/51233-exam-az-104-topic-4-question-54-discussion/) You have the App Service plan shown in the following exhibit. (Image depicting App Service Plan details omitted as not directly included in the text). The scale-in settings for the App Service plan are configured as shown in the following exhibit. (Image depicting Scale-in settings omitted as not directly included in the text). The scale-out rule is configured with the same duration and cool-down time as the scale-in rule. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. **Hot Area:** (Image depicting a Hot Area for filling in answers, omitted as not directly included in the text) **Box 1:** [Dropdown Menu] **Box 2:** [Dropdown Menu]
The provided text gives conflicting answers for Box 1 and Box 2. The "Suggested Answer" image is not included in the provided text, preventing a definitive answer. mlantonis suggests 2 for Box 1 and 4 for Box 2, based on their interpretation of the unspecified images. Their reasoning involves calculating scale-out based on CPU usage exceeding thresholds (70% for 1 hour, 90% for 5 minutes) and scale-in based on CPU usage dropping below 30% considering the 5-minute cool-down period. However, without the actual images depicting the App Service Plan and scale-in/out settings, it's impossible to verify the accuracy of this interpretation. The lack of image data prevents a fully accurate response. **Why other options are incorrect (cannot be definitively determined):** Because the images containing crucial information are missing, it is impossible to evaluate the correctness of other potential answers. The provided discussion contains only one user's interpretation, and this interpretation lacks the necessary visual context to be confirmed or refuted.
336
[View Question](https://www.examtopics.com/discussions/databricks/view/51238-exam-az-104-topic-5-question-56-discussion/) You manage two Azure subscriptions named Subscription1 and Subscription2. Subscription1 has the following virtual networks: VNET1 (10.10.10.0/24), VNET2 (172.16.0.0/16). Subscription2 contains the following virtual network: VNETA (10.10.128.0/17). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. 1. You can create a VNet-to-VNet VPN connection between VNET1 and VNET2. 2. You can create VNet peering between VNET1 and VNET2. 3. You can create VNet peering between VNET1 and VNETA.
1. **No.** A VNet-to-VNet VPN connection requires a gateway subnet in each virtual network. VNET1 (10.10.10.0/24) lacks sufficient address space to accommodate a gateway subnet. 2. **Yes.** VNet peering only requires that the VNets do not have overlapping IP address spaces. VNET1 (10.10.10.0/24) and VNET2 (172.16.0.0/16) do not overlap. 3. **Yes.** VNet peering only requires that the VNets do not have overlapping IP address spaces. VNET1 (10.10.10.0/24) and VNETA (10.10.128.0/17) do not overlap. **Why other options are incorrect:** The discussion highlights the requirement of a gateway subnet for VNet-to-VNet VPN connections. The size of VNET1's address space is insufficient to allocate a gateway subnet in addition to its existing subnet. Therefore, a VNet-to-VNet VPN connection between VNET1 and VNET2 is not possible. The other two options are correct because VNet Peering's only constraint is the absence of IP address space overlap. **Note:** While the suggested answer is consistent with the explanation provided by mlantonis, efd324e's comment also correctly points out the gateway subnet requirement for site-to-site VPN connections. The question focuses on VNet-to-VNet VPN which is slightly different from site-to-site VPN but the underlying requirement for a gateway subnet remains consistent.
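A sketch of the peering command for statement 3; cross-subscription peering references the remote VNet by its full resource ID, and the IDs and names below are placeholders (cross-tenant peering additionally requires permissions in both tenants):

```bash
# Peer VNET1 (Subscription1) with VNETA (Subscription2). A matching
# reverse peering from VNETA back to VNET1 is also required.
az network vnet peering create \
  --resource-group RG1 \
  --name VNET1-to-VNETA \
  --vnet-name VNET1 \
  --remote-vnet "/subscriptions/<subscription2-id>/resourceGroups/RG2/providers/Microsoft.Network/virtualNetworks/VNETA" \
  --allow-vnet-access
```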
337
[View Question](https://www.examtopics.com/discussions/databricks/view/51251-exam-az-104-topic-2-question-39-discussion/) You have an Azure subscription that contains a resource group named TestRG. You use TestRG to validate an Azure deployment. TestRG contains the following resources: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0009100001.png) You need to delete TestRG. What should you do first? A. Modify the backup configurations of VM1 and modify the resource lock type of VNET1 B. Remove the resource lock from VNET1 and delete all data in Vault1 C. Turn off VM1 and remove the resource lock from VNET1 D. Turn off VM1 and delete all data in Vault1
B. Remove the resource lock from VNET1 and delete all data in Vault1 **Explanation:** The image shows that VNET1 has a resource lock and Vault1 contains data. Before deleting the resource group TestRG, you must remove any locks preventing deletion and delete any data within the resources to be deleted. Attempting to delete the resource group without removing the lock on VNET1 will fail. Deleting data within Vault1 is necessary before deletion as well. Therefore, option B correctly identifies the necessary pre-delete actions. **Why other options are incorrect:** * **A:** Modifying backup configurations doesn't address the resource lock on VNET1, which prevents deletion. The question asks what to do *first*, and removing the lock is the priority. * **C:** Turning off VM1 is unnecessary before deleting the resource group. While good practice for some scenarios, this action doesn't address the lock on VNET1 or the data in Vault1. This is not the first step. * **D:** Similar to C, deleting data in Vault1 is important, but removing the lock on VNET1 must happen first to enable the deletion of the resource group. **Note:** The discussion indicates a consensus on the correct answer.
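A minimal sketch of removing the lock and then deleting the resource group; the lock name is an assumption, and clearing Vault1's backup data (stopping protection and deleting backup items) must happen before the group delete succeeds:

```bash
# Remove the resource lock from VNET1 (lock name is a placeholder).
az lock delete \
  --name VNET1-lock \
  --resource-group TestRG \
  --resource VNET1 \
  --resource-type Microsoft.Network/virtualNetworks

# After Vault1's data has been deleted, remove the resource group.
az group delete --name TestRG --yes
```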
338
[View Question](https://www.examtopics.com/discussions/databricks/view/51270-exam-az-104-topic-2-question-37-discussion/) You have an Azure Active Directory (Azure AD) tenant that contains three global administrators named Admin1, Admin2, and Admin3. The tenant is associated with an Azure subscription. Access control for the subscription is configured as shown in the Access control exhibit. You sign in to the Azure portal as Admin1 and configure the tenant as shown in the Tenant exhibit. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. **Image 1 (Access Control Exhibit):** *(Image not provided, context from discussion needed)* **Image 2 (Tenant Exhibit):** *(Image not provided, context from discussion needed)* **Image 3 (Hot Area):** *(Image not provided, but based on discussion shows a table with two statements to evaluate as Yes/No.)* Statement 1: Admin1 is a User Access Administrator for the subscription. Statement 2: Admin3 is an Owner for the subscription.
Statement 1: **Yes**. Admin1, as a Global Administrator in Azure AD, has elevated access and implicitly inherits the User Access Administrator role at the subscription level in Azure RBAC. This allows them to manage access to the subscription. Statement 2: **Yes**. The discussion explicitly states that Admin3 is the Owner of the subscription. This is a valid Azure RBAC assignment granting full control over the subscription. **Explanation:** Azure AD roles (like Global Administrator) and Azure RBAC roles (like Owner, User Access Administrator) are independent. A Global Administrator in Azure AD doesn't automatically get all Azure RBAC roles. However, a Global Administrator can *elevate* their access to manage Azure RBAC roles, effectively becoming a User Access Administrator. Therefore, Admin1's Global Administrator role allows them to manage access (including assigning Owner roles) at the subscription level, thus making them a User Access Administrator. Admin3's Owner role is explicitly stated and needs no further explanation. **Why other options are incorrect (not applicable in this case):** There are no other options provided besides "Yes" or "No" for each statement. The question is designed to assess understanding of the relationship between Azure AD roles and Azure RBAC roles and how elevated access functions. The discussion doesn't suggest alternative interpretations. **Note:** The provided context lacks the actual images displaying the Access Control and Tenant exhibits. Therefore, the answer relies entirely on the information within the discussion. If the images were provided they might show conflicting or additional information.
339
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51272-exam-az-104-topic-3-question-32-discussion/) You have an Azure subscription that contains the storage accounts shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019600001.png) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019700001.png) **
** * **Box 1: contoso104 only** Premium file shares are only available in FileStorage accounts. The image shows that only contoso104 is a FileStorage account. * **Box 2: contoso101 and contoso103 only** Object storage data tiering (hot, cool, archive) is supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts do not support tiering. The image shows contoso101 and contoso103 are GPv2 accounts. Contoso102 is a GPv1 account, and contoso104 is a FileStorage account, neither of which support this feature. **Why other options are incorrect:** Any option that doesn't select contoso104 for Box 1 is incorrect because only FileStorage accounts support premium file shares. Similarly, any option that includes contoso102 or contoso104 in Box 2 is incorrect because they don't support object storage data tiering. There is no evidence of disagreement or conflicting opinions within the provided content.
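To see the distinction in practice, the account kind and a blob's access tier can be inspected and changed with the Azure CLI. A minimal sketch; the container and blob names are hypothetical:

```
# Show each storage account's kind (StorageV2, Storage, FileStorage, ...)
az storage account list --query "[].{name:name, kind:kind, sku:sku.name}" --output table

# Change a blob's tier (supported only on Blob Storage and GPv2 accounts)
az storage blob set-tier --account-name contoso101 --container-name data \
  --name file.dat --tier Cool --auth-mode login
```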
340
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51274-exam-az-104-topic-2-question-41-discussion/) DRAG DROP - You have an Azure Active Directory (Azure AD) tenant that has the contoso.onmicrosoft.com domain name. You have a domain name of contoso.com registered at a third-party registrar. You need to ensure that you can create Azure AD users that have names containing a suffix of @contoso.com. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0009300001.jpg) *(Image contains a drag-and-drop interface with options like "Add the custom domain name to your directory", "Add a DNS entry for the domain name at the domain name registrar", and "Verify the custom domain name in Azure AD".)* **
** The correct sequence of actions is: 1. **Add the custom domain name to your directory:** This registers `contoso.com` as an (as yet unverified) custom domain in your Azure AD tenant and generates the DNS record values you will need for verification. 2. **Add a DNS entry for the domain name at the domain name registrar:** This involves adding the verification record (typically a TXT or MX record) to the DNS settings at your third-party registrar (where `contoso.com` is registered). This record proves to Azure AD that you control the `contoso.com` domain. 3. **Verify the custom domain name in Azure AD:** After the DNS record has propagated, Azure AD checks through the Azure portal that the record exists and is correctly configured, confirming domain ownership. Once the domain is verified, you can create users with the `@contoso.com` suffix. **Why other options are incorrect (implied by the process):** Performing the steps in any other order would fail because each step depends on the previous one. You cannot verify the domain before adding it and creating the corresponding DNS record, and verification requires the correct DNS record to already be in place. **Note:** While the discussion shows overwhelming agreement on the answer, there is some variation in wording. The core steps remain consistent.
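If the DNS zone for `contoso.com` were hosted in Azure DNS rather than at the third-party registrar, step 2 could be performed with the Azure CLI as sketched below; the `MS=ms12345678` value is a hypothetical placeholder for the verification code Azure AD generates, and `RG1` is a placeholder resource group:

```
# Add the TXT record that Azure AD uses to verify ownership of contoso.com
az network dns record-set txt add-record --resource-group RG1 \
  --zone-name contoso.com --record-set-name @ --value "MS=ms12345678"
```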
341
[View Question](https://www.examtopics.com/discussions/databricks/view/51280-exam-az-104-topic-4-question-39-discussion/) **** HOTSPOT - You have an Azure subscription named Subscription1. In Subscription1, you create an alert rule named Alert1. The Alert1 action group is configured as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0028100001.jpg) Alert1 alert criteria triggered every minute. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0028200001.png) **
** * **Box 1: 60** One alert per minute triggers one email per minute, and emails are not rate-limited until 100 have been sent within an hour. With a single alert rule (Alert1) firing every minute, 60 emails are sent in an hour, which stays under the limit. * **Box 2: 12** No more than 1 SMS can be sent every 5 minutes, which equals 12 per hour (60 minutes ÷ 5 minutes per SMS). **Explanation:** The question tests understanding of Azure alert action group rate limiting. The stated limits are 100 emails per hour and 1 SMS every 5 minutes, so one trigger per minute yields 60 emails and 12 SMS messages per hour. **Why other options are incorrect:** The discussion mentions a possible second alert rule ("Alert2") that does not appear in the original question, and a suggested answer of 0 for Box 2 that contradicts the stated rate limits. Given the question as written, 12 is the only plausible value for Box 2; the disagreement in the discussion stems from an assumed ambiguity that is not present in the question itself.
342
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51301-exam-az-104-topic-3-question-1-discussion/) You have an Azure subscription named Subscription1 that contains the storage accounts shown in the following table: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0014000001.png) You plan to use the Azure Import/Export service to export data from Subscription1. You need to identify which storage account can be used to export the data. What should you identify? A. storage1 B. storage2 C. storage3 D. storage4 **
** D. storage4 The Azure Import/Export service, for export operations, only supports Azure Blob storage. Based on the provided image (which is missing from this text transcription, but implied by the question), only storage4 is a Blob storage account. Therefore, only storage4 can be used for data export with the Azure Import/Export service. **Why other options are incorrect:** * **A, B, and C:** The question and discussion imply these storage accounts are not of the Blob storage type required for export using the Azure Import/Export service. The exact types are not explicitly provided in the image or text, but the correct answer relies on only Blob Storage being exportable. **Note:** The discussion provides slightly differing information regarding supported storage types. While both sources agree that Blob storage is required for export, there is a slight variance in the detail of Blob storage types. The answer provided takes the more restrictive interpretation to ensure correctness based on the available information.
343
[View Question](https://www.examtopics.com/discussions/databricks/view/51306-exam-az-104-topic-2-question-30-discussion/) You have an Azure subscription linked to an Azure Active Directory tenant. The tenant includes a user account named User1. You need to ensure that User1 can assign a policy to the tenant root management group. What should you do? A. Assign the Owner role for the Azure Subscription to User1, and then modify the default conditional access policies. B. Assign the Owner role for the Azure subscription to User1, and then instruct User1 to configure access management for Azure resources. C. Assign the Global administrator role to User1, and then instruct User1 to configure access management for Azure resources. D. Create a new management group and delegate User1 as the owner of the new management group.
C. Assign the Global administrator role to User1, and then instruct User1 to configure access management for Azure resources. The Global administrator role has the highest level of permissions in Azure Active Directory. This role allows User1 to manage all aspects of the Azure tenant, including assigning policies to the root management group. Options A, B, and D provide insufficient permissions. Assigning the Owner role to a subscription (A and B) only grants control over that *specific* subscription, not the entire tenant's root management group. Creating a new management group (D) would not grant User1 control over the *root* management group. Why other options are incorrect: * **A:** Modifying conditional access policies doesn't grant the permission to assign policies to the root management group. Conditional Access policies control access to resources based on conditions, not the ability to assign management group policies. * **B:** Assigning the Owner role at the subscription level is insufficient for managing policies at the root management group level. * **D:** Creating a new management group only gives User1 ownership of that *new* group, not the root. Note: The provided discussion only shows one user's opinion confirming answer C. There is no opposing viewpoint presented.
344
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51316-exam-az-104-topic-5-question-46-discussion/) You have Azure virtual machines that run Windows Server 2019 and are configured as shown in the following table. | VM Name | IP Address | DNS Servers | |---|---|---| | VM1 | 10.0.0.10 | 10.0.0.1, 168.63.129.16 | | VM2 | 10.0.0.20 | 10.0.0.1, 168.63.129.16 | You create a public Azure DNS zone named adatum.com and a private Azure DNS zone named contoso.com. For contoso.com, you create a virtual network link named link1 as shown in the exhibit. You discover that VM1 can resolve names in contoso.com but cannot resolve names in adatum.com. VM1 can resolve other hosts on the Internet. You need to ensure that VM1 can resolve host names in adatum.com. What should you do? A. Update the DNS suffix on VM1 to be adatum.com B. Configure the name servers for adatum.com at the domain registrar C. Create an SRV record in the contoso.com zone D. Modify the Access control (IAM) settings for link1 **
** B. Configure the name servers for adatum.com at the domain registrar. **Explanation:** The question states that `adatum.com` is a *public* Azure DNS zone. For a public DNS zone to be resolvable from the internet, the authoritative name servers for that zone must be registered with the top-level domain (TLD) registrar (like GoDaddy, Namecheap, etc.). This allows the root DNS servers and other recursive resolvers to know where to find the authoritative DNS records for `adatum.com`. Azure DNS only manages the DNS records; it doesn't handle domain name registration. **Why other options are incorrect:** * **A. Update the DNS suffix on VM1 to be adatum.com:** This would only affect how VM1 appends suffixes to unqualified hostnames. It wouldn't solve the problem of the DNS servers not knowing where to find the `adatum.com` zone's records. * **C. Create an SRV record in the contoso.com zone:** SRV records are used for service location, not for resolving hostnames in a different zone. * **D. Modify the Access control (IAM) settings for link1:** This only relates to the private DNS zone `contoso.com` and has no bearing on the public zone `adatum.com`. **Note:** There is some discussion in the original forum regarding whether the question clearly indicates `adatum.com` as a *public* zone. While the question states it is public, the ambiguity is acknowledged here. The provided answer is based on the assumption that `adatum.com` is indeed a public zone.
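To apply the fix, you first need the name servers that Azure DNS assigned to the zone; these are the values to enter at the registrar. A minimal sketch, assuming the zone lives in a resource group named `RG1`:

```
# List the Azure DNS name servers assigned to the public zone
az network dns zone show --resource-group RG1 --name adatum.com \
  --query nameServers --output json
# Enter the returned ns1-xx.azure-dns.com (etc.) values at the domain registrar
```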
345
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51321-exam-az-104-topic-5-question-58-discussion/) You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. ``` (Image showing network security rules with the following details visible): Name Priority Direction Protocol Port Source Destination Allow_131.107.100.50 100 Inbound TCP 443 131.107.100.50 * BlockAllOther443 200 Inbound TCP 443 * * DefaultAllowRDP 65000 Inbound TCP 3389 * * DefaultAllowHTTPS 65001 Inbound TCP 443 * * ``` You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443. Solution: You delete the BlockAllOther443 inbound security rule. Does this meet the goal? A. Yes B. No **
** B. No Deleting the `BlockAllOther443` rule does not guarantee successful connections. While this rule *appears* to block connections from all sources except 131.107.100.50, the `Allow_131.107.100.50` rule has higher priority (100 vs 200). Therefore, traffic from 131.107.100.50 is already allowed. The failure is likely due to other factors, such as the VMs being offline, problems with the Load Balancer health probes, or issues on the application itself. The discussion highlights a disagreement on whether deleting `BlockAllOther443` would resolve the issue. Some argue it might fix health probe issues impacting Load Balancer functionality (making A potentially correct). **Why other options are incorrect:** A. While some users in the discussion suggested that deleting `BlockAllOther443` might resolve the issue by enabling health probes, this is not guaranteed by the provided information. The primary issue is likely not the blocking rule itself but rather something else entirely. The fact that the higher priority rule *allows* the connection already makes this option highly improbable. **Note:** The discussion shows conflicting opinions on the correct answer. While the highest voted answer is B, there is a dissenting viewpoint suggesting that the health probes of the load balancer might be affected by the `BlockAllOther443` rule. Therefore, the definitive answer may depend on additional context that is not fully provided.
346
[View Question](https://www.examtopics.com/discussions/databricks/view/51333-exam-az-104-topic-3-question-29-discussion/) DRAG DROP - You have an Azure subscription that contains an Azure file share. You have an on-premises server named Server1 that runs Windows Server 2016. You plan to set up Azure File Sync between Server1 and the Azure file share. You need to prepare the subscription for the planned Azure File Sync. Which two actions should you perform in the Azure subscription? To answer, drag the appropriate actions to the correct targets. Each action may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019100001.png) Suggested Answer: ![Image](https://img.examtopics.com/az-104/image906.png)
The two actions that should be performed in the Azure subscription to prepare for Azure File Sync are: 1. **Create a Storage Sync Service** and 2. **Create a Sync Group**. These are necessary Azure resources for Azure File Sync to function. A Storage Sync Service acts as a control plane managing sync groups, and a Sync Group defines the relationship between the on-premises server and the Azure file share, enabling synchronization. The suggested answer image (which shows these two options) supports this. The discussion overwhelmingly agrees on this solution. Other options are not explicitly stated in the question's image but were suggested and then ruled out by the consensus in the discussion. The question specifically asks for actions *within* the Azure subscription, excluding actions on the on-premises server (Server1).
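As a sketch of those two actions with the Azure CLI (the commands live in the `storagesync` extension, and the service, group, and resource group names are hypothetical):

```
# The Azure File Sync commands require the storagesync extension
az extension add --name storagesync

# Action 1: create the Storage Sync Service (the sync control plane)
az storagesync create --resource-group RG1 --name SyncService1 --location eastus

# Action 2: create the sync group inside that service
az storagesync sync-group create --resource-group RG1 \
  --storage-sync-service SyncService1 --name Sync1
```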
347
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51335-exam-az-104-topic-5-question-32-discussion/) You have an Azure subscription that contains three virtual networks named VNET1, VNET2, and VNET3. Peering for VNET1, VNET2, and VNET3 is configured as shown in the following exhibits: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039700001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039700002.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039800001.png) How can packets be routed between the virtual networks? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0039800002.png) **
** Box 1: VNET2 and VNET3 Box 2: VNET1 Explanation: Because gateway transit is disabled, communication is limited to direct peering connections. VNET1 is peered with both VNET2 and VNET3. Therefore, VNET2 and VNET3 can communicate with each other *only* through their peering connection with VNET1. VNET1 can communicate with VNET2 and VNET3. **Why other options are incorrect:** The question is specifically asking about packet routing with gateway transit disabled. Any answer suggesting communication paths that rely on gateway transit would be incorrect based on the given configuration. **Note:** The discussion shows consensus on the answer.
348
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51339-exam-az-104-topic-3-question-30-discussion/) You have an Azure subscription that contains the file shares shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019200001.png) You have the on-premises file shares shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019200002.png) You create an Azure file sync group named Sync1 and perform the following actions: ✑ Add share1 as the cloud endpoint for Sync1. ✑ Add data1 as a server endpoint for Sync1. ✑ Register Server1 and Server2 to Sync1. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0019300001.jpg) **
** Box 1: No. A sync group requires one cloud endpoint (an Azure file share) and one or more server endpoints. While share1 is correctly added as a cloud endpoint, the statement doesn't inherently specify whether the necessary server endpoints exist. Box 2: Yes. Data2 resides on Server2, which is registered with Sync1. Therefore, data2 can be synchronized. Box 3: No. Data3 is located on Server3, which is not registered with Sync1. Therefore, it cannot be synchronized. **Why other options are incorrect:** The provided suggested answer and the discussion clearly indicate why each statement is true or false based on the requirements of Azure File Sync. The consensus in the discussion supports the answers. Note that there was some initial confusion around the order of adding server endpoints and registering servers; however, the final consensus is that the registration of servers is the critical aspect for synchronization.
349
[View Question](https://www.examtopics.com/discussions/databricks/view/51348-exam-az-104-topic-6-question-5-discussion/) You have an Azure subscription that contains the identities shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0053500001.png) User1, Principal1, and Group1 are assigned the Monitoring Reader role. An action group named AG1 has the Email Azure Resource Manager Role notification type and is configured to email the Monitoring Reader role. You create an alert rule named Alert1 that uses AG1. You need to identify who will receive an email notification when Alert1 is triggered. Who should you identify? A. User1 and Principal1 only B. User1, User2, Principal1, and Principal2 C. User1 only D. User1 and User2 only
The correct answer is **C. User1 only**. Explanation: The question specifies that the action group is configured to email the *Monitoring Reader role* using the "Email Azure Resource Manager Role" notification type. While Group1 is assigned the Monitoring Reader role, and User2 is a member of Group1, the email notification is sent only to the *individual users and not groups* directly assigned the role. Therefore, only User1 (directly assigned the Monitoring Reader role) will receive the email. Service principals (like Principal1 and Principal2) also do not receive emails via this method. Why other options are incorrect: * **A. User1 and Principal1 only:** Incorrect because service principals (Principal1) do not receive emails through this type of Azure Resource Manager role-based notification. * **B. User1, User2, Principal1, and Principal2:** Incorrect because it includes both group members and service principals, neither of which receive emails in this scenario. * **D. User1 and User2 only:** Incorrect because User2 is a member of Group1 and does not directly possess the Monitoring Reader role; therefore, they will not receive the email notification. Note: There is disagreement in the provided discussion regarding the correct answer. Some argue that User2 should receive the email due to group membership inheritance. However, based on the provided Microsoft documentation links in the discussion and the general behavior of Azure Resource Manager role-based email notifications, the most accurate answer is that only User1 will receive the email.
350
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51363-exam-az-104-topic-5-question-67-discussion/) You have an Azure subscription that contains two virtual machines as shown in the following table. | VM Name | IP Address | |---|---| | VM1 | 10.0.0.4 | | VM2 | 10.0.0.5 | You perform a reverse DNS lookup for 10.0.0.4 from VM2. Which FQDN will be returned? A. vm1.core.windows.net B. vm1.azure.com C. vm1.westeurope.cloudapp.azure.com D. vm1.internal.cloudapp.net **
** D. vm1.internal.cloudapp.net The discussion and provided ping test results clearly indicate that a reverse DNS lookup of the IP address 10.0.0.4 resolves to `vm1.internal.cloudapp.net`. The ping command from VM2 successfully identifies the hostname as `vmtest1.internal.cloudapp.net` (which is consistent with the naming convention; the test uses `vmtest1` while the question uses `vm1`). This strongly suggests that Azure's internal DNS system maps the private IP address to this FQDN. **Why other options are incorrect:** * **A. vm1.core.windows.net:** This is a common naming convention for Windows VMs, but it's not the FQDN returned by the internal DNS in this scenario. * **B. vm1.azure.com:** This is a general Azure domain, but not the specific FQDN used for internal resolution of private IP addresses within the virtual network. * **C. vm1.westeurope.cloudapp.azure.com:** This format suggests a public IP address and cloudapp.azure.com domain which is not relevant in this internal network scenario. **Note:** The answer is based on the provided discussion and ping test results. There is no conflicting opinion presented in the provided text. The consensus is that the correct answer is D.
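This behavior is easy to reproduce from inside VM2, since the Azure-provided resolver (168.63.129.16) serves the reverse zone for the virtual network:

```
# Run from VM2: reverse lookup of VM1's private IP address
nslookup 10.0.0.4
# Expected answer (per the question): vm1.internal.cloudapp.net
```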
351
[View Question](https://www.examtopics.com/discussions/databricks/view/51378-exam-az-104-topic-5-question-52-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0043400002.png) You deploy a load balancer that has the following configurations: ✑ Name: LB1 ✑ Type: Internal ✑ SKU: Standard ✑ Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You create two Standard SKU public IP addresses and associate a Standard SKU public IP address to the network interface of each virtual machine. Does this meet the goal? A. Yes B. No
A. Yes The solution meets the goal, although the public IP addresses are not strictly required. An internal load balancer does not use public IPs to reach its backend; the VMs only need to be in the same virtual network (VNET1) as the load balancer and to have either a Standard SKU public IP address or no public IP address at all, because a Basic SKU public IP cannot be combined with a Standard SKU load balancer. Associating Standard SKU public IPs therefore keeps VM1 and VM2 compatible with LB1, so both can be added to the backend pool; they communicate with the internal load balancer over their private IP addresses within VNET1. Option B is incorrect because the VMs can be added to the backend pool of the internal load balancer, with or without public IPs, as long as they are in the same VNET and any attached public IPs are Standard SKU. Note: The discussion highlights some disagreement on whether public IPs are strictly required. The consensus leans toward them not being required when the VMs are in the same VNET as the internal load balancer; the proposed step is unnecessary but not incorrect.
352
[View Question](https://www.examtopics.com/discussions/databricks/view/51379-exam-az-104-topic-4-question-55-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2019. The VM was deployed using default drive settings. You sign in to VM1 as a user named User1 and perform the following actions: ✑ Create files on drive C. ✑ Create files on drive D. ✑ Modify the screen saver timeout. ✑ Change the desktop background. You plan to redeploy VM1. Which changes will be lost after you redeploy VM1? A. the modified screen saver timeout B. the new desktop background C. the new files on drive D D. the new files on drive C
C. The new files on drive D will be lost after redeploying VM1. Explanation: In Windows Server, the D: drive typically represents a temporary disk. Data on temporary disks is not persistent and is cleared when the VM is redeployed. Files created on the C: drive (the primary operating system drive) will persist. Similarly, changes to the system settings like screen saver timeout and desktop background are usually stored in persistent configuration files. Why other options are incorrect: * **A and B are incorrect:** Changes to the screen saver timeout and desktop background are configuration settings typically stored persistently and survive a redeployment. * **D is incorrect:** Drive C: is the primary operating system drive, and files stored there persist across redeployments. Note: The discussion only provides confirmation of the correct answer. There is no disagreement expressed on the underlying reasoning.
353
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51385-exam-az-104-topic-5-question-49-discussion/) You have an Azure subscription that contains the resource groups shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0042900001.png) *(Image shows a table with Resource Group names: RG1, RG2)* RG1 contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0042900002.png) *(Image shows a table with resources in RG1: IP1, VNET1, Storage1, VM1)* You need to identify which resources you can move from RG1 to RG2, and which resources you can move from RG2 to RG1. Which resources should you identify? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0043000001.jpg) *(This image shows a multiple choice style question with boxes to select resources to move from RG1 to RG2 and RG2 to RG1)* **
** The correct answer is to move IP1, VNET1, and Storage1 from RG1 to RG2, and IP2, VNET2, and Storage2 from RG2 to RG1 (assuming these are the resources listed in the missing image showing resources within RG2). The explanation provided in the discussion states that locks prevent *updates or removal*, but moving a resource is neither of these actions. Therefore, resources can be moved despite the presence of locks (unless it is explicitly stated that moves are prevented by a specific lock type). **Why other options are incorrect:** The provided discussion does not offer alternative options. However, any option that suggests you *cannot* move resources based solely on the existence of locks (without further lock details) would be incorrect based on the discussion's explanation. There's a noted disagreement in the discussion regarding the effect of read-only locks on resource group movement. While one post claims it *prevents* movement, another clarifies it *does not*. The given answer assumes the latter interpretation, reflecting the most upvoted response.
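For reference, a cross-resource-group move can be scripted with `az resource move`. A minimal sketch for moving VNET1 from RG1 to RG2; dependent resources generally have to be included in the same operation:

```
# Collect the resource ID to move (shown for VNET1; repeat for IP1, Storage1, ...)
vnetId=$(az network vnet show --resource-group RG1 --name VNET1 --query id --output tsv)

# Move the resource(s) to RG2 in a single operation
az resource move --destination-group RG2 --ids "$vnetId"
```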
354
[View Question](https://www.examtopics.com/discussions/databricks/view/51398-exam-az-104-topic-5-question-62-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0044800001.png) You deploy a load balancer that has the following configurations: ✑ Name: LB1 ✑ Type: Internal ✑ SKU: Standard ✑ Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You disassociate the public IP address from the network interface of VM2. Does this meet the goal? A. Yes B. No
A. Yes Disassociating the public IP address from VM2 allows it to be added to the backend pool of the internal load balancer LB1. The key points are that the load balancer is internal (so the VMs need no public IPs), it is a Standard SKU (allowing individual VMs in the same virtual network to be added), and both VMs are in VNET1 with the load balancer. Removing the public IP from VM2 resolves the SKU conflict: backend VMs of a Standard SKU load balancer must have either a Standard SKU public IP address or no public IP at all, so disassociating VM2's public IP makes VM2 eligible for the backend pool. Why other options are incorrect: B. No - This is incorrect because disassociating the public IP from VM2 removes the obstacle to adding it; the VMs are in the same VNET as the load balancer, and the load balancer is a Standard SKU. Note: The discussion indicates some ambiguity about whether removing the public IP address is strictly necessary rather than just helpful. The core requirement for adding the VMs is that they reside in the same VNET as the internal load balancer and have no incompatible (Basic SKU) public IPs attached.
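A sketch of the two steps with the Azure CLI, assuming VM2's NIC is named `vm2nic` with an IP configuration named `ipconfig1` and LB1's backend pool is named `backendpool1` (all hypothetical names):

```
# Step 1: disassociate the public IP address from VM2's NIC
az network nic ip-config update --resource-group RG1 --nic-name vm2nic \
  --name ipconfig1 --remove publicIpAddress

# Step 2: add the NIC's IP configuration to LB1's backend pool
az network nic ip-config address-pool add --resource-group RG1 --nic-name vm2nic \
  --ip-config-name ipconfig1 --lb-name LB1 --address-pool backendpool1
```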
355
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51434-exam-az-104-topic-4-question-53-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster that has the network profile shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. **Image 1 (Network Profile):** (No text visible in the image itself, the question relies on the implied information within the image and its relation to the provided answer choices). **Image 2 (Hot Area):** (No text visible in this image, it is a graphic representing the answer boxes). **Image 3 (Suggested Answer):** Box 1: 10.244.0.0/16 Box 2: 10.0.0.0/16 **
** Box 1: **10.244.0.0/16** This represents the Pod CIDR. Pods are the fundamental units of Kubernetes deployment, and each container within a Pod needs an IP address. The Pod CIDR range provides these IP addresses. Box 2: **10.0.0.0/16** This represents the Service CIDR. Services in Kubernetes abstract and expose application functionality. They need IP addresses to be reachable internally within the cluster. The Service CIDR provides these addresses. **Explanation:** The provided suggested answer and the highly upvoted responses in the discussion support these selections. The `--pod-cidr` and `--service-cidr` values are key configuration parameters when setting up an AKS cluster and must be correctly assigned. The explanation notes that the Pod CIDR range should be large enough to accommodate the expected number of nodes. **Why other options are incorrect:** The discussion includes a comment suggesting the question is outdated and that the provided Container IP is incorrect (172.17.0.1/16). However, without seeing the originally presented options in the exam question, it's impossible to definitively address this. The provided suggested answer, however, aligns with the documentation and the highly upvoted comment within the discussion. The disagreement regarding outdated information should be noted. **Note:** The discussion indicates that this question may be outdated or no longer relevant to the AZ-104 exam. This should be considered when reviewing this question and answer.
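For context, both ranges are fixed at cluster creation time. A minimal sketch using kubenet networking and the CIDR values from the suggested answer (the cluster and resource group names are hypothetical):

```
# Create an AKS cluster with explicit pod and service CIDR ranges
az aks create --resource-group RG1 --name aks1 --network-plugin kubenet \
  --pod-cidr 10.244.0.0/16 --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 --generate-ssh-keys
```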
356
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51446-exam-az-104-topic-5-question-44-discussion/) You have an Azure virtual machine named VM1 that connects to a virtual network named VNet1. VM1 has the following configurations: ✑ Subnet: 10.0.0.0/24 ✑ Availability set: AVSet ✑ Network security group (NSG): None ✑ Private IP address: 10.0.0.4 (dynamic) ✑ Public IP address: 40.90.219.6 (dynamic) You deploy a standard, Internet-facing load balancer named slb1. You need to configure slb1 to allow connectivity to VM1. Which changes should you apply to VM1 as you configure slb1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** The correct changes to apply to VM1 are: 1. **Change the private IP address of VM1 to static:** While not explicitly stated as a requirement in all discussions, a static IP address is generally recommended for load balancer integration to ensure consistent connectivity. Dynamic IPs can change, breaking the load balancer's configuration. 2. **Create and configure an NSG:** An NSG is crucial. Without one, inbound traffic to VM1 might be blocked, even if the load balancer is correctly configured. The NSG needs rules to allow the necessary traffic from the load balancer to reach VM1. **Why other options are incorrect (or not necessary):** * **Remove the public IP address from VM1:** This is incorrect. While a public IP address is *not required* for an internet-facing load balancer to direct traffic to a VM (the load balancer handles the public IP), removing the public IP address is unnecessary and might disrupt other services relying on direct public access to the VM. **Note on Disagreement:** There's a slight disagreement in the discussion regarding the IP address SKU (Basic vs. Standard) and its relation to the load balancer SKU. While matching SKUs are generally recommended, the core answer focuses on the necessity of a static private IP and an NSG for successful load balancer integration. The SKU matching is a best practice and might be context-dependent, not a strict requirement for passing the question.
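A sketch of both changes with the Azure CLI, assuming VM1's NIC is named `vm1nic` with an IP configuration named `ipconfig1` (hypothetical names); assigning an explicit private IP switches the allocation method to static:

```
# Make VM1's private IP static by assigning it explicitly
az network nic ip-config update --resource-group RG1 --nic-name vm1nic \
  --name ipconfig1 --private-ip-address 10.0.0.4

# Create an NSG with a rule that admits the load-balanced traffic
az network nsg create --resource-group RG1 --name vm1nsg
az network nsg rule create --resource-group RG1 --nsg-name vm1nsg \
  --name AllowApp443 --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443

# Associate the NSG with VM1's network interface
az network nic update --resource-group RG1 --name vm1nic \
  --network-security-group vm1nsg
```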
357
[View Question](https://www.examtopics.com/discussions/databricks/view/51452-exam-az-104-topic-5-question-45-discussion/) You have an Azure subscription that contains the resources shown in the following table. | Resource Group | Resource Name | Location | Resource Type | |---|---|---|---| | RG1 | VNET1 | East US | Virtual Network | You need to create a network interface named NIC1. In which location can you create NIC1? A. East US and North Europe only B. East US only C. East US, West Europe, and North Europe D. East US and West Europe only
B. East US only The correct answer is B because a network interface (NIC) must be created in the same location as its associated virtual network. The table shows that VNET1, the only virtual network available, is located in "East US". Therefore, NIC1 can only be created in East US. Attempting to create a NIC in a location without a corresponding VNet will result in an error, as noted in the discussion. Why other options are incorrect: * **A. East US and North Europe only:** Incorrect because there is no virtual network in North Europe. * **C. East US, West Europe, and North Europe:** Incorrect because there are no virtual networks in West Europe or North Europe. * **D. East US and West Europe only:** Incorrect because there is no virtual network in West Europe. Note: The discussion highlights a potential ambiguity. While the question implies NIC1 should connect to VNET1, this is not explicitly stated. However, the consensus is that without additional context, the only possible location is East US, where the existing VNET is located.
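Creating the NIC shows the constraint directly: the command needs an existing VNet and subnet in the target location. A minimal sketch, assuming VNET1 has a subnet named `default`:

```
# Succeeds only because VNET1 (East US) supplies a subnet in that region
az network nic create --resource-group RG1 --name NIC1 --location eastus \
  --vnet-name VNET1 --subnet default
```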
358
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51459-exam-az-104-topic-5-question-43-discussion/) You have an Azure subscription that contains the resources in the following table. [Image of Table - Unable to display image content here. Please refer to original URL] You install the Web Server server role (IIS) on VM1 and VM2, and then add VM1 and VM2 to LB1. LB1 is configured as shown in the LB1 exhibit. [Image of LB1 Exhibit - Unable to display image content here. Please refer to original URL] Rule1 is configured as shown in the Rule1 exhibit. [Image of Rule1 Exhibit - Unable to display image content here. Please refer to original URL] For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. [Image of Hot Area - Unable to display image content here. Please refer to original URL] **
** Box 1: Yes Box 2: Yes Box 3: No **Explanation:** * **Box 1: Yes** - A Basic Load Balancer in Azure *does* support virtual machines within a single availability set or virtual machine scale set. This is explicitly stated in the provided text and the suggested answer. * **Box 2: Yes** - Health probes are a fundamental component of Azure Load Balancers. They are essential for monitoring the health of backend instances (VM1 and VM2 in this case). If a health probe fails, the load balancer correctly stops directing traffic to that unhealthy instance. The explanation in the provided text accurately describes the health probe functionality. * **Box 3: No** - The statement in Box 3 is incorrect. Given that VM1 and VM2 are added to LB1, and LB1 is configured with a load balancing rule, load balancing *will* occur between the two VMs. The disagreement in the discussion thread likely stems from a misunderstanding of Basic Load Balancer functionality or a misinterpretation of the provided diagram. The suggested answer and accompanying explanation correctly state that load balancing will occur. **Why other options are incorrect:** The provided text and suggested answer give clear reasoning for the "Yes" and "No" selections. There are no other options presented beyond the three boxes requiring "Yes" or "No" responses. The discussion confirms the validity of the suggested answer.
359
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51465-exam-az-104-topic-5-question-48-discussion/) You have an Azure subscription that contains Azure virtual machines (VMs) with network interface settings as shown below. From the VNET1 settings, you configure DNS servers as shown. The VMs can successfully connect to DNS servers 192.168.10.15 and 193.77.134.10. For each statement, select Yes if true, otherwise select No. **VM Information Table (Image Text):** (This would show a table detailing VM names and subnet information, but the image is not available here to reproduce.) **Network Interface Settings Table (Image Text):** (This would show a table detailing VM names, NIC names, and primary/secondary DNS server IP addresses. Again, the image is unavailable.) **VNET1 DNS Server Settings (Image Text):** (This would show an image of the DNS server settings for VNET1, listing the IP addresses 192.168.10.15 and 193.77.134.10. The image is unavailable.) **Statements:** 1. You can specify DNS server IP addresses in the VNet settings. The setting is applied as the default DNS server(s) for all VMs in the VNet. 2. The DNS server settings configured on the network interface of a VM cannot be overridden by DNS settings on the VNET. 3. You can set DNS servers per VM or cloud service to override the default network settings. **
** 1. **Yes:** You can configure DNS servers at the VNet level. These settings act as defaults for all VMs within that VNet. 2. **No:** The DNS settings on a VM's network interface take precedence over the VNet's default DNS settings. If a VM has specific DNS servers defined on its NIC, it will use those instead of the VNet's DNS servers. 3. **Yes:** It's possible to override the VNet's default DNS settings by configuring specific DNS servers on individual VMs or cloud services (through their network interfaces). **Explanation:** The question tests understanding of Azure DNS configuration and precedence. The provided discussion highlights that NIC-level DNS settings override VNet-level settings. Therefore, statement 2 is false because the NIC settings can override the VNET settings. Statements 1 and 3 accurately reflect the flexibility of Azure's DNS configuration options. **Why other options are incorrect:** The correctness of each answer directly stems from the precedence rules of Azure DNS configuration. Network interface settings override VNet settings. The explanation clarifies why each statement is true or false based on this precedence rule. There is agreement in the provided discussion on these points.
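The per-NIC override from statement 3 can be applied as sketched below (the NIC and resource group names are hypothetical); the NIC-level list then takes precedence over the VNet defaults:

```
# Override the VNet default DNS servers for a single NIC
az network nic update --resource-group RG1 --name vm1nic \
  --dns-servers 192.168.10.15 193.77.134.10
```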
360
[View Question](https://www.examtopics.com/discussions/databricks/view/51501-exam-az-104-topic-5-question-47-discussion/) HOTSPOT - You plan to use Azure Network Watcher to perform the following tasks: ✑ Task1: Identify a security rule that prevents a network packet from reaching an Azure virtual machine. ✑ Task2: Validate outbound connectivity from an Azure virtual machine to an external host. Which feature should you use for each task? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0042400001.jpg) *(Image shows a table with two boxes labeled "Box 1" and "Box 2" needing selections from a dropdown menu. The dropdown options are not visible.)* Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0042500001.jpg) *(Image shows the suggested answer: Box 1: IP flow verify, Box 2: Connection troubleshoot)*
Task 1: **IP flow verify** should be used to identify a security rule preventing a network packet from reaching an Azure VM. This feature allows you to specify source and destination IP addresses, ports, protocol, and direction (inbound/outbound), testing the communication and indicating whether it succeeds or fails due to a security rule. Task 2: **Connection troubleshoot** should be used to validate outbound connectivity from an Azure VM to an external host. This feature tests connections to various targets (VMs, FQDNs, URIs, IPv4 addresses) at a specific point in time, providing information about the connection's success or failure. The provided discussion confirms this solution as correct and explains the functionality of each Network Watcher feature. Why other options are incorrect: The question does not provide alternative options, so this cannot be answered definitively. The correct answer is determined by the functionality of Network Watcher's IP flow verify and Connection troubleshoot features as described in the provided text and confirmed within the discussion.
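Both checks can also be run from the Azure CLI; a minimal sketch in which the remote endpoint and destination host are hypothetical:

```
# Task 1: IP flow verify - would an inbound packet to VM1 be allowed?
az network watcher test-ip-flow --resource-group RG1 --vm VM1 \
  --direction Inbound --protocol TCP \
  --local 10.0.0.4:443 --remote 131.107.100.50:60000

# Task 2: Connection troubleshoot - validate outbound connectivity from VM1
az network watcher test-connectivity --resource-group RG1 \
  --source-resource VM1 --dest-address www.contoso.com --dest-port 443
```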
361
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51523-exam-az-104-topic-5-question-54-discussion/) You have an Azure virtual machine named VM1. The network interface for VM1 is configured as shown in the exhibit. (The exhibit shows a network security group rules table with rules numbered 1-6, with various protocols, ports, priorities and actions (Allow/Deny)). You deploy a web server on VM1, and then create a secure website that is accessible by using the HTTPS protocol. VM1 is used as a web server only. You need to ensure that users can connect to the website from the Internet. What should you do? A. Modify the protocol of Rule4 B. Delete Rule1 C. For Rule5, change the Action to Allow and change the priority to 401 D. Create a new inbound rule that allows TCP protocol 443 and configure the rule to have a priority of 501. **
** C. For Rule5, change the Action to Allow and change the priority to 401. HTTPS uses port 443. Currently, Rule2 (priority 500) denies traffic on this port. Changing Rule5's action to Allow and its priority to 401 ensures that HTTPS traffic is allowed, because Rule5 is then processed before Rule2 (a lower priority number means higher precedence). **Why other options are incorrect:** * **A. Modify the protocol of Rule4:** This is irrelevant to HTTPS traffic on port 443. * **B. Delete Rule1:** Deleting Rule1 might have unintended consequences, as it could affect other services; the exhibit doesn't provide enough information to confirm it is a safe action. * **D. Create a new inbound rule that allows TCP protocol 443 and configure the rule to have a priority of 501:** A new allow rule with priority 501 would be processed *after* the deny rule at priority 500, so HTTPS traffic would still be blocked. **Note:** The discussion indicates that there might be other valid solutions, such as changing the priority of Rule3. The provided answer reflects one of the accepted solutions.
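The chosen fix maps to a single CLI call; a minimal sketch, with the NSG name (`vm1nsg`) as a placeholder:

```
# Flip Rule5 to Allow and raise its precedence above the priority-500 deny rule
az network nsg rule update --resource-group RG1 --nsg-name vm1nsg \
  --name Rule5 --access Allow --priority 401
```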
362
[View Question](https://www.examtopics.com/discussions/databricks/view/51528-exam-az-104-topic-5-question-53-discussion/) You have a computer named Computer1 that has a point-to-site VPN connection to an Azure virtual network named VNet1. The point-to-site connection uses a self-signed certificate. From Azure, you download and install the VPN client configuration package on a computer named Computer2. You need to ensure that you can establish a point-to-site VPN connection to VNet1 from Computer2. Solution: You export the client certificate from Computer1 and install the certificate on Computer2. Does this meet the goal? A. Yes B. No
A. Yes This solution meets the goal because a point-to-site VPN connection using a self-signed certificate requires the client machine (Computer2) to have the corresponding certificate installed. By exporting the certificate from Computer1 (which is already successfully connected) and installing it on Computer2, Computer2 will have the necessary credentials to authenticate with the VPN gateway and establish the connection. There is no disagreement reflected in the provided discussion; the suggested answer is unanimously agreed upon as correct.
363
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51529-exam-az-104-topic-6-question-17-discussion/) HOTSPOT - You purchase a new Azure subscription named Subscription1. You create a virtual machine named VM1 in Subscription1. VM1 is not protected by Azure Backup. You need to protect VM1 by using Azure Backup. Backups must be created at 01:00 and stored for 30 days. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **
** To protect VM1 using Azure Backup with backups created at 01:00 and stored for 30 days, you need to: 1. **Create a Recovery Services vault:** This vault will store the backups of your Azure VMs. 2. **Create or utilize a backup policy:** You can either use the default policy (which often backs up daily and retains backups for 30 days), select an existing custom policy, or create a new policy specifying the desired backup schedule and retention period (including the 01:00 backup time and 30-day retention). If the default policy already meets the requirements, selecting that option is sufficient. **Why other options are incorrect (N/A):** The question is a hotspot question and doesn't present other options to select from aside from the correct ones. There are no incorrect options explicitly presented in the image or text. The essence of the correct answer is using both a Recovery Services vault and a backup policy; neither alone is sufficient. **Note:** The provided discussion shows agreement on the solution.
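A sketch of the two selections with the Azure CLI. Enabling protection with an existing policy is shown; tailoring the 01:00 schedule and 30-day retention would be done by editing the policy JSON with `az backup policy set`, which is omitted here:

```
# Create the Recovery Services vault that will store the backups
az backup vault create --resource-group RG1 --name Vault1 --location eastus

# Enable backup for VM1 using a policy (DefaultPolicy shown as an example)
az backup protection enable-for-vm --resource-group RG1 --vault-name Vault1 \
  --vm VM1 --policy-name DefaultPolicy
```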
364
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51535-exam-az-104-topic-5-question-68-discussion/) You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443. Solution: You create an inbound security rule that allows any traffic from the AzureLoadBalancer source and has a priority of 150. Does this meet the goal? A. Yes B. No **
** B. No The suggested solution does not address the root cause of the problem. The discussion highlights that Network Security Group (NSG) rule with priority 200 is blocking inbound traffic, including the Azure Load Balancer health probe. This causes VM2 to appear unhealthy to the load balancer, preventing the load balancer from routing traffic to it, regardless of whether port 443 is open from the client IP. Creating a rule with priority 150 to allow AzureLoadBalancer traffic *might* work, but only if the health probe uses a different port than 443. The core problem is that the health probe is blocked by the higher-priority rule, not the connection from 131.107.100.50. Therefore, the proposed solution only partially addresses the issue and fails to correct the underlying problem of the blocked health probe. A more effective solution would be to modify or remove the blocking NSG rule (priority 200). There is some disagreement within the discussion about the exact port used for the health probe (port 80 or 443). However, the consensus is that the higher priority rule is blocking the health probe, regardless of the port.
366
[View Question](https://www.examtopics.com/discussions/databricks/view/51672-exam-az-104-topic-6-question-16-discussion/) DRAG DROP - You have an Azure Linux virtual machine that is protected by Azure Backup. One week ago, two files were deleted from the virtual machine. You need to restore the deleted files to an on-premises Windows Server 2016 computer as quickly as possible. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0055700001.png)
The correct sequence of actions is: 1. **From the Azure portal, click File Recovery from the vault:** This initiates the file recovery process within the Azure Backup service. 2. **Select a restore point that contains the deleted files:** This chooses the specific point in time from which to retrieve the files. 3. **Download and run the script to mount a drive on the local computer:** Since the VM is Linux, Azure Backup provides a script (likely Python). This script mounts the recovery point as a drive accessible from the Windows Server. 4. **Copy the files using File Explorer:** Once the drive is mounted, File Explorer on the Windows Server can be used to copy the needed files from the mounted drive to their desired location. **Why other options are incorrect (or why this is the preferred approach):** The discussion highlights a disagreement on step 4. Some users suggest using AzCopy to copy files to Blob storage as an intermediate step before transferring to the Windows Server. However, given the requirement to restore "as quickly as possible," directly copying via File Explorer after mounting is faster than the extra steps involved with AzCopy and Blob storage. The provided solution directly addresses the prompt's constraint. Using AzCopy is a valid alternative, but not the fastest method.
367
**** [View Question](https://www.examtopics.com/discussions/databricks/view/51684-exam-az-104-topic-5-question-57-discussion/) You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0044300001.jpg) You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443. Solution: You create an inbound security rule that denies all traffic from the 131.107.100.50 source and has a cost of 64999. Does this meet the goal? A. Yes B. No **
** B. No Creating an inbound security rule that *denies* traffic from 131.107.100.50 on TCP port 443 will *prevent* connections, not enable them. The goal is to *ensure* successful connections, so this solution directly contradicts the objective. The provided solution actively blocks the connections it's supposed to enable. **Why other options are incorrect:** A. Yes is incorrect because the proposed solution blocks the specified IP address and port from connecting. This directly opposes the goal of enabling connections from 131.107.100.50 over TCP port 443. **Note:** The discussion highlights some ambiguity and conflicting information. Some users noted that the "Attach network interface" button in the image was enabled, implying the VM might have been stopped, thus causing connection issues regardless of the security rules. Other users reported seeing a different state in their exam. This suggests the exam question or its accompanying image might not be entirely consistent. The answer provided assumes the VM is running and the network interface is properly attached.
368
[View Question](https://www.examtopics.com/discussions/databricks/view/51691-exam-az-104-topic-5-question-66-discussion/) HOTSPOT - You have an on-premises data center and an Azure subscription. The data center contains two VPN devices. The subscription contains an Azure virtual network named VNet1. VNet1 contains a gateway subnet. You need to create a site-to-site VPN. The solution must ensure that if a single instance of an Azure VPN gateway fails, or a single on-premises VPN device fails, the failure will not cause an interruption that is longer than two minutes. What is the minimum number of public IP addresses, virtual network gateways, and local network gateways required in Azure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0045300001.jpg)
* **Public IP addresses:** 2 * **Virtual network gateways:** 1 * **Local network gateways:** 2 **Explanation:** To keep any single failure from interrupting the VPN for more than two minutes, the Azure VPN gateway should run in active-active mode and each on-premises VPN device needs its own representation in Azure. * **Public IP Addresses:** An active-active Azure VPN gateway runs two instances, and each instance requires its own public IP address, so two are needed. * **Virtual Network Gateways:** A single virtual network gateway resource is sufficient. Configured as active-active, its two instances terminate tunnels simultaneously, so the loss of one instance does not cause a prolonged outage. (Even the default active-standby mode is internally redundant, but active-active best satisfies the two-minute requirement.) * **Local Network Gateways:** Each on-premises VPN device is represented in Azure by its own local network gateway. Because the data center has two VPN devices, two local network gateways are required so that a single on-premises device failure does not break connectivity. **Why other options are incorrect:** The discussion shows some disagreement on the number of local network gateways. However, a single local network gateway would represent only one on-premises device; if that device failed, the requirement of less than two minutes of downtime could not be met.
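As a sketch of the Azure-side resources (all names and addresses are hypothetical; in current CLI versions, supplying two public IPs to the gateway enables active-active mode):

```
# Two public IPs, one per active-active gateway instance
az network public-ip create --resource-group RG1 --name gwip1 --sku Standard --allocation-method Static
az network public-ip create --resource-group RG1 --name gwip2 --sku Standard --allocation-method Static

# One virtual network gateway using both public IPs (active-active)
az network vnet-gateway create --resource-group RG1 --name vpngw1 --vnet VNet1 \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
  --public-ip-address gwip1 gwip2

# Two local network gateways, one per on-premises VPN device
az network local-gateway create --resource-group RG1 --name onprem-gw1 \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/16
az network local-gateway create --resource-group RG1 --name onprem-gw2 \
  --gateway-ip-address 203.0.113.11 --local-address-prefixes 192.168.0.0/16
```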
369
[View Question](https://www.examtopics.com/discussions/databricks/view/51766-exam-az-104-topic-5-question-51-discussion/) You have an Azure subscription that contains the virtual machines shown in the following table.

| Name | Location | Virtual Network | Public IP Address SKU |
|---|---|---|---|
| VM1 | West US | VNET1 | Standard |
| VM2 | West US | VNET1 | Basic |

You deploy a load balancer that has the following configurations: ✑ Name: LB1 ✑ Type: Internal ✑ SKU: Standard ✑ Virtual network: VNET1 You need to ensure that you can add VM1 and VM2 to the backend pool of LB1. Solution: You create a Standard SKU public IP address, associate the address to the network interface of VM1, and then stop VM2. Does this meet the goal? A. Yes B. No
B. No. This solution does not meet the goal. While the solution correctly points out that the Load Balancer (LB1) and the Public IP addresses must match in SKU (both Standard), it fails to address the crucial point that VM2's Basic SKU public IP address is incompatible with the Standard SKU Load Balancer. Even if VM2 were running, its Basic SKU IP prevents it from being added to the backend pool of LB1. The fact that VM2 is stopped is irrelevant to its compatibility with the load balancer. The discussion highlights that VMs must have a Standard SKU public IP or no public IP to be added to a Standard SKU load balancer. **Why other options are incorrect:** A. Yes - This is incorrect because VM2 has a Basic SKU public IP address, which is incompatible with the Standard SKU load balancer (LB1). The load balancer SKU and the public IP address SKU must match. The solution only addresses one VM and ignores the incompatibility of the other. **Note:** There is disagreement in the discussion regarding the solution. While the provided answer reflects the majority opinion, some comments suggest a different approach. The key point of contention is whether matching the Public IP address SKU to the Load Balancer SKU is sufficient to add VMs that aren't in the same availability set, or whether other considerations, like the VM's operating state, are factors. The answer above focuses on the most commonly cited correct solution from the discussion.
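As a hedged sketch of the remediation the discussion implies (all resource names are assumptions): VM2 needs a Standard SKU public IP, or no public IP at all, before it can join the pool.

```powershell
# Create a Standard SKU public IP and swap it onto VM2's NIC (assumed names;
# the existing Basic SKU IP association is replaced by this assignment)
$pip = New-AzPublicIpAddress -ResourceGroupName "RG1" -Name "VM2-pip-standard" `
    -Location "westus" -Sku "Standard" -AllocationMethod "Static"
$nic = Get-AzNetworkInterface -ResourceGroupName "RG1" -Name "VM2-nic"
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzNetworkInterface -NetworkInterface $nic
```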
370
[View Question](https://www.examtopics.com/discussions/databricks/view/51774-exam-az-104-topic-6-question-19-discussion/) You have an Azure subscription that contains 100 virtual machines. You regularly create and delete virtual machines. You need to identify unattached disks that can be deleted. What should you do? A. From Azure Cost Management, view Cost Analysis B. From Azure Advisor, modify the Advisor configuration C. From Microsoft Azure Storage Explorer, view the Account Management properties D. From Azure Cost Management, view Advisor Recommendations
D. From Azure Cost Management, view Advisor Recommendations. Azure Advisor provides recommendations for optimizing your Azure resources, including identifying unattached disks that can be deleted to reduce costs and improve efficiency. Cost Management itself shows spending, but not necessarily actionable insights into resource optimization like identifying unused disks. Storage Explorer is for managing storage accounts directly, not for identifying orphaned disks across your subscription. Modifying Advisor configuration won't directly show you unattached disks; it changes *how* Advisor works, not what it reports. Why other options are incorrect: * **A. From Azure Cost Management, view Cost Analysis:** Cost Analysis shows spending trends, not specific resources that can be deleted. * **B. From Azure Advisor, modify the Advisor configuration:** This changes Advisor's settings and recommendations, but it doesn't directly identify unattached disks. * **C. From Microsoft Azure Storage Explorer, view the Account Management properties:** Storage Explorer manages storage accounts. It doesn't inherently show which disks are unattached across your entire Azure subscription. Note: The provided discussion only indicates that option D is the correct answer without detailed justification for the other options.
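Separately from Advisor, unattached managed disks can also be listed directly, since a disk that is not attached to any VM has an empty `ManagedBy` property. This is a complementary check rather than what the question asks for; a sketch:

```powershell
# List managed disks that are not attached to any virtual machine
Get-AzDisk |
    Where-Object { -not $_.ManagedBy } |
    Select-Object Name, ResourceGroupName, DiskSizeGB
```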
371
[View Question](https://www.examtopics.com/discussions/databricks/view/51838-exam-az-104-topic-7-question-1-discussion/) HOTSPOT - You need to configure the Device settings to meet the technical requirements and the user requirements. Which two settings should you modify? To answer, select the appropriate settings in the answer area. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0013000001.png) *(Image not included here, but text from image is included below)* *Image text (Hot Area):* User may join devices to Azure AD; Ensure that when users join devices to Azure Active Directory (Azure AD), the users use a mobile phone to verify their identity.
The correct settings to modify are: * **Box 1: Selected - Only selected users should be able to join devices.** This satisfies the user requirement "Ensure that only users who are part of a group named Pilot can join devices to Azure AD." Selecting "Selected" restricts device registration to only approved users. * **Box 2: Yes - Ensure that when users join devices to Azure Active Directory (Azure AD), the users use a mobile phone to verify their identity.** This directly addresses the technical requirement mentioned in the hotspot question. **Why other options are incorrect:** This question only presents two options in the hotspot; therefore, there are no other options to consider incorrect. The provided answer addresses both requirements stated in the problem description. **Note:** The provided solution is based on the user insights in the discussion thread. There's no contradictory information present.
372
[View Question](https://www.examtopics.com/discussions/databricks/view/52028-exam-az-104-topic-5-question-59-discussion/) You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0044500001.jpg) You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443. Solution: You modify the priority of the Allow_131.107.100.50 inbound security rule. Does this meet the goal? A. Yes B. No
B. No. Modifying the priority of the `Allow_131.107.100.50` rule will not solve the problem because it already has the highest priority (100). The issue lies elsewhere, not with the rule's priority. The provided solution incorrectly assumes priority is the issue. Additional troubleshooting is necessary to identify the root cause of the connection failure. This might involve examining other network security rules, firewall settings on the VM, or the application itself. There is a suggestion in the discussion that Azure Load Balancer health probes and their responses might be the problem. **Why other options are incorrect:** A. Yes - This is incorrect. As explained above, the rule is already at the highest priority. Changing it would have no effect.
373
[View Question](https://www.examtopics.com/discussions/databricks/view/53891-exam-az-104-topic-3-question-35-discussion/) You have a general-purpose v1 Azure Storage account named storage1 that uses locally-redundant storage (LRS). You need to ensure that the data in the storage account is protected if a zone fails. The solution must minimize costs and administrative effort. What should you do first? A. Create a new storage account. B. Configure object replication rules. C. Upgrade the account to general-purpose v2. D. Modify the Replication setting of storage1.
C. Upgrade the account to general-purpose v2. Explanation: The question specifies the need for zone redundancy at minimal cost and effort. Locally Redundant Storage (LRS) only protects against hardware failure within a single data center. To protect against zone failure within a region, Zone Redundant Storage (ZRS) is required. However, ZRS is only supported by general-purpose v2 and premium storage accounts. Therefore, upgrading storage1 to a general-purpose v2 account is the most efficient and cost-effective solution to meet the requirement. Modifying the replication setting of the v1 account (D) won't provide zone redundancy because v1 accounts don't support ZRS. Creating a new storage account (A) is inefficient and increases administrative overhead. Configuring object replication rules (B) is also less efficient for protecting against zone failure, given the ZRS option. Why other options are incorrect: * **A. Create a new storage account:** This introduces unnecessary administrative overhead and cost. * **B. Configure object replication rules:** This is not the most efficient solution for zone redundancy. While replication can be used, it does not automatically provide zone redundancy like ZRS does. * **D. Modify the Replication setting of storage1:** General-purpose v1 storage accounts do not support ZRS; therefore, modifying the replication settings will not provide the necessary zone redundancy. Note: The discussion shows some disagreement on whether ZRS or GRS (Geo-Redundant Storage) is the better approach. The suggested answer and this explanation favor ZRS for its lower cost and direct response to zone failure. However, some discussion points favor GRS for broader geographical protection.
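A minimal sketch of this first step, assuming a resource group named RG1: the `-UpgradeToStorageV2` switch upgrades the account in place, after which ZRS becomes an available redundancy option (the ZRS conversion itself is a separate follow-up step).

```powershell
# Upgrade the general-purpose v1 account to general-purpose v2 in place
Set-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" -UpgradeToStorageV2
```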
374
[View Question](https://www.examtopics.com/discussions/databricks/view/53916-exam-az-104-topic-5-question-71-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. (Image 1 shows a table with VNET1, VNET2, and VNET3). The subscription contains the private DNS zones shown in the following table. (Image 2 shows a table with privateDNSZone1 and privateDNSZone2). You add virtual network links to the private DNS zones as shown in the following table. (Image 3 shows a table indicating VNET links to private DNS zones; VNET1 linked to privateDNSZone1, and VNET3 linked to privateDNSZone2). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image 4 shows three statements to evaluate as Yes or No)
The correct answers, based on the provided information and the discussion, are: 1. **Yes:** VNET2 does not have a registration zone. This is true according to the provided diagram. 2. **No:** While a VNET *can* have multiple resolution zones, the statement is too broad and not explicitly supported by the provided context. The provided image shows VNET1 linked to privateDNSZone1, which may not mean it has multiple resolution zones. Auto-registration cannot be modified after the link is created, as noted by TodRose in the discussion. 3. **Yes:** VNET2 does not have a registration zone. This is true according to the provided diagram. **Explanation:** The discussion highlights a key point of contention regarding statement 2. Az_21 initially states that a virtual network can have multiple resolution zones. However, TodRose correctly points out that auto-registration (implied in the context of adding a VNET to a private DNS zone) cannot be changed after the initial link is established. While technically a VNET *could* be linked to multiple resolution zones, the provided scenario and constraints around auto-registration make statement 2 incorrect. Therefore, the suggested answer is inaccurate for statement 2. Statements 1 and 3 are correct based on the diagrams showing VNET2's lack of association with a registration zone. **Why other options are incorrect (as suggested in the discussion):** The initial suggested answer incorrectly assesses statement 2 as "Yes." The discussion clarifies that this is wrong due to the limitations of auto-registration once a VNET is linked to a private DNS zone.
375
[View Question](https://www.examtopics.com/discussions/databricks/view/53918-exam-az-104-topic-5-question-72-discussion/) HOTSPOT - You have an Azure subscription. You plan to use an Azure Resource Manager template to deploy a virtual network named VNET1 that will use Azure Bastion. How should you complete the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0046400001.png) *(Image shows a code snippet with blanks to fill in regarding Azure Resource Manager template deployment of a virtual network with Azure Bastion)*
The correct answer requires selecting "AzureBastionSubnet" as the subnet name and specifying a subnet of /26 or larger, for example 10.1.1.0/26. **Explanation:** The discussion highlights a change in Azure Bastion requirements. While an older answer might have suggested a /27 subnet mask, current documentation and user experience strongly indicate that a /26 or larger subnet is now necessary for Azure Bastion. As of November 2, 2021, the minimum size of `AzureBastionSubnet` is /26. Using a smaller subnet might have worked in the past but is not recommended and might not function correctly with future updates. **Why other options are incorrect:** Any option specifying a subnet mask smaller than /26 (e.g., /27, as mentioned in one of the older responses) is incorrect based on current Azure Bastion best practices and documentation. The subnet name must be exactly "AzureBastionSubnet" to be properly recognized by Azure Bastion. **Note:** The provided question and suggested answers are outdated based on information provided in the discussion. The discussion shows a clear consensus on the updated requirements for Azure Bastion subnet configuration.
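A hedged template fragment illustrating those two selections; the address ranges and API version are assumptions, not values from the exam image:

```json
{
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2021-05-01",
  "name": "VNET1",
  "location": "[resourceGroup().location]",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.1.0.0/16" ] },
    "subnets": [
      {
        "name": "AzureBastionSubnet",
        "properties": { "addressPrefix": "10.1.1.0/26" }
      }
    ]
  }
}
```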
376
[View Question](https://www.examtopics.com/discussions/databricks/view/53919-exam-az-104-topic-5-question-73-discussion/) You manage a virtual network named VNet1 that is hosted in the West US Azure region. VNet1 hosts two virtual machines named VM1 and VM2 that run Windows Server. You need to inspect all the network traffic from VM1 to VM2 for a period of three hours. Solution: From Azure Network Watcher, you create a packet capture. Does this meet the goal? A. Yes B. No
A. Yes. Creating a packet capture in Azure Network Watcher will meet the goal. Network Watcher's packet capture feature allows you to capture network traffic for a specified duration (in this case, three hours) and from a specific source (VM1) to a specific destination (VM2). The captured packets can then be analyzed to inspect all network traffic between the two VMs. The discussion overwhelmingly supports this solution. **Why other options are incorrect:** Option B is incorrect because, as explained above, Azure Network Watcher's packet capture functionality directly addresses the requirement of inspecting all network traffic between the two VMs for the specified time period. There is no information provided about other potential solutions, but the discussion mentions incorrect options such as using Azure Monitor metrics or a connection monitor, both of which would not provide the detailed packet-level inspection required.
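A minimal sketch of such a capture under assumed names (the resource groups, storage account, and VM2's private IP are placeholders): three hours is 10,800 seconds, and the filter limits the capture to traffic to or from VM2.

```powershell
$nw  = Get-AzNetworkWatcher -ResourceGroupName "NetworkWatcherRG" -Name "NetworkWatcher_westus"
$vm1 = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"
$sa  = Get-AzStorageAccount -ResourceGroupName "RG1" -Name "captures001"

# Capture only traffic whose remote endpoint is VM2 (assumed private IP 10.0.0.5)
$filter = New-AzPacketCaptureFilterConfig -Protocol TCP -RemoteIPAddress "10.0.0.5"

New-AzNetworkWatcherPacketCapture -NetworkWatcher $nw -TargetVirtualMachineId $vm1.Id `
    -PacketCaptureName "VM1toVM2" -StorageAccountId $sa.Id `
    -TimeLimitInSeconds 10800 -Filter $filter
```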
377
[View Question](https://www.examtopics.com/discussions/databricks/view/53920-exam-az-104-topic-5-question-74-discussion/) You manage a virtual network named VNet1 that is hosted in the West US Azure region. VNet1 hosts two virtual machines named VM1 and VM2 that run Windows Server. You need to inspect all the network traffic from VM1 to VM2 for a period of three hours. Solution: From Azure Network Watcher, you create a connection monitor. Does this meet the goal? A. Yes B. No
B. No. A connection monitor in Azure Network Watcher does not meet the goal of inspecting *all* network traffic from VM1 to VM2. The discussion highlights that connection monitors only inspect traffic on specific ports, while the requirement is to inspect all traffic. Furthermore, while a packet capture *would* meet the goal, the provided solution uses a connection monitor. There is a clear consensus among the discussion participants that using a connection monitor is insufficient. Other options: Option A is incorrect because connection monitors are not designed to capture all network traffic between two VMs; they focus on specific ports and connections. The discussion clearly demonstrates that this solution is inadequate for the stated goal.
378
[View Question](https://www.examtopics.com/discussions/databricks/view/53940-exam-az-104-topic-3-question-36-discussion/) You have an Azure subscription that contains the storage accounts shown in the following table.

| Storage Account | Account Type | Account Kind |
|---|---|---|
| storage1 | Standard | BlobStorage |
| storage2 | Premium | BlobStorage |
| storage3 | Standard | BlobStorage |
| storage4 | Premium | FileStorage |

You plan to manage the data stored in the accounts by using lifecycle management rules. To which storage accounts can you apply lifecycle management rules? A. storage1 only B. storage1 and storage2 only C. storage3 and storage4 only D. storage1, storage2, and storage3 only E. storage1, storage2, storage3, and storage4
D. storage1, storage2, and storage3 only. Lifecycle management rules can be applied to general-purpose v2 (GPv2), Blob storage, and Premium block blob storage accounts. Based on the table provided, storage1, storage2, and storage3 meet these criteria. Storage4 is a Premium FileStorage account, and while the discussion shows some disagreement about whether FileStorage accounts support lifecycle management, the prevailing consensus from multiple users points to it not being supported. Therefore, option D is the most accurate based on the provided information and the discussion's general agreement. **Why other options are incorrect:** * **A. storage1 only:** This is incorrect because storage2 and storage3 are also eligible for lifecycle management. * **B. storage1 and storage2 only:** This is incorrect because storage3 is also eligible. * **C. storage3 and storage4 only:** This is incorrect because storage4 (FileStorage) is not confirmed as supporting lifecycle management by the majority of users in the discussion. * **E. storage1, storage2, storage3, and storage4:** This is incorrect because the support for lifecycle management on storage4 (FileStorage) is disputed and not confirmed. **Note:** The discussion highlights some confusion and conflicting information regarding the support of lifecycle management on FileStorage accounts. The answer provided reflects the majority opinion within the discussion, but complete clarity on this specific point is not definitively established in the provided text.
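For context, a lifecycle management rule is simply a policy attached to an eligible account. A hedged sketch with assumed names that deletes block blobs left unmodified for a year:

```powershell
# Build a rule: delete block blobs not modified for 365 days
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -DaysAfterModificationGreaterThan 365
$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob
$rule   = New-AzStorageAccountManagementPolicyRule -Name "delete-stale-blobs" -Action $action -Filter $filter

# Attach the policy to an eligible account (assumed resource group)
Set-AzStorageAccountManagementPolicy -ResourceGroupName "RG1" -StorageAccountName "storage1" -Rule $rule
```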
379
[View Question](https://www.examtopics.com/discussions/databricks/view/54730-exam-az-104-topic-5-question-75-discussion/) You manage a virtual network named VNet1 that is hosted in the West US Azure region. VNet1 hosts two virtual machines named VM1 and VM2 that run Windows Server. You need to inspect all the network traffic from VM1 to VM2 for a period of three hours. Solution: From Performance Monitor, you create a Data Collector Set (DCS). Does this meet the goal? A. Yes B. No
B. No Explanation: The provided solution, using Performance Monitor to create a Data Collector Set (DCS) on the VMs, is not the correct approach for inspecting network traffic between two VMs in an Azure environment. A DCS is designed for collecting performance data from a *single* machine, not for capturing network traffic between multiple machines. To inspect network traffic between VM1 and VM2 in Azure, a packet capture using Azure Network Watcher is the appropriate tool. This allows for capturing and analyzing network packets traveling between the two VMs within the specified timeframe. The discussion overwhelmingly supports this conclusion. Why other options are incorrect: Option A (Yes) is incorrect because a Data Collector Set is not suitable for capturing network traffic between VMs. The suggested answer, and the community discussion, confirms this.
380
[View Question](https://www.examtopics.com/discussions/databricks/view/56296-exam-az-104-topic-1-question-34-discussion/) Your company's Azure subscription includes Azure virtual machines (VMs) that run Windows Server 2016. One of the VMs is backed up every day using Azure Backup Instant Restore. When the VM becomes infected with data encrypting ransomware, you decide to recover the VM's files. Which of the following is TRUE in this scenario? A. You can only recover the files to the infected VM. B. You can recover the files to any VM within the company's subscription. C. You can only recover the files to a new VM. D. You will not be able to recover the files.
B. You can recover the files to any VM within the company's subscription. Azure Backup Instant Restore allows for recovery to a different VM within the same subscription. The question states that all VMs in the subscription run Windows Server 2016, so the recovered files can be placed on any of these VMs. The discussion highlights some disagreement, with some users suggesting that recovery is limited to a new VM (option C) or only the infected VM (option A). However, based on the understanding of Azure Backup Instant Restore capabilities, option B is the most accurate. **WHY OTHER OPTIONS ARE INCORRECT:** * **A. You can only recover the files to the infected VM.** Incorrect. Azure Backup Instant Restore offers flexibility in recovery location. Restoring to the infected VM might be risky, especially given the ransomware infection. * **C. You can only recover the files to a new VM.** Incorrect. While creating a new VM is an option, it's not the only option. Instant Restore allows more flexibility in recovery destination within the subscription. * **D. You will not be able to recover the files.** Incorrect. Azure Backup Instant Restore is designed for recovery, making this option false. The discussion shows some debate regarding the specifics of Azure Backup Instant Restore. The provided answer represents the most likely interpretation given the information and commonly understood functionalities of the service.
381
[View Question](https://www.examtopics.com/discussions/databricks/view/56297-exam-az-104-topic-1-question-35-discussion/) Your company's Azure subscription includes Azure virtual machines (VMs) that run Windows Server 2016. One of the VMs is backed up every day using Azure Backup Instant Restore. When the VM becomes infected with data encrypting ransomware, you are required to restore the VM. Which of the following actions should you take? A. You should restore the VM after deleting the infected VM. B. You should restore the VM to any VM within the company's subscription. C. You should restore the VM to a new Azure VM. D. You should restore the VM to an on-premise Windows device.
C. You should restore the VM to a new Azure VM. Restoring the VM to a new Azure VM ensures a clean environment, free from the ransomware infection present on the original VM. This prevents re-infection and potential data loss. Creating a new VM also avoids potential conflicts with existing configurations. **Why other options are incorrect:** * **A. You should restore the VM after deleting the infected VM:** While the discussion shows some disagreement on this point, the consensus and the suggested answer lean towards option C. Deleting the infected VM first might seem logical to remove the threat, but the process of restoring to the deleted VM may not always be possible depending on Azure Backup settings. * **B. You should restore the VM to any VM within the company's subscription:** Restoring to an existing VM within the subscription risks spreading the ransomware to that machine. * **D. You should restore the VM to an on-premise Windows device:** This option is not a practical or supported method for restoring Azure VMs backed up with Azure Backup Instant Restore. **Note:** The discussion shows some debate regarding option A, with some users arguing that restoring to the deleted VM *is* possible, and even preferable for its speed. However, the prevailing opinion and the suggested answer favor option C for its security and simplicity.
382
[View Question](https://www.examtopics.com/discussions/databricks/view/56338-exam-az-104-topic-1-question-12-discussion/) Your company has a Microsoft Azure subscription. The company has datacenters in Los Angeles and New York. You are configuring the two datacenters as geo-clustered sites for site resiliency. You need to recommend an Azure storage redundancy option. You have the following data storage requirements: ✑ Data must be stored on multiple nodes. ✑ Data must be stored on nodes in separate geographic locations. ✑ Data can be read from the secondary location as well as from the primary location. Which of the following Azure storage redundancy options should you recommend? A. Geo-redundant storage B. Read-only geo-redundant storage C. Zone-redundant storage D. Locally redundant storage
B. Read-only geo-redundant storage (RAGRS). The question specifies that data must be stored in separate geographic locations (Los Angeles and New York) and that data must be readable from the secondary location. Geo-redundant storage (GRS) replicates data to a secondary region, but read access to the secondary region is only possible during a failover initiated by Microsoft. Read-access to the secondary region is not permitted otherwise. Read-only geo-redundant storage (RAGRS) satisfies all requirements as data is stored in multiple geographic locations and is readable from both primary and secondary locations. Zone-redundant storage (ZRS) keeps data within the same region, and locally redundant storage (LRS) keeps data within a single data center, neither meeting the requirement for separate geographic locations. **Why other options are incorrect:** * **A. Geo-redundant storage (GRS):** GRS replicates data geographically, but read access to the secondary location is limited to only during a failover event initiated by Microsoft. This does not meet the requirement for read access from the secondary location. * **C. Zone-redundant storage (ZRS):** ZRS replicates data within a single region and across availability zones. This doesn't fulfill the requirement of storing data in separate geographic locations (different regions). * **D. Locally redundant storage (LRS):** LRS stores data within a single data center, failing to satisfy the requirement of separate geographic locations. **Note:** While the suggested answer is B, there is a discussion point about the limitations of GRS and RAGRS. The user Steve1983 highlights the read-only nature of GRS unless a failover is initiated, reinforcing the selection of RAGRS as the more appropriate choice because it explicitly allows reads from the secondary region.
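For an existing account, switching the replication setting to RA-GRS is a one-line change (names are assumptions; ZRS-based options would instead require a conversion):

```powershell
# Change the replication setting of an existing account to read-access geo-redundant
Set-AzStorageAccount -ResourceGroupName "RG1" -Name "storage1" -SkuName "Standard_RAGRS"
```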
383
[View Question](https://www.examtopics.com/discussions/databricks/view/56349-exam-az-104-topic-2-question-15-discussion/) You have an Azure subscription that contains a resource group named RG26. RG26 is set to the West Europe location and is used to create temporary resources for a project. RG26 contains the resources shown in the following table.

| Resource Type | Resource Name | Location |
|-----------------|----------------|---------------|
| Virtual Machine | VM1 | West Europe |
| SQL Database | SQLDB01 | West Europe |
| Storage Account | sa001 | West Europe |

SQLDB01 is backed up to RGV1. When the project is complete, you attempt to delete RG26 from the Azure portal. The deletion fails. You need to delete RG26. What should you do first? A. Delete VM1 B. Stop VM1 C. Stop the backup of SQLDB01 D. Delete sa001
C. Stop the backup of SQLDB01 Explanation: The discussion strongly suggests that an ongoing backup of SQLDB01 is preventing the deletion of the resource group RG26. Stopping the backup allows the resource group to be deleted. The other options are incorrect because deleting or stopping VMs and deleting storage accounts do not directly interfere with resource group deletion. The consensus amongst users in the discussion strongly supports this answer. Why other options are incorrect: * **A. Delete VM1:** Deleting the VM doesn't resolve the backup issue preventing RG26 deletion. * **B. Stop VM1:** Stopping the VM doesn't affect the ongoing backup of SQLDB01. * **D. Delete sa001:** Deleting the storage account doesn't address the backup process preventing RG26 deletion. Note: The provided discussion shows a strong consensus on the correct answer.
384
[View Question](https://www.examtopics.com/discussions/databricks/view/56352-exam-az-104-topic-1-question-17-discussion/) You have an Azure virtual machine (VM) that has a single data disk. You have been tasked with attaching this data disk to another Azure VM. You need to make sure that your strategy allows for the virtual machines to be offline for the least amount of time possible. Which of the following is the action you should take FIRST? A. Stop the VM that includes the data disk. B. Stop the VM that the data disk must be attached to. C. Detach the data disk. D. Delete the VM that includes the data disk.
C. Detach the data disk. Detaching the data disk is the first step to minimize downtime. The disk can be detached from the source VM without stopping it, and then attached to the destination VM. Stopping the VMs adds unnecessary downtime. **Why other options are incorrect:** * **A. Stop the VM that includes the data disk:** While stopping the VM *might* be considered a best practice to prevent data corruption, it is not necessary and adds significant downtime compared to simply detaching the disk. The question explicitly asks for the action to minimize downtime. Many users in the discussion noted that it is possible to detach the disk without stopping either VM. * **B. Stop the VM that the data disk must be attached to:** Similar to option A, this adds unnecessary downtime. The destination VM does not need to be stopped for the disk attachment. * **D. Delete the VM that includes the data disk:** This is destructive and not the appropriate action for moving a data disk to another VM. **Note:** There is disagreement in the discussion regarding the best practice. Some users argue that stopping the source VM is necessary to avoid data corruption. However, the majority consensus, and the most efficient method to meet the requirement of minimal downtime, supports detaching the disk first.
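A hedged sketch of the move with assumed names: the disk is detached from VM1 and the same managed disk is then attached to VM2, without stopping either VM.

```powershell
# Detach the data disk from the source VM
$vm1 = Get-AzVM -ResourceGroupName "RG1" -Name "VM1"
Remove-AzVMDataDisk -VM $vm1 -DataDiskNames "DataDisk1"
Update-AzVM -ResourceGroupName "RG1" -VM $vm1

# Attach the existing managed disk to the destination VM
$disk = Get-AzDisk -ResourceGroupName "RG1" -DiskName "DataDisk1"
$vm2  = Get-AzVM -ResourceGroupName "RG1" -Name "VM2"
Add-AzVMDataDisk -VM $vm2 -Name "DataDisk1" -ManagedDiskId $disk.Id -Lun 0 -CreateOption Attach
Update-AzVM -ResourceGroupName "RG1" -VM $vm2
```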
385
[View Question](https://www.examtopics.com/discussions/databricks/view/56389-exam-az-104-topic-5-question-16-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Floating IP (direct server return) to Disabled B. Session persistence to None C. Floating IP (direct server return) to Enabled D. Session persistence to Client IP
D. Session persistence to Client IP Explanation: To ensure that each client consistently connects to the same web server, you need to enable session persistence on the Azure load balancer. Session persistence associates a client's session (often identified by its IP address) with a specific backend server. Choosing "Client IP" for session persistence means subsequent requests from the same client IP address will be directed to the same server, fulfilling the requirement. Why other options are incorrect: * **A. Floating IP (direct server return) to Disabled:** Floating IPs are not directly related to maintaining session persistence. Disabling it won't ensure a client always connects to the same server. * **B. Session persistence to None:** This would disable session persistence, leading to clients being load balanced across all servers randomly, defeating the objective. * **C. Floating IP (direct server return) to Enabled:** While Floating IPs can improve high availability, they don't inherently manage session persistence. Note: The provided discussion shows a consensus on the correct answer (D).
386
[View Question](https://www.examtopics.com/discussions/databricks/view/56406-exam-az-104-topic-1-question-16-discussion/) Your company has three virtual machines (VMs) that are included in an availability set. You try to resize one of the VMs, which returns an allocation failure message. It is imperative that the VM is resized. Which of the following actions should you take? A. You should only stop one of the VMs. B. You should stop two of the VMs. C. You should stop all three VMs. D. You should remove the necessary VM from the availability set.
C. You should stop all three VMs. Explanation: An allocation failure when resizing a VM within an availability set often indicates a resource constraint within that set. Stopping all VMs in the availability set frees up resources, allowing the resize operation to succeed. This is because the availability set manages resources for its VMs collectively. Stopping only some VMs might not release enough resources. Removing the VM from the availability set is not necessary and would disrupt the high availability provided by the set. Why other options are incorrect: * **A. You should only stop one of the VMs:** This might not release sufficient resources to allow the resize. * **B. You should stop two of the VMs:** Similar to A, this might not be enough to free up the necessary resources. * **D. You should remove the necessary VM from the availability set:** This is not the recommended solution as it compromises the availability set's high-availability features. Note: The provided discussion shows unanimous agreement on answer C.
387
[View Question](https://www.examtopics.com/discussions/databricks/view/56448-exam-az-104-topic-5-question-4-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035200001.png) LB1 is configured as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0035200002.png) You plan to create new inbound NAT rules that meet the following requirements: ✑ Provide Remote Desktop access to VM1 from the internet by using port 3389. ✑ Provide Remote Desktop access to VM2 from the internet by using port 3389. What should you create on LB1 before you can create the new inbound NAT rules? A. a frontend IP address B. a load balancing rule C. a health probe D. a backend pool
A. a frontend IP address To create new inbound NAT rules on LB1 that allow Remote Desktop access (port 3389) to both VM1 and VM2 from the internet, you need a dedicated frontend IP address for each VM. An inbound NAT rule maps a unique frontend IP and port combination to a backend VM, so a single frontend IP address cannot expose both VMs on the same port (3389). Each VM requires its own frontend IP address mapped to its specific private IP address, which means an additional frontend IP address must be created on LB1 before the NAT rules can be created. **Why other options are incorrect:** * **B. a load balancing rule:** Load balancing rules distribute traffic across multiple backend VMs. While this is useful for high availability, it's not directly required *before* creating inbound NAT rules. Inbound NAT rules handle the mapping of public to private ports. * **C. a health probe:** Health probes check the health of backend VMs. This is important for load balancing but isn't a prerequisite for inbound NAT rules. * **D. a backend pool:** A backend pool is a collection of VMs to which traffic is directed. You would need a backend pool to use a load balancing rule, but the question is about creating the NAT rules *before* any other configuration for distributing that traffic. **Note:** The discussion shows disagreement on the correct answer. Some argue that a frontend IP address is necessary (and this is the suggested answer), while others believe a load balancing rule is needed. However, given the need to reach each VM on the same port (3389) from the internet, separate frontend IP addresses are required, which supports the suggested answer A.
388
[View Question](https://www.examtopics.com/discussions/databricks/view/56828-exam-az-104-topic-1-question-1-discussion/) Your company has several departments. Each department has a number of virtual machines (VMs). The company has an Azure subscription that contains a resource group named RG1. All VMs are located in RG1. You want to associate each VM with its respective department. What should you do? A. Create Azure Management Groups for each department. B. Create a resource group for each department. C. Assign tags to the virtual machines. D. Modify the settings of the virtual machines.
C. Assign tags to the virtual machines. Explanation: Tags are key-value pairs that can be applied to Azure resources. In this scenario, assigning tags like "Department=Sales," "Department=Marketing," etc., to the VMs allows you to easily organize and identify VMs belonging to each department. This method avoids the overhead of creating new resource groups (B) or management groups (A) and doesn't require modifying the VMs' core settings (D). Using tags is the most efficient and straightforward way to achieve the desired association within the existing resource group structure. Why other options are incorrect: * **A. Create Azure Management Groups for each department:** Management groups are used for hierarchical organization across multiple subscriptions, which is not necessary here since all VMs are within a single subscription. * **B. Create a resource group for each department:** Creating separate resource groups for each department adds unnecessary complexity. It's a less efficient solution than using tags, especially when all VMs are already within the same resource group. * **D. Modify the settings of the virtual machines:** Modifying VM settings might have unintended consequences and isn't the intended approach for simply associating VMs with departments. Tags are designed specifically for this purpose. Note: The discussion shows unanimous agreement on option C as the correct answer.
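A short sketch of tagging a single VM (the resource names and tag value are assumptions):

```powershell
# Merge a Department tag onto an existing VM without disturbing its other tags
$vm = Get-AzResource -ResourceGroupName "RG1" -Name "VM-Sales-01" `
    -ResourceType "Microsoft.Compute/virtualMachines"
Update-AzTag -ResourceId $vm.ResourceId -Tag @{ Department = "Sales" } -Operation Merge
```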
389
[View Question](https://www.examtopics.com/discussions/databricks/view/56829-exam-az-104-topic-1-question-4-discussion/) Your company has an Azure Active Directory (Azure AD) subscription. You want to implement an Azure AD conditional access policy. The policy must be configured to require members of the Global Administrators group to use Multi-Factor Authentication (MFA) and an Azure AD-joined device when they connect to Azure AD from untrusted locations. Solution: You access the Azure portal to alter the grant control of the Azure AD conditional access policy. Does the solution meet the goal? A. Yes B. No
A. Yes The solution meets the goal because Azure AD Conditional Access policies allow for the configuration of MFA requirements and device compliance policies (like requiring Azure AD-joined devices) based on various conditions, including location. By altering the grant controls within a Conditional Access policy in the Azure portal, an administrator can specify that members of the Global Administrators group must satisfy these conditions (MFA and Azure AD-joined device) when accessing Azure AD from untrusted locations. Why other options are incorrect: B. No: This is incorrect because the described solution directly addresses the problem statement. The Azure portal's Conditional Access feature provides the necessary controls. Note: The discussion highlights a potential point of confusion. There's mention of another similar question where using the MFA page in the Azure portal (rather than the Conditional Access policy page) is considered incorrect for setting up a conditional access policy. This emphasizes that the correct path to achieve the described goal is through the Conditional Access Policy settings within the Azure portal, not through separate MFA settings.
390
[View Question](https://www.examtopics.com/discussions/databricks/view/56832-exam-az-104-topic-1-question-11-discussion/) Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premise Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You restart the NetLogon service on a domain controller. Does the solution meet the goal? A. Yes B. No
B. No. Restarting the NetLogon service does not immediately replicate user information to Azure AD. To immediately replicate the new user account, you need to use the Azure AD Connect synchronization tool and run `Start-ADSyncSyncCycle -PolicyType Delta` from a PowerShell prompt. This performs a delta synchronization, syncing only the changes since the last synchronization cycle, which is much faster than a full synchronization (`Start-ADSyncSyncCycle -PolicyType Initial`). The discussion shows some disagreement on whether a full sync is ever necessary in this scenario, but the consensus is that a delta sync is the correct and faster approach for immediate replication. WHY OTHER OPTIONS ARE INCORRECT: A. Yes is incorrect because restarting the NetLogon service is not the correct method to immediately synchronize changes from on-premises Active Directory to Azure AD. The NetLogon service is responsible for managing network logon information, not directly for synchronizing user accounts with Azure AD.
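The delta cycle mentioned above is started on the Azure AD Connect server (DirSync1 in this scenario):

```powershell
# Run on the Azure AD Connect server; a delta cycle syncs only recent changes
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta
```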
391
[View Question](https://www.examtopics.com/discussions/databricks/view/56866-exam-az-104-topic-4-question-73-discussion/) HOTSPOT - You have an Azure subscription. You need to use an Azure Resource Manager (ARM) template to create a virtual machine that will have multiple data disks. How should you complete the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0032800001.png)
The correct answer involves using the `copy` element within the ARM template to create multiple data disks. The `copy` element's `name` property should be set to `dataDisks`, and the `count` property should specify the desired number of data disks. The `copyIndex()` function is then used within the individual data disk definitions to differentiate each disk (for example, `[copyIndex('dataDisks')]` in a property copy loop). This allows the template to iterate and create the specified number of data disks with unique LUNs. The provided suggested answer image shows this implementation. The discussion strongly supports this answer, with multiple users confirming its accuracy and referencing Microsoft documentation. A fragment illustrating the construct follows below. **Why other options are incorrect:** The question is a hotspot question, and without the image showing incorrect options, it's impossible to specify why they are incorrect. However, any solution that *doesn't* use the `copy` loop to generate multiple data disk objects would be incorrect, as it would not dynamically create the required number of disks. A manual entry for each data disk would not be scalable or efficient.
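A hedged fragment showing the shape of such a property copy loop inside `storageProfile` (the disk size and count are assumptions; only the `copy` construct reflects the question):

```json
"storageProfile": {
  "copy": [
    {
      "name": "dataDisks",
      "count": 3,
      "input": {
        "lun": "[copyIndex('dataDisks')]",
        "createOption": "Empty",
        "diskSizeGB": 128
      }
    }
  ]
}
```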
392
[View Question](https://www.examtopics.com/discussions/databricks/view/56867-exam-az-104-topic-4-question-3-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster named AKS1. You need to deploy a YAML file to AKS1. Solution: From Azure CLI, you run azcopy. Does this meet the goal? A. Yes B. No
B. No Azcopy is used for copying blobs or files to and from Azure storage. It is not the correct tool to deploy a YAML file to an AKS cluster. The correct command to deploy a YAML file to an AKS cluster is `kubectl apply -f example.yaml`, as noted in the discussion. Therefore, using azcopy does not meet the stated goal. The discussion overwhelmingly supports answer B. While some users mention the exam might have changed or the question might be outdated, the core concept of using `kubectl apply` to deploy YAML files to AKS remains consistent.
393
[View Question](https://www.examtopics.com/discussions/databricks/view/56868-exam-az-104-topic-4-question-69-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster named AKS1. You need to deploy a YAML file to AKS1. Solution: From Azure Cloud Shell, you run az aks. Does this meet the goal? A. Yes B. No
B. No. The command `az aks` is used to manage AKS clusters themselves (create, update, delete, etc.), not to deploy YAML files *to* an already existing AKS cluster. To deploy a YAML file to an AKS cluster, you need to use the `kubectl apply -f .yaml` command. The discussion shows a strong consensus on this point. WHY OTHER OPTIONS ARE INCORRECT: Option A is incorrect because the provided solution (`az aks`) does not deploy YAML files to a Kubernetes cluster. It manages the cluster itself. The discussion clearly indicates that `kubectl apply -f example.yaml` is the correct command. Note that there is some conflicting information within the discussion regarding whether the question itself might have multiple versions or be slightly ambiguous; however, the overwhelming consensus supports answer B.
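For reference, the working sequence from Azure Cloud Shell looks roughly like this (the cluster, resource group, and file names are assumptions):

```powershell
# Merge AKS1 credentials into the local kubeconfig, then deploy the manifest
az aks get-credentials --resource-group RG1 --name AKS1
kubectl apply -f example.yaml
```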
394
[View Question](https://www.examtopics.com/discussions/databricks/view/56963-exam-az-104-topic-1-question-33-discussion/) Your company has an Azure Active Directory (Azure AD) subscription. You need to deploy five virtual machines (VMs) to your company's virtual network subnet. The VMs will each have both a public and private IP address. Inbound and outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of security groups needed for this configuration? A. 4 B. 3 C. 2 D. 1
The correct answer is **D. 1**. Since all five VMs require identical inbound and outbound security rules, only one Network Security Group (NSG) is needed. The NSG can be associated with all five VMs, effectively applying the same security rules to each. Other Options: Options A, B, and C are incorrect because they propose more NSGs than necessary. The problem statement explicitly states that the inbound and outbound security rules must be identical for all VMs. Using multiple NSGs would be redundant and unnecessarily complex. There is some disagreement in the discussion, with one user suggesting 3 NSGs (one for public IPs, one for private IPs, and one for the common rules). However, a single NSG is sufficient and the most efficient solution, as confirmed by the majority of responses.
395
[View Question](https://www.examtopics.com/discussions/databricks/view/56992-exam-az-104-topic-1-question-8-discussion/) Your company's Azure solution uses Multi-Factor Authentication (MFA) for users outside the office. The Per Authentication option is configured as the usage model. After acquiring a smaller business and adding new employees to Azure Active Directory (Azure AD), these employees also need to use MFA. To achieve this, the Per Enabled User setting must be set for the usage model. Solution: You create a new Multi-Factor Authentication provider with a backup from the existing Multi-Factor Authentication provider data. Does the solution meet the goal? A. Yes B. No
B. No. The proposed solution of creating a new MFA provider with a backup of the existing data does *not* change the usage model from "Per Authentication" to "Per Enabled User". Creating a new provider simply creates a duplicate; it doesn't alter the fundamental setting that dictates how MFA is applied (to each authentication attempt or to each enabled user). To meet the goal, the MFA settings must be explicitly changed to the "Per Enabled User" model. The consensus among the discussion responses is that this solution is incorrect.
396
[View Question](https://www.examtopics.com/discussions/databricks/view/57002-exam-az-104-topic-1-question-18-discussion/) Your company has an Azure subscription. You need to deploy a number of Azure virtual machines (VMs) using Azure Resource Manager (ARM) templates. You have been informed that the VMs will be included in a single availability set. You are required to make sure that the ARM template you configure allows for as many VMs as possible to remain accessible in the event of fabric failure or maintenance. Which of the following is the value that you should configure for the platformFaultDomainCount property? A. 10 B. 30 C. Min Value D. Max Value
D. Max Value Explanation: The `platformFaultDomainCount` property in an Azure ARM template defines the number of fault domains used by an availability set. Fault domains are groups of hardware that share a common power source and network switch. Distributing VMs across multiple fault domains increases resilience against failures. Azure's maximum value for `platformFaultDomainCount` is 3. Setting it to "Max Value" (which resolves to 3) ensures the highest level of redundancy and maximizes the number of VMs that remain accessible during failures or maintenance. Options A and B are incorrect because they exceed the maximum supported value. Option C is less desirable because it only utilizes a single fault domain, leaving the VMs vulnerable to a single point of failure. Note: The provided discussion does not present conflicting opinions.
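Outside of templates, the same property can be set when creating the availability set with PowerShell; a sketch with assumed names (3 fault domains is the maximum in most regions):

```powershell
# Spread VMs across the maximum number of fault domains
New-AzAvailabilitySet -ResourceGroupName "RG1" -Name "AvSet1" -Location "westeurope" `
    -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 5 -Sku Aligned
```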
397
[View Question](https://www.examtopics.com/discussions/databricks/view/57004-exam-az-104-topic-1-question-32-discussion/) Your company has an Azure Active Directory (Azure AD) subscription. You need to deploy five virtual machines (VMs) to your company's virtual network subnet. The VMs will each have both a public and private IP address. Inbound and outbound security rules for all of these virtual machines must be identical. Which of the following is the least amount of network interfaces needed for this configuration? A. 5 B. 10 C. 20 D. 40
The correct answer is A. 5 Each virtual machine (VM) in Azure requires a network interface (NIC) to connect to the virtual network. Since the question specifies that five VMs need to be deployed, and each VM needs at least one NIC, the minimum number of NICs required is five. While each NIC can have multiple IP addresses (both public and private), the need for one NIC per VM remains. The identical inbound and outbound security rules can be applied through Network Security Groups (NSGs) attached to the NICs or the subnet. Why other options are incorrect: * **B. 10, C. 20, D. 40:** These options suggest more NICs than necessary. A single NIC can handle both a public and a private IP address for each VM. There's no requirement for separate NICs for public and private IP addresses within a single VM. Note: The discussion shows unanimous agreement on the correct answer.
398
[View Question](https://www.examtopics.com/discussions/databricks/view/57017-exam-az-104-topic-13-question-2-discussion/) HOTSPOT - You need to meet the connection requirements for the New York office. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Scenario: Connect the New York office to VNet1 over the Internet by using an encrypted connection.
To connect the New York office to VNet1 over the internet using an encrypted connection, you need to: 1. **Create a virtual network gateway and a local network gateway.** The virtual network gateway resides in Azure and acts as the VPN endpoint on the Azure side. The local network gateway represents your on-premises network (the New York office) and its VPN device. 2. **Configure a site-to-site VPN connection.** This establishes the encrypted connection between the virtual network gateway and the local network gateway. This uses IPSec/IKE to encrypt the traffic traversing the internet. **Why other options are incorrect (based on provided text):** The discussion explicitly states that ExpressRoute is incorrect because it's a private connection and doesn't use the internet. The question specifically requires a connection *over the internet*. **Note:** The provided text and discussion strongly support the given answer. There is no evidence of conflicting opinions regarding the correct solution within the provided context.
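A minimal PowerShell sketch of those two steps (the gateway, names, addresses, and shared key are all assumptions):

```powershell
# Represent the New York office's VPN device and address space
$lng = New-AzLocalNetworkGateway -Name "NewYork-lng" -ResourceGroupName "RG1" -Location "eastus" `
    -GatewayIpAddress "198.51.100.20" -AddressPrefix "192.168.0.0/16"

# Create the encrypted site-to-site (IPsec) connection to VNet1's gateway
$gw = Get-AzVirtualNetworkGateway -Name "VNet1-gw" -ResourceGroupName "RG1"
New-AzVirtualNetworkGatewayConnection -Name "NY-to-VNet1" -ResourceGroupName "RG1" -Location "eastus" `
    -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng -ConnectionType IPsec -SharedKey "Abc123.!"
```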
399
[View Question](https://www.examtopics.com/discussions/databricks/view/57018-exam-az-104-topic-16-question-2-discussion/) You need to recommend a solution to automate the configuration for the finance department users. The solution must meet the technical requirements. What should you include in the recommendation? A. Azure AD B2C B. dynamic groups and conditional access policies C. Azure AD Identity Protection D. an Azure logic app and the Microsoft Identity Management (MIM) client
B. dynamic groups and conditional access policies Explanation: Dynamic groups automatically add or remove users based on defined rules (e.g., membership in a specific organizational unit or possession of certain attributes). Conditional Access policies then apply specific access controls based on the user's group membership, location, device, or other attributes. This combination provides an automated and granular solution to configure access for the finance department. Why other options are incorrect: * **A. Azure AD B2C:** While Azure AD B2C is useful for customer-facing identity management, it's not the best fit for managing internal users within an organization. * **C. Azure AD Identity Protection:** This service focuses on detecting and mitigating identity-related risks. While important for security, it doesn't directly automate user configuration. * **D. an Azure logic app and the Microsoft Identity Management (MIM) client:** This option is overly complex and less efficient for this task than the combination of dynamic groups and conditional access policies. MIM is also a legacy solution. Note: The discussion mentions that "Many given answers here are wrong," indicating some disagreement on the correct answer. However, the highly upvoted responses consistently support option B as the correct solution.
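A hedged sketch of the dynamic-group half, using the AzureAD Preview PowerShell module (the group name and the department attribute value are assumptions); the conditional access policy is then scoped to this group in the portal:

```powershell
# Users whose department attribute is "Finance" are added and removed automatically
New-AzureADMSGroup -DisplayName "Finance-Users" `
    -MailEnabled $false -MailNickname "financeusers" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Finance")' `
    -MembershipRuleProcessingState "On"
```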
400
[View Question](https://www.examtopics.com/discussions/databricks/view/57044-exam-az-104-topic-1-question-2-discussion/) Your company has an Azure Active Directory (Azure AD) subscription. You want to implement an Azure AD conditional access policy. The policy must be configured to require members of the Global Administrators group to use Multi-Factor Authentication (MFA) and an Azure AD-joined device when they connect to Azure AD from untrusted locations. Solution: You access the multi-factor authentication page to alter the user settings. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal. Modifying user settings on the multi-factor authentication page only applies MFA on a per-user basis. The requirement is to implement a *conditional access policy*, which allows for more granular control based on conditions like user group, location, and device type. A conditional access policy can enforce MFA and Azure AD-joined device requirements for the Global Administrators group specifically when accessing Azure AD from untrusted locations. The suggested solution only addresses MFA and doesn't incorporate the conditional aspects of location and device type. Why other options are incorrect: A. Yes - Incorrect. This option fails to understand the core difference between setting MFA on a per-user basis and implementing a conditional access policy to manage access based on multiple conditions (user group, location, and device type). Note: While the provided discussion unanimously agrees on answer B, the detailed explanation of *why* B is correct varies slightly in the level of technical detail.
401
[View Question](https://www.examtopics.com/discussions/databricks/view/57045-exam-az-104-topic-1-question-10-discussion/) Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premise Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You use Active Directory Sites and Services to force replication of the Global Catalog on a domain controller. Does the solution meet the goal? A. Yes B. No
B. No The solution of using Active Directory Sites and Services to force replication of the Global Catalog does *not* immediately replicate the user information to Azure AD. Forcing Global Catalog replication only ensures that all domain controllers have the updated information; it doesn't trigger the synchronization process with Azure AD Connect. To immediately replicate the user information, you need to initiate a synchronization cycle through the Azure AD Connect Synchronization Service Manager, as indicated in the discussion comments. Other Options: Option A is incorrect because forcing Global Catalog replication alone is insufficient to immediately update Azure AD. The synchronization between on-premises Active Directory and Azure AD is a separate process managed by Azure AD Connect.
402
[View Question](https://www.examtopics.com/discussions/databricks/view/57055-exam-az-104-topic-12-question-1-discussion/) You discover that VM3 does NOT meet the technical requirements. You need to verify whether the issue relates to the NSGs. What should you use? A. Diagram in VNet1 B. Diagnostic settings in Azure Monitor C. Diagnose and solve problems in Traffic Manager profiles D. The security recommendations in Azure Advisor E. IP flow verify in Azure Network Watcher
E. IP flow verify in Azure Network Watcher **Explanation:** The question specifically asks how to verify if an issue with VM3 relates to Network Security Groups (NSGs). Azure Network Watcher's IP flow verify feature is designed precisely for this purpose. It allows you to trace network traffic and identify if NSGs are blocking traffic to or from a virtual machine. The `Test-AzNetworkWatcherIPFlow` PowerShell cmdlet, mentioned in the discussion, further supports this. **Why other options are incorrect:** * **A. Diagram in VNet1:** A diagram provides a visual representation of the network, but doesn't actively test connectivity or NSG rules. * **B. Diagnostic settings in Azure Monitor:** Azure Monitor is broader; it collects logs and metrics, but it's not the specific tool to diagnose NSG-related connectivity problems. * **C. Diagnose and solve problems in Traffic Manager profiles:** Traffic Manager handles DNS load balancing; it's unrelated to VM connectivity or NSGs. * **D. The security recommendations in Azure Advisor:** Azure Advisor offers general security recommendations. While helpful for overall security posture, it won't pinpoint a specific NSG issue blocking traffic to VM3. **Note:** The discussion includes several opinions supporting the correct answer, with users citing specific PowerShell cmdlets for verification. There is no significant disagreement on the best approach.
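As a hedged sketch of that verification (the region, resource names, addresses, and ports are assumptions for the example), IP flow verify can be driven from PowerShell:

```powershell
# Minimal sketch: ask Network Watcher whether inbound TCP traffic to VM3
# would be allowed or denied, and by which NSG rule.
$nw = Get-AzNetworkWatcher -Location 'eastus'
$vm = Get-AzVM -ResourceGroupName 'RG1' -Name 'VM3'
Test-AzNetworkWatcherIPFlow -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id `
    -Direction Inbound -Protocol TCP `
    -LocalIPAddress '10.0.0.4' -LocalPort '3389' `
    -RemoteIPAddress '203.0.113.10' -RemotePort '60000'
# The result reports Allow or Deny and, when denied, the rule responsible.
```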
403
[View Question](https://www.examtopics.com/discussions/databricks/view/57060-exam-az-104-topic-1-question-21-discussion/) Your company has an Azure Active Directory (Azure AD) tenant that is configured for hybrid coexistence with the on-premises Active Directory domain. The on-premises virtual environment consists of virtual machines (VMs) running on Windows Server 2012 R2 Hyper-V host servers. You have created some PowerShell scripts to automate the configuration of newly created VMs. You plan to create several new VMs. You need a solution that ensures the scripts are run on the new VMs. Which of the following is the best solution? A. Configure a SetupComplete.cmd batch file in the %windir%\setup\scripts directory. B. Configure a Group Policy Object (GPO) to run the scripts as logon scripts. C. Configure a Group Policy Object (GPO) to run the scripts as startup scripts. D. Place the scripts in a new virtual hard disk (VHD).
A. Configure a SetupComplete.cmd batch file in the %windir%\setup\scripts directory. **Explanation:** The `SetupComplete.cmd` file is executed by the Windows operating system during the setup process, after the installation is complete but before the user logs in. This ensures the scripts run automatically on each newly created VM regardless of any user login or GPO configuration. **Why other options are incorrect:** * **B. Configure a Group Policy Object (GPO) to run the scripts as logon scripts:** Logon scripts require a user to log on, which is not guaranteed immediately after VM creation, so the scripts might never run. (Some discussion participants assume GPOs do not apply here, but the question describes a hybrid environment with on-premises VMs, so GPOs could apply; the timing problem remains.) * **C. Configure a Group Policy Object (GPO) to run the scripts as startup scripts:** Startup scripts depend on the new VM already being domain-joined and having received the GPO, neither of which is guaranteed for a freshly created VM. * **D. Place the scripts in a new virtual hard disk (VHD):** Simply placing the scripts on a VHD does not automatically execute them. Additional mechanisms are required to trigger script execution. **Note:** There is disagreement among the discussion participants regarding the correct answer. Some argue for option C (using GPOs), but the most widely accepted and technically sound solution is A, given the requirement to run the script automatically *after* the VM is created and before any user interaction.
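For illustration, a minimal sketch (the target script path is a hypothetical example) of staging such a file on the reference image before generalizing it:

```powershell
# Hedged sketch: stage SetupComplete.cmd so Windows Setup runs the custom
# configuration script after installation completes, before any logon.
$scriptsDir = 'C:\Windows\Setup\Scripts'
New-Item -Path $scriptsDir -ItemType Directory -Force | Out-Null
@(
    '@echo off'
    'powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Configure-NewVM.ps1'
) | Set-Content -Path (Join-Path $scriptsDir 'SetupComplete.cmd')
```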
404
[View Question](https://www.examtopics.com/discussions/databricks/view/57061-exam-az-104-topic-1-question-31-discussion/) Your company has two on-premises servers named SRV01 and SRV02. Developers have created an application that runs on SRV01. The application calls a service on SRV02 by IP address. You plan to migrate the application to Azure virtual machines (VMs). You have configured two VMs on a single subnet in an Azure virtual network. You need to configure the two VMs with static internal IP addresses. What should you do? A. Run the New-AzureRMVMConfig PowerShell cmdlet. B. Run the Set-AzureSubnet PowerShell cmdlet. C. Modify the VM properties in the Azure Management Portal. D. Modify the IP properties in Windows Network and Sharing Center. E. Run the Set-AzureStaticVNetIP PowerShell cmdlet.
C. Modify the VM properties in the Azure Management Portal. **Explanation:** The most straightforward and currently supported method to assign static internal IP addresses to Azure VMs is through the Azure portal. While PowerShell can be used (as shown in the discussion), the portal provides a user-friendly interface for this task. `Set-AzureStaticVNetIP` is outdated and deprecated, as noted in the discussion. The other options are incorrect because they don't directly address assigning static internal IPs to the VMs. **Why other options are incorrect:** * **A. Run the New-AzureRMVMConfig PowerShell cmdlet:** This cmdlet is for general VM configuration, not specifically for IP address management. * **B. Run the Set-AzureSubnet PowerShell cmdlet:** This cmdlet manages subnets within a virtual network, not the IP addresses assigned to individual VMs. * **D. Modify the IP properties in Windows Network and Sharing Center:** This would only modify the IP configuration within the VM's operating system, not the Azure-managed IP address. * **E. Run the Set-AzureStaticVNetIP PowerShell cmdlet:** This cmdlet is obsolete and should not be used for new VMs. **Note:** There is a disagreement in the discussion regarding the correct answer. Some users suggest option E (`Set-AzureStaticVNetIP`), but this is outdated and deprecated. The accepted answer, and the most reliable approach for current Azure environments, is option C.
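For reference, a hedged sketch of the same change in current Az PowerShell (the NIC name, resource group, and address are assumptions); the portal remains the simplest route:

```powershell
# Minimal sketch: switch the NIC's first IP configuration to a static
# private address and push the change back to Azure.
$nic = Get-AzNetworkInterface -Name 'vm1-nic' -ResourceGroupName 'RG1'
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
$nic.IpConfigurations[0].PrivateIpAddress = '10.0.0.10'
Set-AzNetworkInterface -NetworkInterface $nic
```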
405
[View Question](https://www.examtopics.com/discussions/databricks/view/57078-exam-az-104-topic-1-question-14-discussion/) Your company has an Azure subscription that includes a storage account, a resource group, a blob container, and a file share. A colleague named Jon Ross uses a single Azure Resource Manager (ARM) template to deploy a virtual machine and an additional Azure Storage account. You want to review the ARM template that was used by Jon Ross. Solution: You access the Resource Group blade. Does the solution meet the goal? A. Yes B. No
A. Yes The solution meets the goal because the Resource Group blade in the Azure portal provides access to the deployment history of a resource group. Since Jon Ross deployed the VM and storage account using an ARM template within a resource group, the deployment history within that resource group will contain the ARM template used. By accessing the Resource Group blade, you can review this deployment history and therefore review the ARM template. Why other options are incorrect: B. No - This is incorrect because the Resource Group blade does, in fact, provide a method to review the ARM template used for deployments within that resource group. Note: While the provided discussion overwhelmingly supports option A, it is still important to remember that the exact path to accessing and viewing the ARM template within the deployment history might vary slightly depending on the Azure portal version and configuration.
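As a hedged sketch of the equivalent review with Az PowerShell (the resource group and deployment names are assumptions), the deployment history can be listed and a deployment's template exported:

```powershell
# Minimal sketch: enumerate the deployments recorded for the resource group,
# then save one deployment's ARM template to disk for review.
Get-AzResourceGroupDeployment -ResourceGroupName 'RG1' |
    Select-Object DeploymentName, Timestamp, ProvisioningState
Save-AzResourceGroupDeploymentTemplate -ResourceGroupName 'RG1' `
    -DeploymentName 'JonRossDeployment' -Path '.\template.json'
```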
406
[View Question](https://www.examtopics.com/discussions/databricks/view/57081-exam-az-104-topic-1-question-9-discussion/) Your company has an Azure Active Directory (Azure AD) tenant named weyland.com that is configured for hybrid coexistence with the on-premises Active Directory domain. You have a server named DirSync1 that is configured as a DirSync server. You create a new user account in the on-premise Active Directory. You now need to replicate the user information to Azure AD immediately. Solution: You run the `Start-ADSyncSyncCycle -PolicyType Initial` PowerShell cmdlet. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal because using `Start-ADSyncSyncCycle -PolicyType Initial` performs a full synchronization of all directory data, which is not immediate. To replicate the user information immediately, the `Start-ADSyncSyncCycle -PolicyType Delta` cmdlet should be used, as this only synchronizes changes since the last sync. The discussion shows significant disagreement on the correct answer, with some arguing that a full initial sync *could* eventually include the new user, and others correctly stating that a delta sync is required for immediacy. This answer reflects the majority opinion and the more technically accurate approach for immediate replication. **Why other options are incorrect:** * **A. Yes:** This is incorrect because using `-PolicyType Initial` will perform a full synchronization, which takes considerably longer than a delta synchronization. The question specifies the need for *immediate* replication.
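A minimal sketch of the delta cycle, run on the Azure AD Connect (DirSync1) server:

```powershell
# Push only the changes made since the last synchronization cycle.
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta
```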
407
[View Question](https://www.examtopics.com/discussions/databricks/view/57109-exam-az-104-topic-1-question-30-discussion/) Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as a listener for the availability group. Solution: You enable Floating IP. Does the solution meet the goal? A. Yes B. No
A. Yes Enabling Floating IP on the Azure internal load balancer can meet the goal of configuring the load balancer as a listener for the SQL Server Always On availability group. Floating IP ensures that the IP address used by the backend pool remains consistent even if a VM restarts or is replaced. This consistency is crucial for maintaining the availability group listener. The load balancer routes traffic based on rules, and with direct server return (enabled by Floating IP), only the instance currently owning the listener resource receives traffic. **Why other options are incorrect:** B. No is incorrect because enabling Floating IP is a valid and commonly used solution for this scenario, as confirmed by multiple users in the discussion. The discussion highlights that Floating IP, often used with direct server return, is a suitable method to maintain a consistent IP address for the availability group listener, even during failover. **Note:** While the discussion overwhelmingly supports option A, there's no explicit counter-argument presented to definitively disprove the possibility of other solutions existing or option B being correct under specific, unstated circumstances. The provided information strongly favors option A as the correct answer within the context of the given scenario.
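As a hedged sketch (assuming an existing internal load balancer with a frontend configuration, backend pool, and health probe already defined; all names are illustrative), a rule with Floating IP enabled might look like this:

```powershell
# Minimal sketch: add a load-balancing rule for the availability group
# listener with Floating IP (direct server return) enabled.
$lb = Get-AzLoadBalancer -Name 'lb-sql' -ResourceGroupName 'RG1'
$lb | Add-AzLoadBalancerRuleConfig -Name 'sqlAGListener' -Protocol Tcp `
    -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $lb.Probes[0] | Set-AzLoadBalancer
```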
408
[View Question](https://www.examtopics.com/discussions/databricks/view/57111-exam-az-104-topic-1-question-3-discussion/) Your company has an Azure Active Directory (Azure AD) subscription. You want to implement an Azure AD conditional access policy. The policy must be configured to require members of the Global Administrators group to use Multi-Factor Authentication and an Azure AD-joined device when they connect to Azure AD from untrusted locations. Solution: You access the Azure portal to alter the session control of the Azure AD conditional access policy. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal because simply accessing the Azure portal to alter session control is insufficient. The question requires a conditional access policy that specifically targets Global Administrators, enforces MFA, *and* requires Azure AD-joined devices for access from untrusted locations. Altering session control is a part of the process, but it doesn't complete the necessary configuration to fulfill all the stated requirements. The solution omits crucial steps of defining the specific conditions (users, locations, devices) and the corresponding access controls (MFA, device type) within the conditional access policy. Why other options are incorrect: A. Yes - This is incorrect because, as explained above, accessing the Azure portal and altering session control is only one step in a multi-step process to create the required conditional access policy. The suggested solution lacks the necessary configuration to meet all requirements of the policy. Note: The discussion shows some disagreement on the exact implementation steps. However, the consensus is that simply accessing the Azure portal to alter session control is not sufficient to achieve the specified policy requirements.
409
[View Question](https://www.examtopics.com/discussions/databricks/view/57225-exam-az-104-topic-1-question-37-discussion/) Your company has an Azure subscription that includes a Recovery Services vault. You want to use Azure Backup to schedule a backup of your company's virtual machines (VMs) to the Recovery Services vault. Which of the following VMs can you back up? Choose all that apply. A. VMs that run Windows 10. B. VMs that run Windows Server 2012 or higher. C. VMs that have NOT been shut down. D. VMs that run Debian 8.2+. E. VMs that have been shut down.
A, B, C, D, and E. Azure Backup supports backups of VMs running Windows 10, Windows Server 2012 or higher, Debian 8.2+, and VMs in both running and shut-down states. The provided Microsoft documentation and user comments support this. WHY OTHER OPTIONS ARE INCORRECT (if applicable): There are no incorrect options according to the provided information and user discussion. Note that there is some disagreement in the discussion regarding Windows 10 support, but the prevailing and supported answer includes it.
410
[View Question](https://www.examtopics.com/discussions/databricks/view/57347-exam-az-104-topic-1-question-36-discussion/) You administer a solution in Azure that is currently having performance issues. You need to find the cause of the performance issues pertaining to metrics on the Azure infrastructure. Which of the following is the tool you should use? A. Azure Traffic Analytics B. Azure Monitor C. Azure Activity Log D. Azure Advisor
The correct answer is B. Azure Monitor. Azure Monitor is a comprehensive monitoring service that collects and analyzes telemetry from your Azure resources and applications. It provides performance metrics, logs, and insights that are crucial for identifying and troubleshooting performance bottlenecks within your Azure infrastructure. It's designed specifically for this purpose. Why other options are incorrect: * **A. Azure Traffic Analytics:** This tool focuses on analyzing network traffic within your Azure environment. While network issues can impact performance, it doesn't provide the broad range of metrics needed for general infrastructure performance analysis. * **C. Azure Activity Log:** This service logs administrative actions performed on your Azure resources. It's valuable for auditing and security, but it doesn't contain the performance metrics you need to diagnose performance issues. * **D. Azure Advisor:** This provides recommendations for improving the overall performance, security, and cost of your Azure resources. It's helpful for proactive optimization, but it's not a diagnostic tool for investigating existing performance problems. Note: The discussion shows overwhelming agreement that Azure Monitor is the correct answer.
411
[View Question](https://www.examtopics.com/discussions/databricks/view/57418-exam-az-104-topic-16-question-1-discussion/) HOTSPOT - You need to implement Role1. Which command should you run before you create Role1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0058900009.png) (Image shows a blank area for answer selection) Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0058900010.png) (Image shows `Get-AzRoleDefinition -Name "Reader" | ConvertTo-Json` selected)
The command to run before creating Role1 is `Get-AzRoleDefinition -Name "Reader" | ConvertTo-Json`. This command retrieves the definition of the built-in "Reader" role in JSON format. This JSON definition then serves as a template or basis for creating the custom Role1, allowing you to modify it as needed. The `ConvertTo-Json` part is crucial as it transforms the output into a format suitable for creating a new custom role. Several users in the discussion confirm this as the correct answer, and one user explicitly states that this question appeared on the AZ-104 exam. WHY OTHER OPTIONS ARE INCORRECT: The question doesn't provide other options, so there are no other options to discuss. The focus is on the single necessary step before creating a custom role based on an existing one.
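A minimal sketch of that workflow (the file name and the edits are illustrative):

```powershell
# Export the built-in Reader role definition as JSON to use as a template.
Get-AzRoleDefinition -Name "Reader" | ConvertTo-Json | Out-File .\role1.json
# Edit role1.json: set IsCustom to true, clear the Id, rename it to "Role1",
# and adjust Actions and AssignableScopes as needed. Then register it:
New-AzRoleDefinition -InputFile .\role1.json
```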
412
[View Question](https://www.examtopics.com/discussions/databricks/view/57481-exam-az-104-topic-1-question-13-discussion/) Your company has an Azure subscription that includes a storage account, a resource group, a blob container, and a file share. A colleague named Jon Ross makes use of a solitary Azure Resource Manager (ARM) template to deploy a virtual machine and an additional Azure Storage account. You want to review the ARM template that was used by Jon Ross. Solution: You access the Virtual Machine blade. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal. Accessing the Virtual Machine blade only provides information about the virtual machine itself, not the ARM template used to deploy it. To review the ARM template, you need to access the deployment history of the resource group where the virtual machine and storage account were deployed. This history will show all deployments, including the ARM template used. The consensus among users in the discussion is that option B is correct.
413
[View Question](https://www.examtopics.com/discussions/databricks/view/57482-exam-az-104-topic-1-question-15-discussion/) Your company has an azure subscription that includes a storage account, a resource group, a blob container and a file share. A colleague named Jon Ross makes use of a solitary Azure Resource Manager (ARM) template to deploy a virtual machine and an additional Azure Storage account. You want to review the ARM template that was used by Jon Ross. Solution: You access the Container blade. Does the solution meet the goal? A. Yes B. No
B. No. Accessing the Container blade will only show you the contents of the blob container; it does not provide access to the ARM template used to deploy resources. To review the ARM template used by Jon Ross, you would need to access the Resource Group blade (where the virtual machine and new storage account were deployed) and examine the deployment history. This history will show the ARM template used in the deployment. The discussion shows a consensus that the solution is incorrect. Several users independently explain why accessing the Container blade is insufficient to meet the goal.
414
[View Question](https://www.examtopics.com/discussions/databricks/view/57485-exam-az-104-topic-1-question-25-discussion/) Your company's Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to use static routing. A site-to-site VPN connection exists between your company's on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you are able to access VirtualNetworkB from the company's on-premises network. However, you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You need to ensure that a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You choose the Allow gateway transit setting on VirtualNetworkB. Does this solution meet the goal? A. Yes B. No
B. No **Explanation:** The proposed solution of enabling "Allow gateway transit" on VirtualNetworkB is insufficient to establish a connection from the Windows 10 workstation to VirtualNetworkB. After changes to the Azure network topology (like adding peering), the point-to-site (P2S) VPN client configuration on the Windows 10 workstation needs to be re-downloaded and reinstalled. Simply enabling "Allow gateway transit" doesn't update the client's understanding of the network's new configuration. The client needs to be reconfigured to recognize the newly established peering connection. **Why other options are incorrect:** A. Yes is incorrect because it doesn't address the necessity of updating the P2S VPN client configuration on the Windows 10 workstation after the network topology change. Enabling "Allow gateway transit" alone won't solve the connection problem. The discussion shows a consensus that option B is correct, although there is some initial uncertainty reflected in the early posts.
415
[View Question](https://www.examtopics.com/discussions/databricks/view/57486-exam-az-104-topic-1-question-24-discussion/) Your company's Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to use static routing. A site-to-site VPN connection exists between your company's on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you are able to access VirtualNetworkB from the company's on-premises network. However, you find that you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You have to make sure that a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You choose the Allow gateway transit setting on VirtualNetworkA. Does the solution meet the goal? A. Yes B. No
B. No **Explanation:** The provided solution, enabling "Allow gateway transit" on VirtualNetworkA, is insufficient to resolve the connectivity issue from the Windows 10 workstation to VirtualNetworkB. While this setting allows traffic from the on-premises network to traverse VirtualNetworkA to reach VirtualNetworkB, it doesn't automatically update the point-to-site VPN configuration on the Windows 10 workstation. The Windows 10 VPN client needs to be re-downloaded and reinstalled to recognize the network topology changes after the virtual network peering was implemented. **Why other options are incorrect:** * **A. Yes:** This is incorrect because simply enabling "Allow gateway transit" doesn't automatically update the client configuration on the Windows 10 workstation. The client needs to be reconfigured to reflect the new peering. **Note:** There is some disagreement in the discussion regarding whether enabling "Allow gateway transit" on VirtualNetworkB is also necessary. However, the core consensus is that reinstalling the VPN client on the Windows 10 workstation is the crucial step to achieve connectivity.
416
[View Question](https://www.examtopics.com/discussions/databricks/view/57495-exam-az-104-topic-1-question-27-discussion/) Your company has virtual machines (VMs) hosted in Microsoft Azure. The VMs are located in a single Azure virtual network named VNet1. The company has users that work remotely. The remote workers require access to the VMs on VNet1. You need to provide access for the remote workers. What should you do? A. Configure a Site-to-Site (S2S) VPN. B. Configure a VNet-to-VNet VPN. C. Configure a Point-to-Site (P2S) VPN. D. Configure DirectAccess on a Windows Server 2012 server VM. E. Configure a Multi-Site VPN
C. Configure a Point-to-Site (P2S) VPN. A Point-to-Site (P2S) VPN is the most appropriate solution because it allows individual remote users to connect securely to the Azure virtual network (VNet1) from their personal devices (laptops, tablets, etc.) over the internet. This directly addresses the scenario of remote workers needing access to VMs within the VNet. **Why other options are incorrect:** * **A. Site-to-Site (S2S) VPN:** This is suitable for connecting an entire on-premises network to an Azure virtual network. It's not ideal for individual remote workers connecting from various locations. * **B. VNet-to-VNet VPN:** This connects two Azure virtual networks. The problem describes remote users, not another VNet. * **D. DirectAccess on a Windows Server 2012 server VM:** DirectAccess is an older technology and not the recommended approach for connecting remote users to Azure VMs. It requires specific server configuration and management, which is less convenient than P2S VPN. * **E. Multi-Site VPN:** This is for connecting multiple on-premises networks or branch offices, not individual remote users. **Note:** While the suggested answer is overwhelmingly C, some discussion mentions that a Site-to-Site VPN *could* be suitable if all remote workers were in a single location. However, the question explicitly states that the workers are "remote," implying they are dispersed geographically, thus making P2S the more fitting and generally accepted answer.
417
[View Question](https://www.examtopics.com/discussions/databricks/view/57646-exam-az-104-topic-1-question-28-discussion/) Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as a listener for the availability group. Solution: You create an HTTP health probe on port 1433. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal because port 1433 uses the TDS (Tabular Data Stream) protocol for SQL Server communication, not HTTP. An HTTP health probe on this port will fail. To correctly configure the Azure internal load balancer, a TCP health probe should be created on port 1433. **Why other options are incorrect:** * **A. Yes:** This is incorrect because using an HTTP probe on port 1433, which uses the TDS protocol, will not work. The load balancer needs a TCP probe to correctly check the health of the SQL Server instances. There is a consensus among the users in the discussion that option B is the correct answer.
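For illustration, a hedged sketch of the TCP probe the explanation calls for (the probe name and timing values are assumptions):

```powershell
# Minimal sketch: a TCP (not HTTP) health probe on the SQL Server port,
# to be attached to the internal load balancer's configuration.
$probe = New-AzLoadBalancerProbeConfig -Name 'sqlProbe' -Protocol Tcp `
    -Port 1433 -IntervalInSeconds 5 -ProbeCount 2
```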
418
[View Question](https://www.examtopics.com/discussions/databricks/view/57782-exam-az-104-topic-1-question-6-discussion/) Your company uses Multi-Factor Authentication (MFA) for users outside the office, configured with the "Per Authentication" usage model. After acquiring a smaller business and adding its employees to Azure Active Directory (Azure AD), you need these new employees to also use MFA. To achieve this, the "Per Enabled User" setting must be used. Solution: You reconfigure the existing usage model via the Azure portal. Does the solution meet the goal? A. Yes B. No
B. No. The solution does not meet the goal because, according to the discussion and the linked Microsoft documentation, the usage model (Per Enabled User or Per Authentication) cannot be changed after an MFA provider is created, so reconfiguring the existing usage model in the Azure portal will not work. Achieving the desired outcome would require creating a new MFA provider with the "Per Enabled User" setting; however, new MFA providers can no longer be created as of September 1, 2018. The users in the discussion agree that the answer is B and that the usage model cannot be changed after an MFA provider is created.
419
[View Question](https://www.examtopics.com/discussions/databricks/view/59109-exam-az-104-topic-1-question-7-discussion/) Your company's Azure solution uses Multi-Factor Authentication (MFA) for users when they are not in the office. The Per Authentication option is configured as the usage model. After acquiring a smaller business and adding the new staff to Azure Active Directory (Azure AD), you're told these employees should also use MFA. To achieve this, the Per Enabled User setting must be set for the usage model. Solution: You reconfigure the existing usage model via the Azure CLI. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal because you cannot change the usage model (Per Authentication or Per Enabled User) of an existing MFA provider after it's created. To enable MFA for the new employees, you must create a *new* MFA provider with the desired "Per Enabled User" setting. The existing provider with the "Per Authentication" setting cannot be modified to change its usage model. **Why other options are incorrect:** * **A. Yes:** This is incorrect because the described method of reconfiguring the existing usage model via the Azure CLI is not possible. The consensus among the discussion participants is that changing the usage model after provider creation is not supported. **Note:** There is a strong consensus among the users in the discussion that option B is correct. There is one dissenting opinion (R_yan) that suggests a different approach altogether, but this approach does not address the original question of modifying the existing MFA provider's usage model.
420
[View Question](https://www.examtopics.com/discussions/databricks/view/59581-exam-az-104-topic-1-question-29-discussion/) Your company has a Microsoft SQL Server Always On availability group configured on their Azure virtual machines (VMs). You need to configure an Azure internal load balancer as a listener for the availability group. Solution: You set Session persistence to Client IP. Does the solution meet the goal? A. Yes B. No
B. No The solution does not meet the goal. Setting session persistence to Client IP defeats the purpose of a load balancer for an Always On Availability Group. A load balancer distributes traffic across multiple servers to improve availability and performance. Session persistence would force all traffic from a specific client to a single server, negating the failover capabilities of the availability group. Instead of session persistence, a different approach such as configuring an availability group listener and using direct server return is necessary to correctly handle traffic routing to the availability group. The discussion highlights the need to configure an "availability group listener" and points out the correct approach would use a floating IP (direct server return) with a health probe on different ports than the main SQL Server ports. **Why other options are incorrect:** A. Yes - This is incorrect because setting session persistence is not the correct approach for load balancing an Always On Availability Group. As explained above, it prevents the load balancer from distributing the workload and defeats the purpose of high availability. The consensus in the discussion is that option B is correct.
421
[View Question](https://www.examtopics.com/discussions/databricks/view/62784-exam-az-104-topic-4-question-72-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0032600001.png) All virtual machines run Windows Server 2016. On VM1, you back up a folder named Folder1 as shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0032600002.jpg) You plan to restore the backup to a different virtual machine. You need to restore the backup to VM2. What should you do first? A. From VM1, install the Windows Server Backup feature. B. From VM2, install the Microsoft Azure Recovery Services Agent. C. From VM1, install the Microsoft Azure Recovery Services Agent. D. From VM2, install the Windows Server Backup feature.
B. From VM2, install the Microsoft Azure Recovery Services Agent. The question describes backing up a folder from VM1 and restoring it to VM2. To restore the backup to VM2, the target machine (VM2) needs the Microsoft Azure Recovery Services Agent (MARS) installed. This agent is responsible for communicating with Azure and facilitating the restore process. The backup itself was likely created using Windows Server Backup, but the restore operation relies on the MARS agent on the destination VM. Why other options are incorrect: * **A. From VM1, install the Windows Server Backup feature:** This is irrelevant because the backup has already been created on VM1. The restore process takes place on VM2. * **C. From VM1, install the Microsoft Azure Recovery Services Agent:** While the agent might be on VM1 (depending on *how* the backup was initially performed), installing it there is unnecessary for the restore operation on VM2. * **D. From VM2, install the Windows Server Backup feature:** While Windows Server Backup is used for creating backups, it's not required on the *destination* machine for restoring the backup. The restore process uses MARS. Note: The provided discussion only supports option B. There is no conflicting opinion presented.
422
[View Question](https://www.examtopics.com/discussions/databricks/view/62787-exam-az-104-topic-3-question-42-discussion/) You have an Azure Storage account named storage1 that contains a blob container. The blob container has a default access tier of Hot. Storage1 contains a container named container1. You create lifecycle management rules in storage1 as shown in the following table.

| Rule | Definition |
|---|---|
| Rule1 | Archive blobs after 2 days of inactivity. Delete blobs after 9 days of inactivity. |
| Rule2 | Move blobs to Cool tier after 3 days of inactivity. Move blobs to Archive tier after 9 days of inactivity. |

You perform the actions shown in the following table.

| File Name | Last Modified |
|---|---|
| Dep1File1.docx | 8 days ago |
| File2.docx | 5 days ago |
| File3.docx | 8 days ago |

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Can you read Dep1File1.docx on the 10th day? Can you read File2.docx on the 10th day? Can you read File3.docx on the 10th day?
* **Dep1File1.docx:** No. Rule 1 archives the file after 2 days of inactivity and deletes it after 9 days. Since it was last modified 8 days ago, it would have been archived and then deleted before the 10th day. * **File2.docx:** Yes. Rule 2 moves the file to the Cool tier after 3 days of inactivity and to the Archive tier after 9 days. Since it was last modified 5 days ago, it will be in the Cool tier on the 10th day, which is readable. * **File3.docx:** Yes. Same logic as File2.docx. It will be in the Cool tier on the 10th day, and therefore readable. **Explanation:** The question tests understanding of Azure Blob Storage lifecycle management rules and the read accessibility of blobs in different tiers. The key is to understand that archive-tier blobs are offline and require rehydration to be read, unlike Cool-tier blobs. The discussion highlights a disagreement with the original "suggested answer," emphasizing that archive-tier blobs are unreadable without rehydration. The answer above reflects this crucial detail and corrects the possible misunderstanding.
423
[View Question](https://www.examtopics.com/discussions/databricks/view/62800-exam-az-104-topic-3-question-41-discussion/) You have an Azure Storage account named storage1 that contains a blob container named container1. You need to prevent new content added to container1 from being modified for one year. What should you configure? A. the access tier B. an access policy C. the Access control (IAM) settings D. the access level
B. an access policy Explanation: The correct answer is B because configuring a time-based retention policy on the blob container achieves the desired outcome. A time-based retention policy allows creation and reading of blobs but prevents modification or deletion for a specified period (in this case, one year). After the retention period expires, the blobs can be deleted but not overwritten. This directly addresses the requirement of preventing modification of newly added content for a year. Why other options are incorrect: * **A. the access tier:** Access tiers affect storage cost and performance but don't directly control the immutability or modification of blobs. * **C. the Access control (IAM) settings:** IAM controls access permissions (who can read, write, delete), not the immutability of data for a set duration. * **D. the access level:** Similar to IAM, this refers to permissions and not the time-based immutability needed. Note: The discussion shows a strong consensus on answer B.
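As a hedged sketch (the resource group name is an assumption), a one-year time-based retention policy can be applied to the container with Az PowerShell:

```powershell
# Minimal sketch: a 365-day time-based retention (immutability) policy on
# container1. The policy is created unlocked; locking it afterwards makes
# the retention interval binding.
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName 'RG1' `
    -StorageAccountName 'storage1' -ContainerName 'container1' `
    -ImmutabilityPeriod 365
```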
424
[View Question](https://www.examtopics.com/discussions/databricks/view/62817-exam-az-104-topic-1-question-5-discussion/) You are planning to deploy an Ubuntu Server virtual machine to your company's Azure subscription. You are required to implement a custom deployment that includes adding a particular trusted root certification authority (CA). Which of the following should you use to create the virtual machine? A. The New-AzureRmVm cmdlet. B. The New-AzVM cmdlet. C. The Create-AzVM cmdlet. D. The az vm create command.
D. The az vm create command. The discussion strongly favors option D, citing the ability to use cloud-init scripts with the `az vm create` command to achieve custom configurations like adding a trusted root CA. This is supported by provided links to Microsoft documentation regarding custom data for VMs. Options A and B are PowerShell cmdlets, and while `New-AzVM` is the current PowerShell method, the discussion indicates that passing custom data (necessary for adding a CA) is not readily supported through these cmdlets. Option C, `Create-AzVM`, is not a valid Azure command. **Why other options are incorrect:** * **A. The New-AzureRmVm cmdlet:** This is an outdated PowerShell cmdlet. * **B. The New-AzVM cmdlet:** While the current PowerShell cmdlet, the discussion suggests it lacks the direct support for easily incorporating custom data (like a cloud-init script for adding a CA) that `az vm create` offers. * **C. The Create-AzVM cmdlet:** This cmdlet does not exist in the Azure CLI or PowerShell. **Note:** There is a dissenting opinion in the discussion suggesting that `New-AzVM` (option B) is the correct answer, however, this view is less supported than the consensus favoring option D.
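A minimal sketch (runnable from a PowerShell session with the Azure CLI installed; the resource names, image alias, and cloud-init file name are assumptions):

```powershell
# The cloud-init file referenced by --custom-data can install the trusted
# root CA during first boot.
az vm create --resource-group RG1 --name ubuntu01 `
    --image Ubuntu2204 --generate-ssh-keys `
    --custom-data cloud-init.yaml
```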
425
[View Question](https://www.examtopics.com/discussions/databricks/view/62873-exam-az-104-topic-11-question-1-discussion/) HOTSPOT - You need to create container1 and share1. Which storage accounts should you use for each resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0022600001.jpg) *(Image not provided, but the text indicates a Hotspot question requiring selection of storage accounts for "container1" and "share1")*
Based on the provided text, `container1` (a blob container) should be created in either `storage2` (General-purpose v2) or `storage3` (likely a legacy Standard Blob Storage account). `share1` (a file share) should be created in `storage2` (General-purpose v2). The explanation hinges on the fact that only General-purpose v2 storage accounts support tiering for both blob storage and file shares. Legacy Standard Blob Storage also supports tiering for blobs, but the question implies a need for tier functionality. Premium storage accounts do not support tiering. The discussion highlights the importance of differentiating between various storage account types and their respective capabilities. **Why other options are incorrect:** The provided text does not list all possible options, only discussing the relevance of General-purpose v2 and legacy Standard Blob Storage accounts regarding tier support. General-purpose v1 and Premium storage accounts are explicitly ruled out because they lack the required tiering functionality. **Note:** The discussion mentions some disagreement on the exact interpretation of the question regarding the type of Blob storage (Standard vs. Premium). The answer provided above leans towards the interpretation that the question implies a need for tiering functionality.
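For context, a hedged sketch of creating the kind of general-purpose v2 account the answer describes (all names and the SKU are assumptions):

```powershell
# Minimal sketch: a GPv2 account, the account kind that supports tiering
# for both blob containers and file shares.
New-AzStorageAccount -ResourceGroupName 'RG1' -Name 'storagegpv2demo' `
    -Location 'eastus' -SkuName 'Standard_LRS' -Kind 'StorageV2'
```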
426
[View Question](https://www.examtopics.com/discussions/databricks/view/62874-exam-az-104-topic-1-question-19-discussion/) Your company has an Azure subscription. You need to deploy a number of Azure virtual machines (VMs) using Azure Resource Manager (ARM) templates. You have been informed that the VMs will be included in a single availability set. You are required to make sure that the ARM template you configure allows for as many VMs as possible to remain accessible in the event of fabric failure or maintenance. Which of the following is the value that you should configure for the `platformUpdateDomainCount` property? A. 10 B. 20 C. 30 D. 40
B. 20 Explanation: The correct answer is B because the documentation states that each availability set can be configured with up to three fault domains and twenty update domains. To maximize the number of VMs that remain accessible during updates or failures, you should utilize the maximum number of update domains. A higher `platformUpdateDomainCount` allows for more granular updates and reduces the impact of maintenance or failures. Why other options are incorrect: * **A. 10:** This is less than the maximum allowed and would not maximize availability. * **C. 30:** This exceeds the maximum allowed update domains for an availability set. * **D. 40:** This also exceeds the maximum allowed update domains for an availability set. Note: The discussion shows unanimous agreement on the correct answer, B.
427
[View Question](https://www.examtopics.com/discussions/databricks/view/62875-exam-az-104-topic-1-question-20-discussion/) DRAG DROP - You have downloaded an Azure Resource Manager (ARM) template to deploy numerous virtual machines (VMs). The ARM template is based on a current VM, but must be adapted to reference an administrative password. You need to make sure that the password cannot be stored in plain text. You are preparing to create the necessary components to achieve your goal. Which of the following should you create to achieve your goal? Answer by dragging the correct option from the list to the answer area. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0001800001.png)
To securely store and manage the administrative password for the VMs deployed via the ARM template without storing it in plain text, you need to create two components: 1. **An Azure Key Vault:** This service provides secure storage for cryptographic keys, certificates, and secrets, including passwords. Storing the password in a Key Vault prevents it from being exposed in plain text within the ARM template. 2. **An Access Policy (or RBAC):** This controls access to the Key Vault and its secrets. It defines which users, applications, or services can access the password stored within the Key Vault. While the original answer suggests Access Policies, the discussion points out that Role-Based Access Control (RBAC) is the more modern and recommended approach for managing access. **Why other options are incorrect:** The question specifically asks what components need to be *created*. While the ARM template itself will utilize the Key Vault, it's not something that needs to be created in this scenario; it already exists (implicitly). **Note on Disagreement:** The discussion highlights a shift in best practices. While the suggested answer emphasizes using Access Policies, the discussion correctly notes that Role-Based Access Control (RBAC) is now the preferred method for managing Key Vault access. Both approaches achieve the goal of secure password management, but RBAC is considered the more modern and efficient solution.
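A hedged sketch of creating the two components (all names are assumptions; the password is prompted for rather than embedded in the script):

```powershell
# Minimal sketch: a Key Vault enabled for ARM template deployment, plus a
# secret holding the VM administrator password.
New-AzKeyVault -Name 'kv-vmdeploy01' -ResourceGroupName 'RG1' `
    -Location 'eastus' -EnabledForTemplateDeployment
$pw = Read-Host -Prompt 'Admin password' -AsSecureString
Set-AzKeyVaultSecret -VaultName 'kv-vmdeploy01' -Name 'vmAdminPassword' `
    -SecretValue $pw
```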
428
[View Question](https://www.examtopics.com/discussions/databricks/view/62878-exam-az-104-topic-1-question-22-discussion/) Your company has an Azure Active Directory (Azure AD) tenant that is configured for hybrid coexistence with the on-premises Active Directory domain. You plan to deploy several new virtual machines (VMs) in Azure. The VMs will have the same operating system and custom software requirements. You configure a reference VM in the on-premise virtual environment. You then generalize the VM to create an image. You need to upload the image to Azure to ensure that it is available for selection when you create the new Azure VMs. Which PowerShell cmdlets should you use? A. Add-AzVM B. Add-AzVhd C. Add-AzImage D. Add-AzImageDataDisk
The correct answer is **B. Add-AzVhd**. `Add-AzVhd` uploads a virtual hard disk (VHD) from an on-premises machine to Azure. This is the appropriate cmdlet for uploading a generalized VM image to Azure. **Why other options are incorrect:** * **A. Add-AzVM:** This cmdlet creates new VMs, not uploads images. * **C. Add-AzImage:** This cmdlet does not exist. The correct cmdlet is `New-AzImage`, which *creates* a new image in Azure, not uploads an existing one from on-premises. The discussion shows disagreement on the existence of this cmdlet, with some users stating that `New-AzImage` is the correct cmdlet. * **D. Add-AzImageDataDisk:** This cmdlet adds a data disk to an *existing* image object in Azure, not uploads a complete image from another source. **Note:** There is disagreement in the discussion regarding the existence and function of the `Add-AzImage` cmdlet. The consensus leans towards `New-AzImage` as the correct cmdlet for creating a new image from within Azure, but that's distinct from uploading a pre-existing image. The most suitable option based on the provided scenario and documentation links in the discussion remains `Add-AzVhd`.
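A minimal sketch of the upload (the destination URI and local path are assumptions); `New-AzImage` can then wrap the uploaded VHD as a managed image:

```powershell
# Upload the generalized VHD from the on-premises host to Azure storage.
Add-AzVhd -ResourceGroupName 'RG1' `
    -Destination 'https://storage1.blob.core.windows.net/vhds/reference.vhd' `
    -LocalFilePath 'C:\VHDs\reference.vhd'
```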
429
[View Question](https://www.examtopics.com/discussions/databricks/view/62880-exam-az-104-topic-1-question-23-discussion/) DRAG DROP - Your company has an Azure subscription that includes a number of Azure virtual machines (VMs), which are all part of the same virtual network. Your company also has an on-premises Hyper-V server that hosts a VM, named VM1, which must be replicated to Azure. Which of the following objects that must be created to achieve this goal? Answer by dragging the correct option from the list to the answer area. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0002200001.png) *(Image shows a drag-and-drop interface with options: Azure Recovery Services Vault, Hyper-V site, Storage Account, Replication Policy)*
To replicate the on-premises Hyper-V VM (VM1) to Azure, you need to create the following: 1. **Hyper-V site:** This registers your on-premises Hyper-V server with Azure Site Recovery, enabling communication and management of the replication process. 2. **Azure Recovery Services Vault:** This acts as a central repository for managing backups and disaster recovery operations, including storing the replication data and recovery points for VM1. 3. **Replication Policy:** This defines the settings for replication, such as recovery point objective (RPO), recovery time objective (RTO), and replication frequency. This determines how often data is replicated and the acceptable data loss in case of a failure. **Why other options are incorrect:** * **Storage Account:** While a storage account is used *within* Azure Site Recovery to store replicated data, it's not something you *directly* create for the initial setup of replication from a Hyper-V server. The Azure Recovery Services Vault manages the storage indirectly. The storage account is created as part of the overall process managed by the vault and replication policy. There is a general consensus among the discussion participants that these three are the necessary components. However, the lack of specific technical details in the question or further explanations in the discussion prevents us from definitively eliminating the possibility of other methods or configurations.
430
[View Question](https://www.examtopics.com/discussions/databricks/view/62888-exam-az-104-topic-2-question-16-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. Subscription1 has a user named User1. User1 has the following roles: ✑ Reader ✑ Security Admin ✑ Security Reader You need to ensure that User1 can assign the Reader role for VNet1 to other users. What should you do? A. Remove User1 from the Security Reader and Reader roles for Subscription1. B. Assign User1 the User Access Administrator role for VNet1. C. Assign User1 the Network Contributor role for VNet1. D. Assign User1 the Network Contributor role for RG1.
The correct answer is B. Assign User1 the User Access Administrator role for VNet1. The User Access Administrator role allows the user to manage user access to Azure resources, including assigning roles to other users within the scope of that specific resource (VNet1 in this case). Options A, C, and D do not grant the necessary permissions to assign roles to other users. Option A removes existing roles without granting any new permissions relevant to the task. Option C and D grant network management permissions (Network Contributor) but not the ability to assign roles. The Security Admin and Security Reader roles, while providing security-related access, do not grant the ability to assign roles to other users for specific resources. While the discussion shows some minor disagreement on the exact functionality of the Network Contributor role, the consensus points to option B as the correct solution to allow User1 to assign the Reader role to other users for VNet1.
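As a hedged sketch (the subscription ID placeholder and names are assumptions), the assignment scoped to VNet1 only might look like this:

```powershell
# Minimal sketch: grant User Access Administrator at the VNet1 scope, so
# User1 can assign roles (such as Reader) on that resource only.
New-AzRoleAssignment -SignInName 'user1@contoso.com' `
    -RoleDefinitionName 'User Access Administrator' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNet1'
```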
431
[View Question](https://www.examtopics.com/discussions/databricks/view/62896-exam-az-104-topic-2-question-44-discussion/) HOTSPOT - You have an Azure subscription that contains a storage account named storage1. The subscription is linked to an Azure Active Directory (Azure AD) tenant named contoso.com that syncs to an on-premises Active Directory domain. The domain contains the security principals shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0009600001.png) *(Image shows a table of on-premises security principals including users and computers)* In Azure AD, you create a user named User2. The storage1 account contains a file share named share1 and has the following configurations. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0009600002.png) *(Image shows storage account configurations including Azure AD Domain Services enabled)* For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0009700001.png) *(Image shows a table with three statements: 1. User1 can access share1. 2. Computer1 can access share1. 3. User2 can access share1.)*
The correct answer is likely Y-N-N. However, there is significant disagreement in the discussion regarding the accessibility of User2 to share1. * **User1 can access share1 (Yes):** User1 is a synced user from the on-premises Active Directory and has permissions assigned through Azure AD Domain Services. Since share1 has Azure AD Domain Services enabled, User1, being a member of the domain, can access the share. * **Computer1 can access share1 (No):** While Computer1 is a member of the on-premises domain, computer objects cannot be directly assigned share-level permissions in Azure Files using Azure AD Domain Services. They can access the share if they use the default share-level permissions. * **User2 can access share1 (No):** User2 is a cloud-only user created directly in Azure AD and not synced from the on-premises directory. Cloud-only users cannot be directly assigned share-level permissions in Azure Files using Azure AD Domain Services. **Why other options are incorrect:** The discussion shows significant disagreement, particularly regarding User2's access. Some argue that a Y-N-Y answer is possible if User2 is part of a group with permissions. The ambiguity stems from the lack of explicit information about User2's synchronization status and whether they are part of any groups with access to share1. The most conservative answer, based solely on the provided information and general Azure AD Domain Services behavior, is Y-N-N.
432
[View Question](https://www.examtopics.com/discussions/databricks/view/62934-exam-az-104-topic-4-question-63-discussion/) HOTSPOT - You have an Azure subscription. You plan to use Azure Resource Manager templates to deploy 50 Azure virtual machines that will be part of the same availability set. You need to ensure that as many virtual machines as possible are available if the fabric fails or during servicing. How should you configure the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0031400001.png)
The correct configuration is to set `platformFaultDomainCount` to 3 and `platformUpdateDomainCount` to 20. * **`platformFaultDomainCount = 3`:** This distributes the VMs across three fault domains. If one fault domain fails, at least 33 VMs will remain available (16 or 17 per domain). Spreading across more fault domains maximizes availability in case of a hardware failure affecting an entire domain. The East US region, according to user comments, supports 3 fault domains. * **`platformUpdateDomainCount = 20`:** While not explicitly stated, the high number of update domains (20) is chosen to minimize the impact of planned maintenance. By spreading updates across many domains, only a small subset of VMs will be unavailable during updates at any one time. Using a high number assumes the region supports it. The exact optimal number may vary depending on region-specific limits and will likely be less than a theoretical maximum. **Why other options are incorrect:** The question doesn't provide alternative options explicitly, but implicitly incorrect options would be using fewer fault domains or update domains. Using fewer fault domains increases the risk of a significant number of VMs becoming unavailable due to a single fault domain failure. Using fewer update domains extends the downtime for planned maintenance. **Note:** The discussion shows a consensus among users that 3 fault domains is optimal given the constraint to maximize availability. However, the optimal value for `platformUpdateDomainCount` is less clear, although a higher number is preferred for minimizing downtime from updates. The best value would depend on the specific region and availability set limits.
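For reference, a hedged sketch of the equivalent settings via Az PowerShell (in an ARM template these correspond to the `platformFaultDomainCount` and `platformUpdateDomainCount` properties; the names and location are assumptions):

```powershell
# Minimal sketch: an availability set with 3 fault domains and 20 update
# domains, the values discussed above.
New-AzAvailabilitySet -ResourceGroupName 'RG1' -Name 'avset1' `
    -Location 'eastus' -Sku 'Aligned' `
    -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 20
```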
433
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62936-exam-az-104-topic-4-question-64-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure Log Analytics workspace and configure the Agent configuration settings. You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the Log Analytics workspace as the source. Does this meet the goal? A. Yes B. No **
** A. Yes This solution correctly addresses the problem. The steps outlined leverage Azure's monitoring capabilities to achieve the desired outcome: 1. **Log Analytics Workspace:** This acts as a central repository for collecting logs. 2. **Microsoft Monitoring Agent (MMA):** Installed on VM1, the MMA collects logs from the System event log and sends them to the Log Analytics workspace. 3. **Azure Monitor Alert:** Configured to monitor the Log Analytics workspace, this alert triggers when the specified condition (more than two error events within an hour) is met. The combination of these components effectively monitors the VM's System event log and generates an alert based on the defined criteria. **Why other options are incorrect:** B. No - This is incorrect because the described solution accurately uses Azure's monitoring tools to achieve the stated goal. **Note:** The discussion mentions some confusion regarding the installation of the Microsoft Monitoring Agent (whether it's a VM extension or a separate agent installation). The consensus, however, points to the correct approach involving installing the MMA on VM1 as a separate agent. The solution provided is considered correct in the context of this discussion.
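As a sketch of the alert logic, the rule could evaluate a log query like the following (the `Event` table with its `EventLog` and `EventLevelName` columns is the standard Log Analytics schema for Windows event logs; the exact rule settings are assumptions):

```kusto
// Error events written to the System log on VM1 within the last hour
Event
| where Computer == "VM1"
| where EventLog == "System" and EventLevelName == "Error"
| where TimeGenerated > ago(1h)
| summarize ErrorCount = count()
// The alert rule would fire on the condition ErrorCount > 2
```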
434
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62956-exam-az-104-topic-4-question-29-discussion/) You have an app named App1 that runs on an Azure web app named webapp1. The developers at your company upload an update of App1 to a Git repository named Git1. Webapp1 has the deployment slots shown in the following table. | Slot Name | Status | | -------------- | ------- | | webapp1-prod | Running | | webapp1-test | Stopped | You need to ensure that the App1 update is tested before the update is made available to users. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Swap the slots B. Deploy the App1 update to webapp1-prod, and then test the update C. Stop webapp1-prod D. Deploy the App1 update to webapp1-test, and then test the update E. Stop webapp1-test **
** A and D

The correct actions are to deploy the update to the test slot (D) and then swap the slots (A).

* **Deploying to webapp1-test (D):** This lets you test the updated App1 in a staging environment without affecting users on the production slot (webapp1-prod), which is the standard practice for minimizing risk.
* **Swapping the slots (A):** After the update passes testing in webapp1-test, swapping the slots promotes the tested update to webapp1-prod, making it live for users. The swap is atomic, minimizing downtime and risk.

**Why other options are incorrect:**

* **B:** Deploying directly to production (webapp1-prod) without testing is risky and could cause a service outage or unexpected behavior.
* **C:** Stopping webapp1-prod is unnecessary and disrupts user access; the swap operation handles the transition seamlessly.
* **E:** Stopping webapp1-test is pointless because the slot is already stopped; deploying to it starts it as part of the deployment process.

**Note:** The discussion shows overwhelming consensus that A and D are the correct answers. There is no significant disagreement on this point.
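A rough sketch of the two actions in Azure CLI, assuming the test slot is named `test` under the app `webapp1` (the portal displays it as webapp1-test; the resource group and repository URL are placeholders):

```bash
# D: Deploy the App1 update from the Git repository to the test slot
az webapp deployment source config --name webapp1 --resource-group rg1 \
    --slot test --repo-url https://example.com/Git1 --branch main

# A: After testing succeeds, swap the test slot into production
az webapp deployment slot swap --name webapp1 --resource-group rg1 \
    --slot test --target-slot production
```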
435
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62977-exam-az-104-topic-2-question-22-discussion/) You have an Azure subscription named Subscription1 that contains an Azure Log Analytics workspace named Workspace1. You need to view the error events from a table named Event. Which query should you run in Workspace1? A. Get-Event Event | where {$_.EventType == "error"} B. search in (Event) "error" C. select * from Event where EventType == "error" D. search in (Event) * | where EventType -eq "error" **
** B. `search in (Event) "error"`

This is the correct Kusto Query Language (KQL) query for Log Analytics. It searches for the string "error" within the `Event` table across all columns. The discussion notes successful testing of this query within a Log Analytics workspace.

**Why other options are incorrect:**

* **A. `Get-Event Event | where {$_.EventType == "error"}`:** This is PowerShell syntax, not KQL, which is the query language used in Azure Log Analytics.
* **C. `select * from Event where EventType == "error"`:** This is SQL syntax, which is also not valid in Log Analytics. It would work in some SQL database environments, but not here.
* **D. `search in (Event) * | where EventType -eq "error"`:** This mixes KQL and PowerShell syntax. `search in (Event) *` is valid KQL, but `-eq` is a PowerShell operator; the KQL equivalent is `==`.

**Note:** The discussion shows some disagreement about option B: a few users reported syntax errors when testing it in a live environment, while others ran it successfully. The answer reflects the majority view, and the conflicting reports underline the value of verifying KQL queries in a live workspace.
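To illustrate the difference, both of the following are valid KQL; the second is the more targeted form when you only want rows whose severity column matches (the `Event` table's actual severity column is `EventLevelName`, an assumption relative to the `EventType` column named in the options):

```kusto
// Full-text search for "error" across all columns of the Event table
search in (Event) "error"

// Column-scoped filter; note the KQL == operator, not PowerShell's -eq
Event
| where EventLevelName == "Error"
```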
436
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62980-exam-az-104-topic-2-question-23-discussion/) You have an Azure subscription that contains a virtual network named VNET1 in the East US 2 region. A network interface named VM1-NI is connected to VNET1. You successfully deploy the following Azure Resource Manager template. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0006500001.jpg) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0006600001.png) **
** The correct answer is Yes, Yes, No. * **Box 1: Yes:** VM1 is deployed successfully and is connected to VNET1 as stated in the problem description. * **Box 2: Yes:** The ARM template shows the creation of VM2 and its network interface, VM2-NI. While the question doesn't explicitly state VM2-NI is connected to VNET1, the successful deployment of the template and the fact that it's in the same region (East US 2) strongly implies a connection to VNET1 (or a subnet within VNET1). The discussion highlights debate on this point; some argue that the absence of explicit connection details in the template leaves room for ambiguity. However, based on the “successful deployment” premise and common Azure practice, it’s the most logical conclusion. * **Box 3: No:** The template shows that VM1 is deployed in Availability Zone 1 and VM2 in Availability Zone 2. Therefore, they are not in the same Availability Zone. **Why other options are incorrect:** The discussion shows several differing opinions about whether VM2-NI is connected to VNET1. Some interpret the "successful deployment" to imply the connection, while others emphasize the lack of explicit mention of the connection in the template. The consensus among the highly upvoted responses leans towards a successful deployment implying connectivity. However, this remains a point of contention. The key to Box 3 remains the availability zone details within the template.
437
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62981-exam-az-104-topic-2-question-31-discussion/) You have an Azure Active Directory (Azure AD) tenant named adatum.com. Adatum.com contains the groups in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0007800001.jpg) *(Group 1: Rule: Member of "Group2"; Group 2: Rule: Member of "Group1"; Group 3: No rule shown)* You create two user accounts that are configured as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0007800002.png) *(User1: Member of Group2; User2: Member of Group1)* Of which groups are User1 and User2 members? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
* **User1:** Group 1 only. Because User1 is a member of Group 2, and Group 1's membership rule is "Member of Group 2", User1 automatically becomes a member of Group 1. There's no rule or indication User1 is a member of Group 3. * **User2:** Group 1 and Group 2. User2 is a member of Group 1. Because User2 is a member of Group 1, and Group 2's membership rule is "Member of Group 1", User2 automatically becomes a member of Group 2 as well. The discussion highlights a disagreement regarding the role of Office 365 licensing and whether it impacts group membership. The consensus among the majority of responders and confirmed exam experience suggests that licensing is irrelevant to this specific question; only the defined group membership rules matter. Therefore, the answer focuses solely on the rules provided within the question.
438
**** [View Question](https://www.examtopics.com/discussions/databricks/view/62990-exam-az-104-topic-2-question-45-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network VNet1. You add the users in the following table. | User | Role | |------|----------------| | User1 | Owner | | User2 | Security Admin | | User3 | Network Contributor | Which user can perform each configuration? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. **1) Add a subnet to VNet1** **2) Assign a user the Reader role to VNet1** **
** 1) **Add a subnet to VNet1:** User1 and User3 only. 2) **Assign a user the Reader role to VNet1:** User1 only. **Explanation:** * **Add a subnet to VNet1:** The Owner role (User1) has full permissions and can manage all aspects of the virtual network, including adding subnets. The Network Contributor role (User3) specifically allows management of networks, which includes creating subnets. * **Assign a user the Reader role to VNet1:** Only the Owner (User1) can assign roles to other users within the subscription. Other roles, such as Network Contributor, do not have this permission. **Why other options are incorrect:** User2 (Security Admin) does not have the permissions to manage virtual networks or assign roles. Their permissions are limited to security-related tasks within Azure Security Center. The discussion highlights this limitation, with several users correctly pointing out that Security Admins cannot add subnets or assign roles. There is a consensus among the users regarding this point.
439
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63005-exam-az-104-topic-5-question-80-discussion/) You have an Azure subscription that contains the virtual networks shown in the following table. (Image 1 shows a table of Virtual Networks: VNET1, VNET2, and their subnets and IP address ranges). You have the virtual machines shown in the following table. (Image 2 shows a table of Virtual Machines: VM1, VM2, their OS, and resource group). You have the virtual network interfaces shown in the following table. (Image 3 shows a table of Virtual Network Interfaces: NIC1, NIC2, their IP configurations, and subnet). Server1 is a DNS server that contains the resources shown in the following table. (Image 4 shows a table indicating Server1's IP address and its role as a DNS server). You have an Azure private DNS zone named contoso.com that has a virtual network link to VNET2 and the records shown in the following table. (Image 5 shows a table of DNS records for contoso.com, including a record for www). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (Image 6 shows a hot area with three statements): 1. VM1 can resolve the name www.contoso.com. 2. VM2 can resolve the name www.contoso.com. 3. VM1 can resolve the name server1.contoso.com. **
** No / Yes / No

* **Statement 1 (VM1 can resolve www.contoso.com): No.** VM1 is connected to VNET1, which has no virtual network link to the contoso.com private DNS zone, so it cannot resolve the name.
* **Statement 2 (VM2 can resolve www.contoso.com): Yes.** VM2 is connected to VNET2, which *does* have a virtual network link to the contoso.com private DNS zone. The record for www.contoso.com exists in that zone, so VM2 can resolve it.
* **Statement 3 (VM1 can resolve server1.contoso.com): No.** Even if server1.contoso.com were a valid record in the zone (the tables do not show one), VM1 still lacks the virtual network link needed to resolve names from the private DNS zone.

**Why other options are incorrect:** The originally suggested answer is disputed; the user "speed2fast" argues for No/Yes/No, which matches the analysis above. Since the original suggested answer is not reproduced in the thread, it cannot be rebutted point by point, but the reasoning above follows directly from how Azure private DNS zone virtual network links work.
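For context, a virtual network link is what would give VNET1 (and therefore VM1) resolution into the zone; a sketch in Azure CLI with illustrative resource names:

```bash
# Link VNET1 to the private zone so VMs in VNET1 can resolve its records
az network private-dns link vnet create \
    --resource-group rg1 \
    --zone-name contoso.com \
    --name vnet1-link \
    --virtual-network VNET1 \
    --registration-enabled false
```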
440
[View Question](https://www.examtopics.com/discussions/databricks/view/63010-exam-az-104-topic-5-question-82-discussion/) You have the Azure virtual machines shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0047700001.png) VNET1 is linked to a private DNS zone named contoso.com that contains the records shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0047700002.png) You need to ping VM2 from VM1. Which DNS names can you use to ping VM2? A. comp2.contoso.com and comp4.contoso.com only B. comp1.contoso.com, comp2.contoso.com, comp3.contoso.com, and comp4.contoso.com C. comp2.contoso.com only D. comp1.contoso.com and comp2.contoso.com only E. comp1.contoso.com, comp2.contoso.com, and comp4.contoso.com only
C. comp2.contoso.com only Explanation: The provided images show that only `comp2.contoso.com` has an A record that maps the hostname to the IP address of VM2 (10.0.0.10). To ping VM2 from VM1 using DNS, you need the correct hostname that resolves to VM2's IP address. The other options include hostnames that either don't exist or point to other VMs. Therefore, only `comp2.contoso.com` can successfully resolve to VM2's IP and enable a ping. Why other options are incorrect: * **A, B, D, E:** These options include hostnames that do not have A records mapping them to the IP address of VM2, based on the provided DNS zone information. Therefore, they would not resolve correctly and the ping would fail. Note: The discussion section unanimously supports option C as the correct answer.
441
[View Question](https://www.examtopics.com/discussions/databricks/view/63014-exam-az-104-topic-5-question-83-discussion/) You have a network security group (NSG) named NSG1 that has the rules defined in the exhibit. (Click the Exhibit tab.) NSG1 is associated to a subnet named Subnet1. Subnet1 contains the virtual machines shown in the following table. You need to add a rule to NSG1 to ensure that VM1 can ping VM2. The solution must use the principle of least privilege. How should you configure the rule? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
The provided text does not include the NSG rules shown in the images, so the exact rule configuration cannot be stated. The core concept, however, is the principle of least privilege: grant only the permissions needed to achieve the objective. Here, VM1 needs to ping VM2, so the rule added to NSG1 should allow only ICMP traffic from VM1's private IP address to VM2's private IP address, with a priority high enough to take precedence over any existing rule that denies that traffic.

Why other options are incorrect: Without the existing NSG rules and the options in the "Hot Area" image, it is impossible to say definitively why each alternative fails. In general, though, any option that opens ports more broadly than necessary, or that allows ICMP from any source, violates the principle of least privilege and would be incorrect.

Note: The discussion disputes the "Suggested Answer," indicating it is likely wrong. Because the images are unavailable, only the correct approach, not the exact selections, can be described here.
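A sketch of the least-privilege rule described above (the IP addresses, priority, and rule name are placeholders, since the exhibit is unavailable):

```bash
# Allow only ICMP from VM1's private IP to VM2's private IP
az network nsg rule create \
    --resource-group rg1 \
    --nsg-name NSG1 \
    --name AllowPingVM1toVM2 \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Icmp \
    --source-address-prefixes 10.0.0.4 \
    --destination-address-prefixes 10.0.0.5
```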
442
[View Question](https://www.examtopics.com/discussions/databricks/view/63064-exam-az-104-topic-5-question-24-discussion/) You have a public load balancer that balances ports 80 and 443 across three virtual machines named VM1, VM2, and VM3. You need to direct all the Remote Desktop Protocol (RDP) connections to VM3 only. What should you configure? A. an inbound NAT rule B. a new public load balancer for VM3 C. a frontend IP configuration D. a load balancing rule
A. an inbound NAT rule An inbound NAT rule allows you to forward incoming traffic on a specific port to a single virtual machine. In this scenario, you want all RDP (port 3389) traffic to go only to VM3, regardless of the existing load balancer handling ports 80 and 443. An inbound NAT rule achieves this direct forwarding. Why other options are incorrect: * **B. a new public load balancer for VM3:** This is unnecessary and inefficient. You already have a public load balancer; creating another just for RDP traffic is redundant. * **C. a frontend IP configuration:** This configures the public IP address associated with the load balancer, not the routing of traffic to a specific backend VM. * **D. a load balancing rule:** Load balancing rules distribute traffic across multiple VMs. The requirement is to direct *all* RDP traffic to a *single* VM, which is the opposite of load balancing. There is some disagreement in the discussion regarding the suitability of a load balancing rule. However, the consensus and the most widely upvoted answer support the use of an inbound NAT rule for this specific scenario.
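A sketch of such a rule in Azure CLI (the load balancer, frontend, and resource group names are placeholders); the rule is then bound to VM3's NIC:

```bash
# Forward RDP (TCP 3389) arriving at the load balancer's frontend to VM3
az network lb inbound-nat-rule create \
    --resource-group rg1 \
    --lb-name lb1 \
    --name RDPtoVM3 \
    --protocol Tcp \
    --frontend-port 3389 \
    --backend-port 3389 \
    --frontend-ip-name LoadBalancerFrontEnd
```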
443
[View Question](https://www.examtopics.com/discussions/databricks/view/63074-exam-az-104-topic-2-question-43-discussion/) You have a registered DNS domain named contoso.com. You create a public Azure DNS zone named contoso.com. You need to ensure that records created in the contoso.com zone are resolvable from the internet. What should you do? A. Create NS records in contoso.com. B. Modify the SOA record in the DNS domain registrar. C. Create the SOA record in contoso.com. D. Modify the NS records in the DNS domain registrar.
The correct answer is **D. Modify the NS records in the DNS domain registrar.** The Name Servers (NS) records at the domain registrar (e.g., GoDaddy, Namecheap) control where DNS queries for that domain are directed. To make Azure DNS authoritative for `contoso.com`, you must change the NS records at the registrar to point to the Azure DNS nameservers provided by Azure. This directs all internet queries for `contoso.com` to the Azure DNS servers, ensuring records within your Azure DNS zone are resolvable. Why other options are incorrect: * **A. Create NS records in contoso.com:** Creating NS records *within* the Azure DNS zone for `contoso.com` doesn't change where the internet looks for the authoritative DNS information for that domain. The registrar still points to its own servers. * **B. Modify the SOA record in the DNS domain registrar:** The Start of Authority (SOA) record contains information about the zone itself (e.g., primary nameserver, email contact), but it doesn't control *where* the DNS queries are routed. Modifying it at the registrar wouldn't make Azure DNS authoritative. * **C. Create the SOA record in contoso.com:** Similar to A, creating the SOA record within the Azure zone doesn't change where the internet looks for authoritative DNS. The registrar will have its own SOA record. The discussion shows overwhelming agreement on answer D.
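To complete the change, you would look up the Azure DNS name servers for the zone and enter them at the registrar; a sketch with an illustrative resource group:

```bash
# List the name servers Azure assigned to the zone; these are the values
# to configure as the NS records at the domain registrar
az network dns zone show \
    --resource-group rg1 \
    --name contoso.com \
    --query nameServers
```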
444
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63084-exam-az-104-topic-5-question-78-discussion/) You have an app named App1 that is installed on two Azure virtual machines named VM1 and VM2. Connections to App1 are managed by using an Azure Load Balancer. The effective network security configurations for VM2 are shown in the following exhibit. You discover that connections to App1 from 131.107.100.50 over TCP port 443 fail. You verify that the Load Balancer rules are configured correctly. You need to ensure that connections to App1 can be established successfully from 131.107.100.50 over TCP port 443. Solution: You create an inbound security rule that denies all traffic from the 131.107.100.50 source and has a priority of 64999. Does this meet the goal? A. Yes B. No **
** B. No The proposed solution of creating an inbound security rule that denies all traffic from 131.107.100.50 will *not* solve the problem. The issue is not that traffic from 131.107.100.50 is being *allowed* but rather that the Azure Load Balancer's health probes (sent from the AzureLoadBalancer IP address, typically 168.63.129.16) are being blocked by a higher priority rule (Rule 200 in the exhibit), preventing the load balancer from directing traffic to the healthy VMs. Blocking traffic from 131.107.100.50 further restricts connectivity but does not address the root cause of the failure. To resolve the problem, either Rule 200 needs to be modified to allow the health probes, or a new rule with higher priority allowing those probes needs to be added. **WHY OTHER OPTIONS ARE INCORRECT:** A. Yes is incorrect because the proposed solution directly prevents connections from 131.107.100.50, the source IP in question. This further exacerbates the problem instead of fixing it. The core issue is the blockage of the load balancer's health probes, not the client's connection attempt. **NOTE:** The discussion shows agreement that the solution is incorrect. The explanation provided in the discussion clarifies the underlying problem of the Azure Load Balancer health probes being blocked.
445
[View Question](https://www.examtopics.com/discussions/databricks/view/63106-exam-az-104-topic-4-question-4-discussion/) You have an Azure virtual machine named VM1 that runs Windows Server 2016. You need to create an alert in Azure when more than two error events are logged to the System event log on VM1 within an hour. Solution: You create an Azure storage account and configure shared access signatures (SASs). You install the Microsoft Monitoring Agent on VM1. You create an alert in Azure Monitor and specify the storage account as the source. Does that meet the goal? A. Yes B. No
B. No The proposed solution uses an Azure storage account as the data source for the Azure Monitor alert. This is incorrect. To monitor the System event log on VM1 for error events and trigger an alert based on a threshold (more than two events within an hour), you need to use Azure Log Analytics. Log Analytics integrates with Azure Monitor to provide the necessary event log monitoring and alerting capabilities. The solution should involve configuring a Log Analytics workspace, installing the Microsoft Monitoring Agent on VM1 to collect the logs, and then creating the alert in Azure Monitor, specifying the Log Analytics workspace as the data source. The provided solution using a storage account is not appropriate for real-time event log monitoring and alerting. Other Options: Option A is incorrect because the proposed solution does not correctly use Azure services to meet the stated goal. It leverages a storage account which is not designed for real-time event log monitoring and alerting in the way described. The discussion highlights that a Log Analytics workspace is the correct data source to achieve this. There is a consensus among the users that option B is correct.
446
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63152-exam-az-104-topic-5-question-81-discussion/) You have a virtual network named VNet1 as shown in the exhibit. (Click the Exhibit tab.) No devices are connected to VNet1. You plan to peer VNet1 to another virtual network named VNet2. VNet2 has an address space of 10.2.0.0/16. You need to create the peering. What should you do first? **
** A. Modify the address space of VNet1. Virtual network peering requires that the address spaces of the peered networks do not overlap. Since VNet2 has an address space of 10.2.0.0/16, and the image (not included in this text-based format) likely shows VNet1 having an overlapping address space, the first step is to modify VNet1's address space to avoid conflict. Only then can the peering be successfully created. **Why other options are incorrect:** * **B. Add a gateway subnet to VNet1:** Gateway subnets are used for VPN gateways or ExpressRoute gateways, which are not directly required for virtual network peering. * **C. Create a subnet on VNet1 and VNet2:** Subnets already exist in both VNet1 and VNet2; creating additional subnets is not necessary for peering. * **D. Configure a service endpoint on VNet2:** Service endpoints are used to access Azure services from a virtual network; they're unrelated to virtual network peering. **Note:** The provided discussion shows unanimous agreement on the correct answer.
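A sketch of the sequence in Azure CLI (the replacement address space, resource group, and peering names are illustrative):

```bash
# First: change VNet1's address space so it no longer overlaps 10.2.0.0/16
az network vnet update --resource-group rg1 --name VNet1 \
    --address-prefixes 10.1.0.0/16

# Then create the peering in both directions
az network vnet peering create --resource-group rg1 --name VNet1toVNet2 \
    --vnet-name VNet1 --remote-vnet VNet2 --allow-vnet-access
az network vnet peering create --resource-group rg1 --name VNet2toVNet1 \
    --vnet-name VNet2 --remote-vnet VNet1 --allow-vnet-access
```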
447
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63162-exam-az-104-topic-5-question-65-discussion/) You have an Azure subscription that contains the public load balancers shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0045000001.png) You plan to create six virtual machines and to load balance requests to the virtual machines. Each load balancer will load balance three virtual machines. You need to create the virtual machines for the planned solution. How should you create the virtual machines? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0045100001.jpg) Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0045100002.jpg) **
** The virtual machines should be created as follows: * **For Load Balancer 1 (Basic):** The three virtual machines should be created within the same Availability Set or Virtual Machine Scale Set. Basic load balancers are restricted to a single availability set, virtual machine scale set, or a single machine. * **For Load Balancer 2 (Standard):** The three virtual machines can be created in any combination of Availability Sets, Virtual Machine Scale Sets, or individual VMs, as long as they are all connected to the same virtual network. Standard load balancers can span multiple availability sets and scale sets within a single virtual network. **Explanation:** The key differentiator lies in the SKU of the load balancers. Basic load balancers have limitations on where their backend pools can reside, requiring all VMs to be in the same Availability Set or VM Scale Set. Standard load balancers offer greater flexibility, allowing VMs from various Availability Sets and Scale Sets to be included in the backend pool, as long as they share the same virtual network. The provided images show the correct answer choices reflecting these limitations and capabilities. **Why other options are incorrect:** The question implies that there are other possible answer options in the hot area image that are not shown in the provided text. However, based on the information available, any option that doesn't adhere to the Basic/Standard Load Balancer limitations described above would be incorrect. For example, placing VMs for the Basic Load Balancer across multiple Availability Sets would be incorrect. Similarly, placing VMs for the Standard Load Balancer across different virtual networks would also be incorrect. **Note on Disagreement:** The discussion section shows some disagreement regarding the phrasing of the options presented in the exam. Some users point out inconsistencies between the available options and the documented capabilities of Azure load balancers. However, the suggested answer, supported by the provided images and the general understanding of Azure load balancer SKUs, remains consistent.
448
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63164-exam-az-104-topic-4-question-48-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains a virtual machine named VM1. You install and configure a web server and a DNS server on VM1. VM1 has the effective network security rules shown in the following exhibit: [Image of Network Security Rules showing Rule 1: Priority 100, Action Allow, and Rule 2: Priority 200, Action Deny] (Note: The image itself is not provided here, but the description is included). The image shows that Rule 1 allows traffic from ports 50-500 and Rule 2 denies traffic from ports 50-60. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. **Box 1:** Rule2 ____ ports 50-60, which includes port 53, the DNS port. Internet users ____ reach the Web server, since it uses port 80. **Box 2:** If Rule2 is removed internet users ____ reach the DNS server as well. **
** **Box 1:** blocks; can **Box 2:** can **Explanation:** Rule 2, with a higher priority number (200), denies traffic on ports 50-60. Since port 53 (DNS) falls within this range, DNS traffic is blocked. However, the web server uses port 80, which is allowed by Rule 1 (ports 50-500). Therefore, internet users *can* reach the web server. If Rule 2 is removed, the only rule remaining is Rule 1, which allows traffic from ports 50-500. This includes both port 80 (web server) and port 53 (DNS server), allowing internet users to reach both. **Why other options are incorrect:** The question and provided answer clearly state the functionality based on the priority and port ranges of the network security rules. Any other answer would contradict the provided information on how Network Security Groups (NSGs) process rules based on priority. **Note:** The discussion section shows a strong consensus on the correct answer. However, the absence of explicit alternative answer choices in the original question makes it impossible to detail why they are wrong.
449
[View Question](https://www.examtopics.com/discussions/databricks/view/63169-exam-az-104-topic-5-question-76-discussion/) DRAG DROP - You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0046800001.png) You need to load balance HTTPS connections to vm1 and vm2 by using lb1. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0046900001.jpg)
The correct sequence of actions is: 1. **Remove the Public IP addresses:** The provided image shows Basic Public IP addresses associated with VM1 and VM2. Standard Load Balancers, like lb1, are not compatible with Basic Public IPs. Therefore, these need to be removed and replaced with Standard Public IPs. 2. **Create a backend pool and health probes:** A backend pool defines the VMs (vm1 and vm2) that the load balancer will distribute traffic to. Health probes are crucial; they monitor the health of the VMs and ensure that the load balancer only directs traffic to healthy instances. 3. **Create a load balancer rule:** This rule defines how the load balancer will route incoming HTTPS traffic to the backend pool. It specifies the frontend IP configuration (associated with the Standard Load Balancer's public IP), the backend pool, and the port for HTTPS (443). The discussion shows some disagreement on the exact steps, with some users suggesting removing NSG1 as well. However, the primary steps above are the core actions required to load balance HTTPS traffic using a standard load balancer and are consistent with the suggested answer. Removing NSG1 might be a subsequent step depending on the overall network configuration, but it's not directly required for load balancing.
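A sketch of the second and third actions in Azure CLI (pool, probe, and rule names are placeholders; lb1 is the Standard load balancer from the question):

```bash
# Backend pool that will contain vm1 and vm2
az network lb address-pool create --resource-group rg1 --lb-name lb1 \
    --name bepool1

# Health probe so traffic only goes to healthy instances
az network lb probe create --resource-group rg1 --lb-name lb1 \
    --name probe443 --protocol Tcp --port 443

# Rule that load balances HTTPS (TCP 443) to the backend pool
az network lb rule create --resource-group rg1 --lb-name lb1 \
    --name https-rule --protocol Tcp \
    --frontend-port 443 --backend-port 443 \
    --backend-pool-name bepool1 --probe-name probe443
```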
450
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63183-exam-az-104-topic-5-question-77-discussion/) You manage a virtual network named VNet1 that is hosted in the West US Azure region. VNet1 hosts two virtual machines named VM1 and VM2 that run Windows Server. You need to inspect all the network traffic from VM1 to VM2 for a period of three hours. Solution: From Azure Monitor, you create a metric on Network In and Network Out. Does this meet the goal? A. Yes B. No **
** B. No **Explanation:** Azure Monitor provides metrics on network traffic, such as "Network In" and "Network Out," but this only shows aggregate data (throughput, etc.). It does *not* allow for inspection of the actual network packets themselves. To inspect the traffic as required, you need a tool capable of packet capture, such as Azure Network Watcher's Packet Capture. The suggested solution of using Azure Monitor metrics does not provide the required level of detail for inspecting individual packets. **Why other options are incorrect:** * **A. Yes:** This is incorrect because Azure Monitor does not provide packet inspection capabilities. It only shows high-level metrics. The question requires detailed inspection of network traffic, not just aggregate metrics. The discussion shows some disagreement on the correct answer. While the overwhelming consensus points to "No", the existence of some comments suggesting otherwise is noted.
451
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63206-exam-az-104-topic-2-question-46-discussion/) HOTSPOT - You have the Azure resources shown on the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010000001.jpg) You plan to track resource usage and prevent the deletion of resources. To which resources can you apply locks and tags? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010100001.png) **
** Sub1, RG1, and VM1 only. **Explanation:** Locks and tags in Azure can be applied to subscriptions, resource groups, and individual resources. The image shows Sub1 (Subscription), RG1 (Resource Group), and VM1 (Virtual Machine – a resource). Therefore, locks and tags can be applied to these three. The discussion confirms this as the correct answer, with multiple users supporting it and citing the relevant Microsoft documentation. **Why other options are incorrect:** The discussion clarifies that while a tenant root group exists, it's not a subscription, and the question specifically asks about resources within a given subscription. Therefore, options including a tenant root group or other entities beyond the scope of the presented resources are incorrect based on the context of the question. There is some disagreement in the discussion regarding the tenant parent group; however, the consensus supported by the majority of users and references to official Microsoft documentation points to the selection of Sub1, RG1, and VM1.
452
[View Question](https://www.examtopics.com/discussions/databricks/view/63246-exam-az-104-topic-13-question-1-discussion/) You need to ensure that VM1 can communicate with VM4. The solution must minimize the administrative effort. What should you do? A. Create an NSG and associate the NSG to VM1 and VM4. B. Establish peering between VNET1 and VNET3. C. Assign VM4 an IP address of 10.0.1.5/24. D. Create a user-defined route from VNET1 to VNET3.
B. Establish peering between VNET1 and VNET3. The suggested answer is B because establishing peering between the virtual networks (VNET1 and VNET3) allows communication between VMs in different VNETs without requiring complex routing configurations or changes to network security groups (NSGs). This minimizes administrative overhead as it's a relatively simple configuration change. Why other options are incorrect: * **A. Create an NSG and associate the NSG to VM1 and VM4:** While NSGs can control network traffic, this option doesn't address the fundamental issue of VMs residing in different virtual networks. Additional configuration is still needed to allow communication between the VNETs. * **C. Assign VM4 an IP address of 10.0.1.5/24:** This only changes VM4's IP address, which won't enable communication if VM4 is in a different VNET from VM1 without proper routing. The IP address assignment doesn't address the network segmentation. * **D. Create a user-defined route from VNET1 to VNET3:** This requires more in-depth configuration than VNET peering. Peering is a simpler, more direct solution for inter-VNET communication. Note: The discussion highlights that the question is incomplete and lacks contextual information about VNET3 and the 10.0.1.x/24 IP address range. The accuracy of the suggested answer depends on the missing context, which is mentioned in several comments. Several users mention that the question is likely misplaced or connected to a different case study with additional details. Therefore, the correct answer might not be applicable to the presented context alone.
453
[View Question](https://www.examtopics.com/discussions/databricks/view/63248-exam-az-104-topic-4-question-1-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster named AKS1. You need to deploy a YAML file to AKS1. Solution: From Azure CLI, you run `az aks`. Does this meet the goal? A. Yes B. No
B. No

The command `az aks` is used to manage Azure Kubernetes Service (AKS) clusters themselves. It is for creating, updating, or deleting AKS clusters, not for deploying applications *within* a cluster. To deploy a YAML file to an AKS cluster, you need to use the `kubectl` command-line tool, specifically `kubectl apply -f <file>.yaml`.

Why other options are incorrect:

A. "Yes" is incorrect because `az aks` does not deploy YAML files to an AKS cluster. It manages the cluster itself, not the deployments within it.

Note: The discussion shows a consensus that option B is correct.
454
[View Question](https://www.examtopics.com/discussions/databricks/view/63251-exam-az-104-topic-4-question-2-discussion/) You deploy an Azure Kubernetes Service (AKS) cluster named AKS1. You need to deploy a YAML file to AKS1. Solution: From Azure CLI, you run the kubectl client. Does this meet the goal? A. Yes B. No
A. Yes

Explanation: The `kubectl` client is the standard command-line tool for interacting with Kubernetes clusters, including deploying YAML files. Running `kubectl apply -f <file>.yaml` from the Azure CLI session (which provides authenticated access to the AKS cluster) successfully deploys the YAML file to AKS1. Therefore, the solution meets the stated goal.

Why other options are incorrect:

B. No - This is incorrect because using `kubectl` from the Azure CLI is a valid and common method for deploying YAML files to an AKS cluster.

Note: The provided discussion shows unanimous agreement on the answer being "Yes".
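A sketch of the full flow (cluster name from the question; the resource group and manifest file names are illustrative):

```bash
# Fetch credentials so kubectl targets AKS1
az aks get-credentials --resource-group rg1 --name AKS1

# Deploy the manifest to the cluster
kubectl apply -f deployment.yaml
```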
455
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63255-exam-az-104-topic-3-question-23-discussion/) You have an Azure subscription. You create the Azure Storage account shown in the following exhibit. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0018300001.jpg) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0018400001.png) Suggested Answer: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0018400002.png) **
** Box 1: **3** - Locally Redundant Storage (LRS) maintains three copies of your data within a single data center (or sub-region). This ensures high availability and durability by protecting against common failures such as disk, node, or rack failures. Box 2: **Cool** - The "Cool" access tier is optimized for infrequently accessed data stored for at least 30 days. This is a cost-effective option compared to the "Hot" tier if data access is infrequent. **Explanation:** The provided suggested answer and discussion overwhelmingly support the choices of "3" for Box 1 and "Cool" for Box 2. The explanation for Box 1 directly references the replication within LRS, while the explanation for Box 2 accurately describes the purpose and characteristics of the "Cool" storage tier within Azure Blob Storage. **Why other options are incorrect:** The question is a multiple choice within a specific context. Without knowing the specific options originally presented in the drop downs for Box 1 and Box 2, it is impossible to definitively say why other options would be incorrect. However, given the provided answer and the consensus among the discussants, any other option for Box 1 would not reflect the inherent redundancy of LRS, and any other option for Box 2 would not accurately describe the access patterns and cost benefits of the "Cool" storage tier.
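For reference, a storage account matching these two selections could be created like this (account and resource group names are illustrative):

```bash
# Standard LRS keeps 3 synchronous copies in one datacenter;
# Cool is the default access tier for infrequently accessed blobs
az storage account create --name storexample1 --resource-group rg1 \
    --sku Standard_LRS --kind StorageV2 --access-tier Cool
```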
456
[View Question](https://www.examtopics.com/discussions/databricks/view/63295-exam-az-104-topic-4-question-61-discussion/) You plan to back up an Azure virtual machine named VM1. You discover that the Backup Pre-Check status displays a status of Warning. What is a possible cause of the Warning status? A. VM1 is stopped. B. VM1 does not have the latest version of the Azure VM Agent (WaAppAgent.exe) installed. C. VM1 has an unmanaged disk. D. A Recovery Services vault is unavailable.
B. VM1 does not have the latest version of the Azure VM Agent (WaAppAgent.exe) installed. The discussion explicitly states that a warning in the Backup Pre-Check status can be caused by not having the latest version of the Azure VM Agent installed. The provided Microsoft documentation link (although not directly examined here for verification) supports this claim. Why other options are incorrect: * **A. VM1 is stopped:** While a stopped VM might prevent a backup, the question states that a *warning* is present, implying the VM is at least partially functional. A stopped VM would likely present a more severe error, not just a warning. * **C. VM1 has an unmanaged disk:** Unmanaged disks might present challenges during backup, but this is not explicitly stated as a cause of a "Warning" status in the backup pre-check. * **D. A Recovery Services vault is unavailable:** An unavailable vault would prevent the backup entirely, rather than generating a warning. Note: The consensus in the discussion is that option B is the correct answer.
457
[View Question](https://www.examtopics.com/discussions/databricks/view/63309-exam-az-104-topic-4-question-62-discussion/) You have an Azure virtual machine named VM1. VM1 was deployed by using a custom Azure Resource Manager template named ARM1.json. You receive a notification that VM1 will be affected by maintenance. You need to move VM1 to a different host immediately. Solution: From the Overview blade, you move the virtual machine to a different resource group. Does this meet the goal?
B. No. Moving a virtual machine to a different resource group does *not* move it to a different host. The host refers to the underlying physical server in the Azure data center. Changing the resource group only changes the organizational structure within Azure, not the physical location of the VM. To move the VM to a different host to mitigate maintenance, redeploying the VM is the correct action. The discussion shows significant disagreement on the correct answer. While the suggested answer is B, several comments correctly identify redeploying the VM as the solution. The disagreement highlights the ambiguity in the question regarding the definition of "host" in this context and the various interpretations of the required solution.
458
[View Question](https://www.examtopics.com/discussions/databricks/view/63314-exam-az-104-topic-3-question-46-discussion/) You have an Azure subscription. In the Azure portal, you plan to create a storage account named storage1 that will have the following settings: ✑ Performance: Standard ✑ Replication: Zone-redundant storage (ZRS) ✑ Access tier (default): Cool ✑ Hierarchical namespace: Disabled You need to ensure that you can set Account kind for storage1 to BlockBlobStorage. Which setting should you modify first? A. Performance B. Replication C. Access tier (default) D. Hierarchical namespace
A. Performance Explanation: To set the Account kind to BlockBlobStorage, you must first select "Premium" performance. BlockBlobStorage is only available with Premium storage accounts, as noted in the provided discussion referencing Microsoft documentation. Changing the performance setting to Premium allows you to then select BlockBlobStorage as the account kind. The other options (Replication, Access Tier, and Hierarchical Namespace) are not directly related to the availability of BlockBlobStorage as an account kind. Why other options are incorrect: * **B. Replication:** Replication type (ZRS in this case) is independent of the account kind and performance tier. * **C. Access tier (default):** The access tier affects storage costs and performance characteristics but doesn't determine the available account kinds. * **D. Hierarchical namespace:** This is a feature enabling hierarchical organization within the storage account, unrelated to the account kind. Note: The discussion shows some minor disagreement on the exact wording of the relationship between Premium performance and BlockBlobStorage. However, the consensus is that Premium performance is a prerequisite for selecting BlockBlobStorage as the account kind.
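A sketch showing why Performance must change first: the BlockBlobStorage kind only accepts a Premium SKU (names are illustrative; Premium_ZRS would preserve the zone-redundant replication from the plan):

```bash
# BlockBlobStorage requires Premium performance (Premium_LRS or Premium_ZRS)
az storage account create --name storage1example --resource-group rg1 \
    --kind BlockBlobStorage --sku Premium_ZRS
```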
459
[View Question](https://www.examtopics.com/discussions/databricks/view/63320-exam-az-104-topic-3-question-24-discussion/) You have an Azure Storage account named storage1. You plan to use AzCopy to copy data to storage1. You need to identify the storage services in storage1 to which you can copy the data. Which storage services should you identify? A. blob, file, table, and queue B. blob and file only C. file and table only D. file only E. blob, table, and queue only
The correct answer is **B. blob and file only**. While there is some disagreement in the discussion regarding AzCopy's capabilities with Azure Table storage across different versions, the most recent and widely accepted consensus points to blobs and files as the only storage services directly supported by AzCopy for data copying. Older versions may have had experimental or limited support for tables, but this is not consistently reliable. Why other options are incorrect: * **A. blob, file, table, and queue:** Queues and tables are not directly supported by AzCopy for data copying. * **C. file and table only:** While some older AzCopy versions might have briefly supported tables, current versions do not. * **D. file only:** This is incomplete; AzCopy also supports blob storage. * **E. blob, table, and queue only:** Queues are not supported, and table support is unreliable and likely deprecated in current AzCopy versions. Note: The discussion reveals conflicting information about AzCopy's support for Azure Table storage across its various versions. The answer reflects the current, most widely accepted understanding, acknowledging that this might have changed over time.
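Typical AzCopy (v10) commands target the blob or file endpoints, for example (the account, container, share, and SAS token are placeholders):

```bash
# Copy a local file to Blob storage
azcopy copy ./data.csv "https://storage1.blob.core.windows.net/container1/data.csv?<SAS>"

# Copy a local file to an Azure file share
azcopy copy ./data.csv "https://storage1.file.core.windows.net/share1/data.csv?<SAS>"
```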
460
[View Question](https://www.examtopics.com/discussions/databricks/view/63324-exam-az-104-topic-3-question-37-discussion/) You create an Azure Storage account named contosostorage. You plan to create a file share named data. Users need to map a drive to the data file share from home computers that run Windows 10. Which outbound port should you open between the home computers and the data file share? A. 80 B. 443 C. 445 D. 3389
C. 445 The correct answer is 445 because this port is used by the Server Message Block (SMB) protocol, which is the standard protocol used by Windows for file sharing. To map a network drive to an Azure file share from a Windows 10 machine, the SMB protocol must be able to communicate between the client and the Azure file share. Why other options are incorrect: * **A. 80:** This port is used for HTTP (Hypertext Transfer Protocol), the protocol for web browsing. * **B. 443:** This port is used for HTTPS (Hypertext Transfer Protocol Secure), the secure version of HTTP. * **D. 3389:** This port is used for RDP (Remote Desktop Protocol), which is used for remote desktop connections. Note: While the consensus in the provided discussion points to 445 as the correct answer, it is important to be aware that some ISPs may block this port. Workarounds like using a VPN or Azure ExpressRoute might be necessary in such scenarios.
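Before mapping the drive, you can confirm from the Windows 10 machine that outbound TCP 445 is open; a sketch in PowerShell (the drive letter and key placeholder are illustrative):

```powershell
# Check that outbound TCP 445 to the file share endpoint is reachable
Test-NetConnection -ComputerName contosostorage.file.core.windows.net -Port 445

# Map the data share using the storage account name and key
net use Z: \\contosostorage.file.core.windows.net\data /user:Azure\contosostorage <storage-account-key>
```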
461
[View Question](https://www.examtopics.com/discussions/databricks/view/63382-exam-az-104-topic-5-question-84-discussion/) You have a computer named Computer1 that has a point-to-site VPN connection to an Azure virtual network named VNet1. The point-to-site connection uses a self-signed certificate. From Azure, you download and install the VPN client configuration package on a computer named Computer2. You need to ensure that you can establish a point-to-site VPN connection to VNet1 from Computer2. Solution: On Computer2, you set the Startup type for the IPSec Policy Agent service to Automatic. Does this meet the goal? A. Yes B. No
B. No. This solution does not meet the goal. The IPSec Policy Agent service being set to Automatic is insufficient to establish a point-to-site VPN connection. The core issue is that the self-signed certificate needs to be *installed* on Computer2. The VPN client configuration package may contain other necessary files, but the certificate itself is crucial for authentication and establishing the connection. Simply setting the service startup type does not address the missing certificate. Why other options are incorrect: A. Yes - This is incorrect because setting the IPSec Policy Agent service startup type to Automatic does not solve the underlying problem of the missing client certificate on Computer2. Note: There is unanimous agreement among the discussion participants that option B is correct.
462
[View Question](https://www.examtopics.com/discussions/databricks/view/63489-exam-az-104-topic-5-question-22-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0037900001.png) The Not allowed resource types Azure policy that has policy enforcement enabled is assigned to RG1 and uses the following parameters: Microsoft.Network/virtualNetworks Microsoft.Compute/virtualMachines In RG1, you need to create a new virtual machine named VM2, and then connect VM2 to VNET1. What should you do first? A. Remove Microsoft.Compute/virtualMachines from the policy. B. Create an Azure Resource Manager template C. Add a subnet to VNET1. D. Remove Microsoft.Network/virtualNetworks from the policy.
A. Remove `Microsoft.Compute/virtualMachines` from the policy. The question states that a policy is in place that prevents the creation of virtual machines and virtual networks within Resource Group RG1. Since you need to create a new virtual machine (VM2), the first step is to remove the `Microsoft.Compute/virtualMachines` parameter from the policy, which currently blocks the creation of new virtual machines. VNET1 already exists, so removing the `Microsoft.Network/virtualNetworks` parameter is unnecessary. Creating a Resource Manager template (B) or adding a subnet (C) are irrelevant until the policy preventing VM creation is addressed. Why other options are incorrect: * **B. Create an Azure Resource Manager template:** While ARM templates are useful for creating resources, they won't resolve the policy blocking VM creation. * **C. Add a subnet to VNET1:** Adding a subnet is only necessary *after* the VM is created and is not the first step. The existing VNET1 already appears to have subnets. * **D. Remove `Microsoft.Network/virtualNetworks` from the policy:** This is incorrect because VNET1 already exists; the policy is blocking the creation of *new* virtual networks, not the use of existing ones. The problem is the creation of the new VM, not the network itself. Note: The discussion shows overwhelming agreement on answer A.
463
[View Question](https://www.examtopics.com/discussions/databricks/view/63503-exam-az-104-topic-11-question-2-discussion/) HOTSPOT - You need to create storage5. The solution must support the planned changes. Which type of storage account should you use, and which account should you configure as the destination storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0022800001.jpg)
The provided text does not include the contents of the images showing the question's "Hot Area" and "Suggested Answer," so a definitive answer cannot be given. Answering would require the image content, specifically the options for the storage account type and the destination storage account. The discussion section contains only general exam chatter from students and does not address the question itself.
464
**** [View Question](https://www.examtopics.com/discussions/databricks/view/63586-exam-az-104-topic-5-question-79-discussion/) DRAG DROP - You have an Azure subscription that contains two on-premises locations named site1 and site2. You need to connect site1 and site2 by using an Azure Virtual WAN. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0047300001.jpg) *(Image contains a drag-and-drop interface with actions, but the text of the actions is not visible in the provided context)* **
** The correct sequence of actions to connect site1 and site2 using Azure Virtual WAN is: 1. Create Azure Virtual WAN 2. Create Virtual Hub 3. Create VPN sites (representing site1 and site2) 4. Connect VPN sites to the virtual hub This sequence reflects the logical order of establishing a Virtual WAN connection. First, the overarching Virtual WAN resource needs to be created. Then, a Virtual Hub, acting as a central point of connectivity within the WAN, is created. Next, VPN sites representing your on-premises locations (site1 and site2) are created. Finally, these VPN sites are connected to the Virtual Hub to establish the connection between your on-premises locations via the Azure Virtual WAN. **Why other options are incorrect:** The provided text does not list any other options; therefore, this answer is based on the most logical and commonly accepted sequence for setting up a site-to-site VPN connection using Azure Virtual WAN. The image of the drag-and-drop interface is unavailable, preventing a complete analysis of all possible options presented within the original question. The discussion section supports this answer as the correct sequence for the exam. **Note:** While the provided discussion section indicates that the suggested answer is correct for the exam AZ-104, there's no way to verify definitively other possible options given the limited information about the drag-and-drop interface's contents.
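A sketch of the sequence in Azure CLI (these commands live in the `virtual-wan` CLI extension; all names, prefixes, and IP addresses are illustrative, and step 4 assumes a VPN gateway named hub1-vpngw has been created in the hub):

```bash
# 1. Create the Virtual WAN
az network vwan create --resource-group rg1 --name vwan1 --location westus

# 2. Create a virtual hub inside the WAN
az network vhub create --resource-group rg1 --name hub1 \
    --vwan vwan1 --address-prefix 10.100.0.0/24 --location westus

# 3. Create VPN sites representing the on-premises locations
az network vpn-site create --resource-group rg1 --name site1 \
    --ip-address 203.0.113.10 --virtual-wan vwan1 --location westus
az network vpn-site create --resource-group rg1 --name site2 \
    --ip-address 203.0.113.20 --virtual-wan vwan1 --location westus

# 4. Connect each site to the hub's VPN gateway
az network vpn-gateway connection create --resource-group rg1 \
    --gateway-name hub1-vpngw --name site1-conn --remote-vpn-site site1
az network vpn-gateway connection create --resource-group rg1 \
    --gateway-name hub1-vpngw --name site2-conn --remote-vpn-site site2
```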
465
**** [View Question](https://www.examtopics.com/discussions/databricks/view/67424-exam-az-104-topic-2-question-48-discussion/) You have an Azure subscription named Sub1 that contains the Azure resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010400001.png) You assign an Azure policy that has the following settings: ✑ Scope: Sub1 ✑ Exclusions: Sub1/RG1/VNET1 ✑ Policy definition: Append a tag and its value to resources ✑ Policy enforcement: Enabled ✑ Tag name: Tag4 ✑ Tag value: value4 You assign tags to the resources as shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010400008.png) For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010500001.png) **
The correct answer is No, No, No.

* **Box 1 (RG1): No.** The policy appends Tag4: value4. While the policy *could* add Tag4 to RG1, the provided text and images do not state whether the policy was applied *before* or *after* Tag2: IT was manually added to RG1. The discussion highlights a disagreement: some believe the policy would append Tag4, resulting in two tags (Tag2: IT and Tag4: value4), while others suggest the policy would not affect existing tags. Definitively stating that only Tag2: IT exists is therefore not possible without knowing the timing of the policy application.
* **Box 2 (Storage1): No.** Tags are not inherited in Azure (unless a separate inheritance policy is configured). Storage1 keeps its existing tag (Tag3: value1), and the policy adds Tag4: value4; it does *not* inherit Tag2: IT from RG1.
* **Box 3 (VNET1): No.** VNET1 is explicitly excluded from the policy's scope (Sub1/RG1/VNET1), so the policy does not add Tag4: value4 to VNET1. It retains only its existing Tag3: value2.

**Why other options are incorrect:** The suggested answer incorrectly assumes the policy would override existing tags or that tags are inherited. The discussion shows differing interpretations of the policy's behavior regarding existing tags and the timing of its application; the claim that only one tag exists on RG1 is questionable and unsupported by the provided information.
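For reference, an assignment with the settings described in the question could be created with Az.Resources roughly as follows. This is a hedged sketch: the subscription ID is a placeholder, and the lookup of the built-in definition by display name is an assumption whose output shape varies across Az versions.

```powershell
# Look up the built-in "Append a tag and its value to resources" definition
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Append a tag and its value to resources' }

# Assign at subscription scope, excluding VNET1, with Tag4/value4 as parameters
New-AzPolicyAssignment -Name 'AppendTag4' `
    -Scope '/subscriptions/<sub-id>' `
    -NotScope '/subscriptions/<sub-id>/resourceGroups/RG1/providers/Microsoft.Network/virtualNetworks/VNET1' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ tagName = 'Tag4'; tagValue = 'value4' }
```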
466
[View Question](https://www.examtopics.com/discussions/databricks/view/67426-exam-az-104-topic-2-question-49-discussion/) You need to ensure that an Azure Active Directory (Azure AD) user named Admin1 is assigned the required role to enable Traffic Analytics for an Azure subscription. Solution: You assign the Traffic Manager Contributor role at the subscription level to Admin1. Does this meet the goal? A. Yes B. No
B. No

The discussion shows conflicting information regarding the required roles. While some users initially suggested that the Traffic Manager Contributor role, or even the Reader role, would suffice, later posts cite the official Microsoft documentation (which has apparently been corrected since the initial posts) stating that only the Owner, Contributor, or Network Contributor roles at the subscription level are sufficient to enable Traffic Analytics. Assigning the Traffic Manager Contributor role does not meet this requirement.

**Why other options are incorrect:** A (Yes) is incorrect because the roles that can enable Traffic Analytics are Owner, Contributor, or Network Contributor, not Traffic Manager Contributor. The conflicting information in the discussion highlights the uncertainty around the role requirements; however, the most up-to-date and reliable source points to the roles listed above.

**Note:** There is significant disagreement within the provided discussion about which Azure roles are required to enable Traffic Analytics. The answer reflects the most up-to-date, officially supported documentation linked within the discussion.
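A minimal sketch of an assignment that would meet the goal, assuming Network Contributor is acceptable; the sign-in name and subscription ID are placeholders.

```powershell
# Assign Network Contributor at subscription scope so Admin1 can enable Traffic Analytics
New-AzRoleAssignment -SignInName 'admin1@contoso.com' `
    -RoleDefinitionName 'Network Contributor' `
    -Scope '/subscriptions/<sub-id>'
```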
467
[View Question](https://www.examtopics.com/discussions/databricks/view/67428-exam-az-104-topic-2-question-50-discussion/) You have three offices and an Azure subscription that contains an Azure Active Directory (Azure AD) tenant. You need to grant user management permissions to a local administrator in each office. What should you use? A. Azure AD roles B. administrative units C. access packages in Azure AD entitlement management D. Azure roles
B. administrative units

Explanation: Administrative units let you delegate user management permissions over a specific subset of users, such as the users in a regional office. This directly addresses the requirement of granting user management permissions to a local administrator in each of the three offices, allowing each administrator to manage users only within their respective office. The other options do not provide this granular control.

Why other options are incorrect:

* **A. Azure AD roles:** Azure AD roles provide broad permissions across the entire Azure AD tenant. This doesn't meet the requirement of restricting permissions to specific offices.
* **C. access packages in Azure AD entitlement management:** Access packages manage access to applications and resources, not user management permissions within the directory itself.
* **D. Azure roles:** Azure roles manage permissions over Azure resources (virtual machines, storage accounts, and so on), not Azure Active Directory user management.

Note: The discussion highlights a potential ambiguity in the question. Some argue that the question doesn't explicitly state that the local administrators *should only* manage users within their respective offices. While the chosen answer (administrative units) is the most suitable solution for the *likely* intended scenario, the question's lack of an explicit restriction leaves room for interpretation.
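Administrative units are created and populated outside the Az module, for example with Microsoft Graph PowerShell. A heavily hedged sketch, assuming the Microsoft.Graph module is installed; cmdlet names have shifted across SDK versions, the display name is a placeholder, and scoping an admin role (such as User Administrator) to the unit is a separate step after creation.

```powershell
# Connect with permission to manage administrative units
Connect-MgGraph -Scopes 'AdministrativeUnit.ReadWrite.All'

# Create one administrative unit per office (display name is a placeholder)
$au = New-MgDirectoryAdministrativeUnit -DisplayName 'Office1-Users'

# A scoped role assignment (e.g., User Administrator limited to this unit)
# would then be added for the local administrator of Office1.
```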
468
[View Question](https://www.examtopics.com/discussions/databricks/view/67429-exam-az-104-topic-2-question-51-discussion/) You have an Azure Directory (Azure AD) tenant named Adatum and an Azure Subscription named Subscription1. Adatum contains a group named Developers. Subscription1 contains a resource group named Dev. You need to provide the Developers group with the ability to create Azure logic apps in the Dev resource group. Solution: On Dev, you assign the Logic App Contributor role to the Developers group. Does this meet the goal? A. Yes B. No
A. Yes

The Logic App Contributor role allows users to create and manage logic apps within a specified scope. Assigning this role to the Developers group on the Dev resource group grants them the permissions needed to create Azure logic apps in that resource group, so the solution meets the goal.

Other options: Option B is incorrect because the provided solution directly addresses the requirement; the Logic App Contributor role is designed for exactly this purpose.
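A hedged PowerShell sketch of the assignment described in the solution; the group lookup assumes the display name "Developers" is unique in the tenant.

```powershell
# Grant the Developers group Logic App Contributor on the Dev resource group
$group = Get-AzADGroup -DisplayName 'Developers'
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName 'Logic App Contributor' `
    -ResourceGroupName 'Dev'
```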
469
[View Question](https://www.examtopics.com/discussions/databricks/view/67433-exam-az-104-topic-3-question-16-discussion/) You have an Azure subscription that contains a storage account named account1. You plan to upload the disk files of a virtual machine to account1 from your on-premises network. The on-premises network uses a public IP address space of 131.107.1.0/24. You plan to use the disk files to provision an Azure virtual machine named VM1. VM1 will be attached to a virtual network named VNet1. VNet1 uses an IP address space of 192.168.0.0/24. You need to configure account1 to meet the following requirements: ✑ Ensure that you can upload the disk files to account1. ✑ Ensure that you can attach the disks to VM1. ✑ Prevent all other access to account1. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. From the Networking blade of account1, select Selected networks. B. From the Networking blade of account1, select Allow trusted Microsoft services to access this storage account. C. From the Networking blade of account1, add the 131.107.1.0/24 IP address range. D. From the Networking blade of account1, add VNet1. E. From the Service endpoints blade of VNet1, add a service endpoint.
The correct answer is A and C.

* **A. From the Networking blade of account1, select Selected networks:** Switching from "All networks" to "Selected networks" turns on the storage firewall so that only explicitly allowed networks and address ranges can reach account1. This is what satisfies the "prevent all other access" requirement; the allowed sources are then added as rules.
* **C. From the Networking blade of account1, add the 131.107.1.0/24 IP address range:** This allows the on-premises network's public address space through the firewall, which is required to upload the disk files from on-premises.

The discussion reveals conflicting opinions regarding option D. Some argue it is unnecessary because attaching disks to a VM does not go through the storage account's REST endpoint, so network rules are irrelevant to that step; others suggest adding VNet1 to support the "prevent all other access" requirement. The most direct and complete solution remains A and C. Option B is incorrect because it allows access from trusted Microsoft services only, not your on-premises network or the VM. Option E is incorrect because it configures the VNet rather than the storage account's network rules.
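The same configuration can be scripted with Az.Storage. A minimal sketch, assuming account1 lives in a resource group named RG1 (a placeholder):

```powershell
# Deny by default (equivalent to selecting "Selected networks" in the portal)
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName RG1 -Name account1 -DefaultAction Deny

# Allow the on-premises public address range so the disk files can be uploaded
Add-AzStorageAccountNetworkRule -ResourceGroupName RG1 -Name account1 -IPAddressOrRange '131.107.1.0/24'
```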
470
[View Question](https://www.examtopics.com/discussions/databricks/view/67446-exam-az-104-topic-3-question-38-discussion/) You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data? A. Azure File Storage B. an Azure Cosmos DB database C. Azure Data Factory D. Azure SQL Database
The correct answer is **A. Azure File Storage**. Azure Import/Export service allows importing large amounts of data to Azure Blob storage and Azure File storage. The other options are not supported destinations for Azure Import/Export jobs. Azure Cosmos DB, Azure Data Factory, and Azure SQL Database are services that can receive data, but not directly through the Azure Import/Export service. They may accept data from a storage account, but that data would need to be transferred separately *after* the import from the physical drives to the storage account. There is a strong consensus among the discussion participants that A is the correct answer.
471
[View Question](https://www.examtopics.com/discussions/databricks/view/67527-exam-az-104-topic-5-question-87-discussion/) You have an Azure subscription. You are deploying an Azure Kubernetes Service (AKS) cluster that will contain multiple pods. The pods will use Kubernetes networking. You need to restrict network traffic between the pods. What should you configure on the AKS cluster? A. the Azure network policy B. the Calico network policy C. pod security policies D. an application security group
B. the Calico network policy

Explanation: The question specifies that the pods will use Kubernetes networking (kubenet). Calico network policies support both Azure CNI and kubenet, making Calico the appropriate choice for restricting network traffic between pods in this scenario. Azure network policies only work with Azure CNI; pod security policies control pod security, not network traffic; and application security groups operate at the Azure virtual network level, not within the Kubernetes cluster.

Why other options are incorrect:

* **A. the Azure network policy:** Azure network policies require Azure CNI, which is not explicitly stated or implied to be in use here. "Kubernetes networking" is more general and includes kubenet.
* **C. pod security policies:** These deal with the security context of pods (e.g., capabilities, security context constraints), not network traffic restrictions.
* **D. an application security group:** Application security groups operate at the virtual network level and cannot directly control traffic between pods within an AKS cluster.

Note: The discussion shows a consensus that Calico is the correct answer because of its compatibility with both Azure CNI and kubenet networking.
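For illustration, a cluster with kubenet and Calico could be created as follows. A hedged sketch, assuming your Az.Aks version exposes the `-NetworkPolicy` parameter; the resource group, cluster name, and node count are placeholders.

```powershell
# Create an AKS cluster that uses kubenet networking with the Calico network policy
New-AzAksCluster -ResourceGroupName RG1 -Name aks1 `
    -NodeCount 2 `
    -NetworkPlugin kubenet `
    -NetworkPolicy calico
```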
472
[View Question](https://www.examtopics.com/discussions/databricks/view/67529-exam-az-104-topic-6-question-7-discussion/) You have the web apps shown in the following table.

| Web App Name | Type |
|---|---|
| Web App 1 | Azure App Service |
| Web App 2 | Azure VM |
| Web App 3 | On-premises |

You need to monitor the performance and usage of the apps by using Azure Application Insights. The solution must minimize modifications to the application code. What should you do on each app? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. (The images showing the hot spot and suggested answer are omitted here as they are not directly needed to construct the question.)
For each web app, install the Application Insights Agent.

**Explanation:** The question explicitly states the need to minimize code modifications. The Application Insights Agent (formerly Status Monitor v2) monitors applications with minimal code changes. It is distributed as a PowerShell module that can be installed on Azure App Service instances, Azure VMs, and on-premises servers, covering all three web app types. Manual instrumentation with the SDK requires code changes, which the problem statement explicitly rules out.

**Why other options are incorrect:** Any solution involving manual instrumentation (installing the SDK) is incorrect because the question requires minimizing code modifications. No other viable methods are indicated in the provided text.

**Note:** The discussion shows some minor disagreement on the specifics of the Application Insights setup; however, the core consensus is that the agent-based approach is correct when the goal is to minimize code changes.
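On the VM and the on-premises server, the agent is installed from PowerShell. A hedged sketch, assuming the Az.ApplicationMonitor module and a placeholder connection string; cmdlet and parameter names have shifted across agent versions (older releases used `-InstrumentationKey`).

```powershell
# Install the Application Insights Agent module on the web server
Install-Module -Name Az.ApplicationMonitor -AllowPrerelease -AcceptLicense

# Enable codeless monitoring for the IIS apps on this machine
Enable-ApplicationInsightsMonitoring -ConnectionString '<your-connection-string>'
```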
473
[View Question](https://www.examtopics.com/discussions/databricks/view/67544-exam-az-104-topic-4-question-65-discussion/) You have an Azure subscription. You deploy a virtual machine scale set that is configured as shown in the following exhibit. (Image showing VM scale set configuration with scale-out and scale-in rules) Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic. NOTE: Each correct selection is worth one point. (Image showing a hot area with two boxes to fill in answers for "Number of VMs at 11:00 AM" and "Number of VMs at 11:10 AM").
The most likely correct answer is Box 1: 3, Box 2: 1, the most-voted answer in the discussion. However, there is significant ambiguity in how the scale-in rule behaves, particularly because no explicit duration or cooldown period between scale-in events is given.

**Explanation:** The exhibit shows a scale-out rule that triggers when CPU utilization exceeds 70% and a scale-in rule that triggers when CPU utilization stays below 25% for 10 minutes, decreasing the instance count by 1 per event. Starting from 5 VMs at 10:00 AM with CPU consistently below 25%, a strict reading (one scale-in every 10 minutes with no cooldown) would attempt six decreases by 11:00 AM; since scaling in stops at the configured minimum instance count of 1, that reading gives 1 VM at 11:00 AM and still 1 VM at 11:10 AM. The most-voted answer of 3 and then 1 instead assumes that fewer scale-in events complete by 11:00 AM because of an implied cooldown between events. The two readings cannot be fully reconciled from the exhibit alone.

**Why other options are incorrect:** The discussion offers several interpretations of the scale-in rule's behavior, all stemming from the missing cooldown information; values such as 1/1 or 3/2 are defensible under different assumptions.

**Note:** This is a best guess based on the information presented. The ambiguity in the scale-in rule's description makes it impossible to state definitively that this interpretation is correct; the most-voted answer from the discussion was accepted as likely correct.
474
[View Question](https://www.examtopics.com/discussions/databricks/view/67545-exam-az-104-topic-4-question-66-discussion/) You have web apps in the West US, Central US and East US Azure regions. You have the App Service plans shown in the following table.

| Region | Operating System | App Service Plan |
|---|---|---|
| West US | Windows | ASP1 |
| Central US | Windows | ASP2 |
| East US | Windows | ASP3 |
| West US | Linux | ASP4 |

You plan to create an additional App Service plan named ASP5 that will use the Linux operating system. You need to identify in which of the currently used locations you can deploy ASP5. What should you recommend?

A. West US, Central US, or East US
B. Central US only
C. East US only
D. West US only
A. West US, Central US, or East US

Explanation: The question states that you have web apps deployed in West US, Central US, and East US. The table shows that a Linux-based App Service plan already exists in West US (ASP4), and no information suggests that Linux App Service plans are restricted to specific regions. Therefore, ASP5 can be deployed in any of the three regions where web apps already exist.

Why other options are incorrect:

* **B, C, and D:** These options incorrectly restrict the deployment of ASP5 to a single region. The provided information does not support such a limitation.

Note: The discussion surrounding this question highlights significant confusion and disagreement regarding the question's validity and purpose. Some commenters considered it a "trap" or a "pointless question." One commenter suggested the question might reference an older limitation on mixing Windows and Linux App Service plans in the same resource group, but this is not stated in the question itself. The answer reflects the most logical interpretation of the information given.
475
[View Question](https://www.examtopics.com/discussions/databricks/view/67546-exam-az-104-topic-4-question-67-discussion/) You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. the New-AzConfigurationAssignment cmdlet B. a Desired State Configuration (DSC) extension C. Azure Active Directory (Azure AD) Application Proxy D. Azure Application Insights
B. a Desired State Configuration (DSC) extension

A Desired State Configuration (DSC) extension lets you manage the configuration of your virtual machines. By including a DSC extension in your Azure Resource Manager template, you can ensure that NGINX is installed and configured correctly on every VM in the scale set after deployment; a hedged sketch follows.

Why other options are incorrect:

* **A. the New-AzConfigurationAssignment cmdlet:** This cmdlet (Az.Maintenance) assigns maintenance configurations to resources; it does not install or configure software on the VMs in a scale set during deployment.
* **C. Azure Active Directory (Azure AD) Application Proxy:** This service provides secure remote access to on-premises applications; it does not manage software configuration on VMs.
* **D. Azure Application Insights:** This service monitors and analyzes application performance; it does not deploy or configure software on VMs.

Note: The discussion indicates that the Azure Custom Script Extension would also be a valid solution, reflecting potential ambiguity or multiple correct answers in the original question source.
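A hedged PowerShell sketch of adding a DSC extension to an existing scale set; the configuration package URL, script, and function names are placeholders, and in practice the extension would usually be declared directly in the ARM template instead.

```powershell
# Fetch the scale set and attach a DSC extension that installs NGINX
$vmss = Get-AzVmss -ResourceGroupName RG1 -VMScaleSetName vmss1

$dscSettings = @{
    wmfVersion    = 'latest'
    configuration = @{
        url      = 'https://example.blob.core.windows.net/dsc/InstallNginx.zip'  # placeholder package
        script   = 'InstallNginx.ps1'
        function = 'InstallNginx'
    }
}

Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name 'DSC' `
    -Publisher 'Microsoft.Powershell' -Type 'DSC' `
    -TypeHandlerVersion '2.77' -Setting $dscSettings

Update-AzVmss -ResourceGroupName RG1 -VMScaleSetName vmss1 -VirtualMachineScaleSet $vmss
```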
476
[View Question](https://www.examtopics.com/discussions/databricks/view/67547-exam-az-104-topic-4-question-68-discussion/) You have an Azure subscription that contains the resources shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0032200001.png) In Azure Cloud Shell, you need to create a virtual machine by using an Azure Resource Manager (ARM) template. How should you complete the command? Select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0032300001.png)
Box 1: `New-AzResourceGroupDeployment`
Box 2: `-ResourceGroupName RG1`

**Explanation:** The question asks how to create a virtual machine from an ARM template in Azure Cloud Shell. The `New-AzResourceGroupDeployment` cmdlet is designed for deploying resources from an ARM template into a resource group, and the `-ResourceGroupName` parameter specifies the target resource group (RG1 in this case) where the VM will be created. Using `New-AzVm` without a template would require manually specifying all VM parameters on the command line, which is not what the question asks for.

**Why other options are incorrect:** The provided text does not list the alternative options, so they cannot be evaluated individually; only the options that correctly deploy from an ARM template are considered correct.
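A complete invocation would look roughly like this; the template and parameter file names are placeholders.

```powershell
# Deploy a VM defined in an ARM template into resource group RG1
New-AzResourceGroupDeployment -ResourceGroupName RG1 `
    -TemplateFile ./vm-template.json `
    -TemplateParameterFile ./vm-parameters.json
```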
477
[View Question](https://www.examtopics.com/discussions/databricks/view/67553-exam-az-104-topic-4-question-30-discussion/) You have an Azure subscription named Subscription1 that has the following providers registered: ✑ Authorization ✑ Automation ✑ Resources ✑ Compute ✑ KeyVault ✑ Network ✑ Storage ✑ Billing ✑ Web Subscription1 contains an Azure virtual machine named VM1 that has the following configurations: ✑ Private IP address: 10.0.0.4 (dynamic) ✑ Network security group (NSG): NSG1 ✑ Public IP address: None ✑ Availability set: AVSet ✑ Subnet: 10.0.0.0/24 ✑ Managed disks: No ✑ Location: East US You need to record all the successful and failed connection attempts to VM1. Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Enable Azure Network Watcher in the East US Azure region.
B. Add an Azure Network Watcher connection monitor.
C. Register the MicrosoftLogAnalytics provider.
D. Create an Azure Storage account.
E. Register the Microsoft.Insights resource provider.
F. Enable Azure Network Watcher flow logs.
D, E, F

To record all successful and failed connection attempts to VM1, you need to perform the following three actions:

* **D. Create an Azure Storage account:** Network Watcher flow logs require a storage account to store the collected data; this is where the connection logs are written.
* **E. Register the Microsoft.Insights resource provider:** This provider is required for Network Watcher logging to function; flow logs rely on it for data collection and analysis.
* **F. Enable Azure Network Watcher flow logs:** This directly enables logging of network traffic to and from the VM, capturing both successful and failed connection attempts.

**Why other options are incorrect:**

* **A. Enable Azure Network Watcher in the East US Azure region:** While Network Watcher is necessary, it is likely already enabled; the provided text hints that Network Watcher is enabled automatically when a virtual network is created. The question focuses on the *actions* needed to achieve the logging goal, not prerequisite service enablement.
* **B. Add an Azure Network Watcher connection monitor:** Connection monitors serve a different purpose (checking the reachability of resources), not comprehensively logging all connection attempts.
* **C. Register the MicrosoftLogAnalytics provider:** Log Analytics can be used to view and analyze network data, but registering that provider is less critical than the other actions because Network Watcher sends data through the Microsoft.Insights resource provider.

**Note:** The suggested answer (DEF) aligns with the explanation above. The discussion points out that Network Watcher is automatically enabled; the question is about the actions required to log connection attempts, not the initial configuration or enablement of Network Watcher itself.
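A hedged sketch of the three actions with Az PowerShell; the resource names and region are placeholders, and newer Az.Network versions expose flow logs through `New-AzNetworkWatcherFlowLog` instead of the cmdlet shown here.

```powershell
# E. Register the Microsoft.Insights resource provider
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Insights'

# D. Create a storage account to hold the flow logs
$sa = New-AzStorageAccount -ResourceGroupName RG1 -Name 'vm1flowlogs' `
    -Location eastus -SkuName Standard_LRS

# F. Enable NSG flow logs on NSG1 through Network Watcher
$nw  = Get-AzNetworkWatcher -Location eastus
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName RG1 -Name NSG1
Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw `
    -TargetResourceId $nsg.Id -StorageAccountId $sa.Id -EnableFlowLog $true
```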
478
[View Question](https://www.examtopics.com/discussions/databricks/view/67554-exam-az-104-topic-4-question-27-discussion/) HOTSPOT - You plan to deploy an Azure container instance by using the following Azure Resource Manager template. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0026400001.png) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the template. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0026500001.jpg)
Based on the suggested answer image (https://www.examtopics.com/assets/media/exam-media/04223/0026500002.jpg), which multiple users in the discussion marked as correct, the answers are:

1. **Internet users can connect to the container from any device:** The Azure Resource Manager template (implied from the context, not shown directly) likely lacks explicit inbound network restrictions, allowing connections from any device with Internet access. The discussion confirms this interpretation.
2. **If Internet Information Services (IIS) in the container fails, the container will restart automatically:** This implies the container is configured with a restart policy that automatically restarts it when a critical service such as IIS fails. The discussion comments support this.

**Why other options are incorrect:** No other options are explicitly provided in the question; the images and discussion only give a suggested answer that is marked correct. Any other options would be hypothetical and not part of the original question.

**Note:** The actual Azure Resource Manager template is not shown in the provided text, so a definitive answer depends on the contents of the image. The discussion, however, shows consensus on the suggested answer.
479
[View Question](https://www.examtopics.com/discussions/databricks/view/67601-exam-az-104-topic-9-question-1-discussion/) HOTSPOT - You need to ensure that User1 can create initiative definitions, and User4 can assign initiatives to RG2. The solution must meet the technical requirements. Which role should you assign to each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0013700001.png) ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0013800001.png)
The provided images show a multiple-choice question with a hotspot answer area. Based on the suggested answer image, User1 should be assigned the **Policy Definition Contributor** role, and User4 should be assigned the **Resource Policy Contributor** role.

* **User1 (create initiative definitions):** The Policy Definition Contributor role allows the creation and management of policy definitions. Initiatives are built upon policy definitions, so this role grants the necessary permissions.
* **User4 (assign initiatives to RG2):** The Resource Policy Contributor role allows the assignment of policies (and, by extension, initiatives) at a specific scope, such as resource group RG2.

The discussion highlights some disagreement about the correct answer. `meet_satish` states that a Contributor role cannot create or update definitions and assignments, which is consistent with the solution's use of more granular roles. The other options are not specified but would likely be other Azure roles lacking the precise permissions required to either create policy definitions or assign them to a specific resource group; a generic Contributor role, for example, might be too broad or too narrow for the specific tasks required.
480
[View Question](https://www.examtopics.com/discussions/databricks/view/67626-exam-az-104-topic-5-question-85-discussion/) You have five Azure virtual machines that run Windows Server 2016. The virtual machines are configured as web servers. You have an Azure load balancer named LB1 that provides load balancing services for the virtual machines. You need to ensure that visitors are serviced by the same web server for each request. What should you configure? A. Session persistence to Client IP and protocol B. Protocol to UDP C. Session persistence to None D. Floating IP (direct server return) to Enabled
The correct answer is **A. Session persistence to Client IP and protocol**. This option ensures that subsequent requests from the same client IP address using the same protocol are always directed to the same virtual machine, which achieves the goal of having each visitor serviced by the same web server for every request.

Why other options are incorrect:

* **B. Protocol to UDP:** Changing the protocol to UDP is irrelevant to session persistence and load balancing for web servers, which typically use TCP.
* **C. Session persistence to None:** This disables session persistence, so requests from the same client may be handled by different virtual machines, contradicting the requirement.
* **D. Floating IP (direct server return) to Enabled:** Floating IPs can provide high availability but do not inherently guarantee that a client is always directed to the same server for each request; they are typically used in single-server scenarios, whereas this question specifies a load-balanced setup.

Note: The discussion shows overwhelming agreement that option A is the correct answer.
481
[View Question](https://www.examtopics.com/discussions/databricks/view/67627-exam-az-104-topic-5-question-86-discussion/) You have an Azure subscription that uses the public IP addresses shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0048300001.png) You need to create a public Azure Standard Load Balancer. Which public IP addresses can you use?

A. IP1, IP2, and IP3
B. IP2 only
C. IP3 only
D. IP1 and IP3 only
C. IP3 only

A Standard Load Balancer in Azure requires a Standard SKU public IP address. Based on the image (not included here but present in the original prompt), only IP3 has the Standard SKU; IP1 and IP2 are Basic SKU public IP addresses, which are not compatible with Standard Load Balancers.

**Why other options are incorrect:**

* **A. IP1, IP2, and IP3:** IP1 and IP2 are Basic SKU public IPs and are incompatible with a Standard Load Balancer.
* **B. IP2 only:** IP2 is a Basic SKU public IP and is incompatible with a Standard Load Balancer.
* **D. IP1 and IP3 only:** IP1 is a Basic SKU public IP and is incompatible with a Standard Load Balancer.

**Note:** The discussion shows a strong consensus on answer C, with no significant disagreement.
482
[View Question](https://www.examtopics.com/discussions/databricks/view/67644-exam-az-104-topic-3-question-39-discussion/) You have an Azure subscription that contains an Azure Storage account named storageaccount1. You export storageaccount1 as an Azure Resource Manager template. The template contains the following sections:

```
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": {
        "description": "Name of the storage account"
      }
    }
  },
  "variables": {},
  "resources": [
    {
      "apiVersion": "2021-09-01",
      "name": "[parameters('storageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "location": "WestUS",
      "sku": {
        "name": "Standard_LRS",
        "tier": "Standard"
      },
      "kind": "StorageV2",
      "properties": {
        "networkRules": {
          "defaultAction": "Allow"
        },
        "ipRules": {
          "ipAddressOrRange": []
        },
        "virtualNetworkRules": {
          "subnetRules": []
        }
      }
    }
  ]
}
```

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.

1. By default, network traffic to storageaccount1 is allowed.
2. Individual blobs in storageaccount1 can be set to the archive tier.
3. A user can access blob data in the Azure portal with Azure AD credentials without any role assignments.
1. **Yes.** The template sets `"defaultAction": "Allow"` in the `networkRules` section, so by default network traffic to the storage account is permitted unless explicitly denied by other rules.
2. **Yes.** While there is discussion about the subtleties of archive tiers and StorageV2, the general consensus and Microsoft documentation support the ability to set individual blobs to the archive tier within a StorageV2 account. Users disagree on the technicalities of this feature, but the prevailing viewpoint, supported by the documentation, affirms the statement's truth.
3. **No.** Accessing blob data in the Azure portal with Azure AD credentials requires specific role assignments, such as the Storage Blob Data Contributor role or equivalent, along with an Azure Resource Manager Reader role for managing the resource. Accessing resources in Azure without the necessary permissions is not possible.

**Why other options are incorrect:** The explanations above detail why each answer is correct or incorrect. The discussion includes some nuanced debate around the archive tier and StorageV2, but the overall consensus and available documentation support these answers.
483
[View Question](https://www.examtopics.com/discussions/databricks/view/67686-exam-az-104-topic-6-question-8-discussion/) You have an Azure virtual machine named VM1. You use Azure Backup to create a backup of VM1 named Backup1. After creating Backup1, you perform the following changes to VM1: ✑ Modify the size of VM1. ✑ Copy a file named Budget.xls to a folder named Data. ✑ Reset the password for the built-in administrator account. ✑ Add a data disk to VM1. An administrator uses the Replace existing option to restore VM1 from Backup1. You need to ensure that all the changes to VM1 are restored. Which change should you perform again? A. Modify the size of VM1. B. Reset the password for the built-in administrator account. C. Add a data disk. D. Copy Budget.xls to Data.
The correct answer is **C and D**. Restoring a VM using the "Replace existing" option in Azure Backup will not restore changes made after the backup was created, such as adding a data disk (C) or copying a new file (D); those must be performed again. The VM size change and the administrator password reset will persist after the restore. The discussion cites testing that confirms this; the suggested answer (D) is only partially correct.

Why other options are incorrect:

* **A. Modify the size of VM1:** The VM size change *will* be retained after the restore.
* **B. Reset the password for the built-in administrator account:** The password reset *will* be retained after the restore.

Note: There is disagreement in the discussion regarding the correct answer. While the suggested answer is D, practical testing in Azure suggests that both C and D need to be redone after the restore.
484
[View Question](https://www.examtopics.com/discussions/databricks/view/73878-exam-az-104-topic-2-question-47-discussion/) You have an Azure Active Directory (Azure AD) tenant. You plan to delete multiple users by using Bulk delete in the Azure Active Directory admin center. You need to create and upload a file for the bulk delete. Which user attributes should you include in the file?

A. The user principal name and usage location of each user only
B. The user principal name of each user only
C. The display name of each user only
D. The display name and usage location of each user only
E. The display name and user principal name of each user only
B. The user principal name of each user only.

**Explanation:** To perform a bulk delete of users in Azure Active Directory using the bulk delete feature, the only required attribute in the uploaded CSV file is the user principal name (UPN) of each user. The UPN uniquely identifies each user account within Azure AD and is essential for the system to correctly identify and delete the intended accounts. Additional attributes such as display name or usage location are not necessary for the bulk delete operation itself, though they may be useful for record-keeping outside the deletion process.

**Why other options are incorrect:**

* **A, D, and E:** These options include unnecessary attributes (usage location and/or display name). Extra data might be useful for tracking, but it is not required for the bulk delete; the UPN alone is sufficient.
* **C:** The display name is not a unique identifier and is therefore insufficient for reliably targeting users for deletion.

**Note:** The discussion shows unanimous agreement on the correct answer.
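A hedged sketch of producing such a CSV with Az PowerShell. The exact header row is defined by the template downloadable from the bulk delete blade, so the header must be adjusted to match that template; the display-name filter below is a placeholder selection.

```powershell
# Export the UPNs of the users to delete; rename the column header afterwards
# to match the CSV template downloaded from the Azure AD bulk delete blade.
Get-AzADUser |
    Where-Object { $_.DisplayName -like 'Temp*' } |   # placeholder selection
    Select-Object UserPrincipalName |
    Export-Csv -Path .\BulkDeleteUsers.csv -NoTypeInformation
```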
485
[View Question](https://www.examtopics.com/discussions/databricks/view/73882-exam-az-104-topic-2-question-52-discussion/) You have an Azure Load Balancer named LB1. You assign a user named User1 the roles shown in the following exhibit. Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010800001.jpg) *(This image shows a screenshot of Azure role assignments for User1. The exact content of the image is not provided in the text, but it's crucial to the question.)* Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0010900001.jpg) *(This image shows the answer selection area. The exact options are not provided in the text, but it's crucial to the question.)*
The provided text does not give the exact options in the drop-down menus, so a complete answer cannot be provided. However, based on the discussion, the correct answers would likely be:

1. **User1 can "assign access to other users for" LB1:** The role assignments (shown in the missing image) likely grant User1 at least Contributor-level access to LB1, which the discussion treats as sufficient for managing access to the resource.
2. **User1 can "delete a virtual machine from" the resource group:** The role assignments likely include a role with permissions to manage virtual machines within the resource group where LB1 resides. Again, this depends on the missing image showing the assigned roles.

**Why other options are incorrect (hypothetical):** Without the specific options presented in the drop-down menus, it is impossible to state definitively why other options are incorrect. However, any option suggesting User1 could perform actions exceeding the permissions granted by the assigned roles would be incorrect. For example, an option stating User1 could *manage subscriptions* would likely be wrong unless the user held a subscription-level administrative role, which is unlikely in the context of a single load balancer.

**Note:** The answer is highly dependent on the content of the missing images showing the role assignments and the drop-down options. The discussion strongly suggests a consensus on the two statements above, but without the visuals this remains an incomplete answer.
486
[View Question](https://www.examtopics.com/discussions/databricks/view/73971-exam-az-104-topic-5-question-91-discussion/) You have an Azure subscription that contains two virtual machines named VM1 and VM2. You create an Azure load balancer. You plan to create a load balancing rule that will load balance HTTPS traffic between VM1 and VM2. Which two additional load balancer resources should you create before you can create the load balancing rule? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. a frontend IP address
B. an inbound NAT rule
C. a virtual network
D. a backend pool
E. a health probe
D and E: a backend pool and a health probe.

To create a load balancing rule for HTTPS traffic between VM1 and VM2, you need a backend pool that specifies the target VMs (VM1 and VM2) and a health probe that monitors their health. The load balancer must know which VMs to direct traffic to (backend pool) and how to determine whether those VMs are healthy (health probe) before it can route traffic; a sketch follows below.

**Why other options are incorrect:**

* **A. a frontend IP address:** A frontend IP address is required for a load balancer to function, but the question states that the load balancer *has already been created*. The discussion debates whether a frontend IP is required at load balancer creation time, with some arguing it can be added later; either way, the question asks what is needed before creating the *rule*, and a frontend IP is assumed to exist because the load balancer already does.
* **B. an inbound NAT rule:** Inbound NAT rules are not directly involved in load balancing rules; the load balancer handles address translation implicitly.
* **C. a virtual network:** The virtual network is a prerequisite for creating the VMs and the load balancer, but it is not directly relevant to creating the load balancing rule itself and is assumed to be already configured.

**Note:** The discussion shows some disagreement over whether a frontend IP address must exist *before* creating a load balancing rule as opposed to before creating the load balancer itself; the consensus leans toward the frontend IP already existing since the load balancer is already created. The focus therefore falls on the backend pool and health probe, which are undeniably necessary components.
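A hedged Az.Network sketch of the two missing resources and the rule that uses them; `$feip` stands for the load balancer's existing frontend IP configuration and is an assumption here.

```powershell
# Backend pool for VM1 and VM2, and a TCP health probe on port 443
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name 'bepool'
$probe = New-AzLoadBalancerProbeConfig -Name 'https-probe' -Protocol Tcp -Port 443 `
    -IntervalInSeconds 15 -ProbeCount 2

# The load balancing rule then ties the frontend, pool, and probe together
$rule = New-AzLoadBalancerRuleConfig -Name 'https-rule' -Protocol Tcp `
    -FrontendPort 443 -BackendPort 443 `
    -FrontendIpConfiguration $feip -BackendAddressPool $pool -Probe $probe
```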
487
[View Question](https://www.examtopics.com/discussions/databricks/view/73975-exam-az-104-topic-6-question-12-discussion/) You have an Azure subscription that contains an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant is synced to the on-premises Active Directory domain. The domain contains the users shown in the following table. ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0054900001.png) You enable self-service password reset (SSPR) for all users and configure SSPR to have the following authentication methods:

* Number of methods required to reset: 2
* Methods available to users: Mobile phone, Security questions
* Number of questions required to register: 3
* Number of questions required to reset: 3

You select the following security questions:

* What is your favorite food?
* In what city was your first job?
* What was the name of your first pet?

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: ![Image](https://www.examtopics.com/assets/media/exam-media/04223/0055000001.png)
The image with the statements to evaluate is missing, so the correct Yes/No selections cannot be given definitively. Based on the provided text and the discussion, the following can be inferred: administrators are subject to a "two-gate" policy, meaning they require two strong authentication methods and cannot use security questions for password reset. Any statement suggesting an administrator can reset a password with security questions would therefore be answered "No," while statements about other users would be "Yes" or "No" depending on whether the user has registered the required two methods (and, for security questions, the three required questions).

**Why other options are incorrect:** Without the statements from the Hot Area image, specific options cannot be definitively evaluated. However, any answer contradicting the two-gate policy for administrators or the configured requirements (two authentication methods, three security questions registered, three used to reset) would be incorrect.

**Note:** The accuracy of the answers depends entirely on the content of the missing image. The discussion notes that the administrator policy differs from the user policies but does not detail the statements, so this ambiguity cannot be resolved from the provided material.
488
[View Question](https://www.examtopics.com/discussions/databricks/view/73994-exam-az-104-topic-3-question-44-discussion/) HOTSPOT - You have an Azure Storage account named storage1 that stores images. You need to create a new storage account and replicate the images in storage1 to the new account by using object replication. How should you configure the new account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Based on the referenced suggested answer image (not directly included in the text), the new storage account should be a general-purpose v2 (StorageV2) or BlobStorage account, and a container must be created in it to receive the replicated objects. The exact hotspot options are not provided in the text and are inferred from the suggested answer image.

The discussion clarifies that only StorageV2 or BlobStorage account types support object replication; other account types do not. A container is necessary because object replication works by replicating blobs between source and destination containers.

Note: The exact options within the hotspot are unavailable without access to the image; this answer is based on the discussion and the referenced suggested answer image. There is no indication of conflicting opinions in the provided discussion.
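A hedged sketch of wiring up object replication once the destination account and container exist. The account and container names are placeholders, and the source account must have blob versioning and change feed enabled for the policy to be accepted.

```powershell
# Replication rule: copy blobs from the 'images' container on storage1
# to the 'images' container on the new account (names are placeholders)
$rule = New-AzStorageObjectReplicationPolicyRule -SourceContainer 'images' -DestinationContainer 'images'

# Apply the policy on the destination account, pointing back at the source account
Set-AzStorageObjectReplicationPolicy -ResourceGroupName RG1 -StorageAccountName 'storage2' `
    -PolicyId 'default' -SourceAccount 'storage1' -Rule $rule
```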
489
[View Question](https://www.examtopics.com/discussions/databricks/view/74021-exam-az-104-topic-2-question-53-discussion/) You have an Azure subscription named Subscription1 that contains a virtual network named VNet1. VNet1 is in a resource group named RG1. Subscription1 has a user named User1. User1 has the following roles: ✑ Reader ✑ Security Admin ✑ Security Reader You need to ensure that User1 can assign the Reader role for VNet1 to other users. What should you do?

A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1.
B. Assign User1 the Owner role for VNet1.
C. Assign User1 the Contributor role for VNet1.
D. Assign User1 the Network Contributor role for VNet1.
B. Assign User1 the Owner role for VNet1.

The Owner role provides full access to manage all resources within VNet1, including the ability to assign roles through Azure role-based access control (Azure RBAC). This directly addresses the requirement of allowing User1 to assign the Reader role to other users for VNet1.

**Why other options are incorrect:**

* **A. Remove User1 from the Security Reader role for Subscription1. Assign User1 the Contributor role for RG1:** The Contributor role at the resource group level (RG1) does not grant the permissions needed to assign roles *within* VNet1; removing the Security Reader role is irrelevant to the problem.
* **C. Assign User1 the Contributor role for VNet1:** The Contributor role allows management of resources but does *not* grant permission to assign roles.
* **D. Assign User1 the Network Contributor role for VNet1:** The Network Contributor role allows management of network resources but does not grant permission to assign roles to other users.

**Note:** The discussion shows a consensus that option B is correct.
490
[View Question](https://www.examtopics.com/discussions/databricks/view/74076-exam-az-104-topic-6-question-11-discussion/) You have two Azure App Service apps named App1 and App2. Each app has a production deployment slot and a test deployment slot. The Backup Configuration settings for the production slots are shown in the following table (see image below). For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. (The image shows a table with App1 and App2 listed, both having a "0" for "How many days to keep a backup before automatically deleting it. Set to 0 for indefinite retention.")
The question's statements image is missing from the prompt, so a definitive answer cannot be given. However, based on the discussion, the answer would likely be:

* **App1:** Yes (indefinite retention, since 0 is set)
* **App2:** Yes (indefinite retention, since 0 is set)

**Explanation:** The discussion clarifies that a retention setting of 0 means indefinite retention, so backups for both App1 and App2 are kept indefinitely per their configuration. It also points out that only production-slot backups are created unless a slot is explicitly specified in the command or API call, so only the production slots are relevant to this question.

**Why other options are incorrect (hypothetical):** Any answer suggesting a finite retention period for either app would be incorrect because both are configured for indefinite retention (0 days). Any answer involving the test slots would also be incorrect because the question concerns only the production slots' backup configuration.

**Note:** The complete question requires the image containing the statements to be evaluated; the provided text gives context for interpreting the possible answers but not the statements themselves.
491
[View Question](https://www.examtopics.com/discussions/databricks/view/74098-exam-az-104-topic-4-question-38-discussion/) You have an Azure subscription named Subscription1 that contains the following resource group: ✑ Name: RG1 ✑ Region: West US ✑ Tag: `tag1`: `value1` You assign an Azure policy named Policy1 to Subscription1 by using the following configurations: ✑ Exclusions: None ✑ Policy definition: Append a tag and its value to resources ✑ Assignment name: Policy1 ✑ Parameters: ✑ Tag name: tag2 Tag value: value2 After Policy1 is assigned, you create a storage account that has the following configuration: ✑ Name: storage1 ✑ Location: West US ✑ Resource group: RG1 ✑ Tags: `tag3`: `value3` You need to identify which tags are assigned to each resource. What should you identify? Select the appropriate options.
* **RG1:** `tag1`: `value1` only.
* **storage1:** `tag2`: `value2` and `tag3`: `value3`

The Azure policy uses the *append* effect, which adds the tag when a resource is created or updated after the policy is assigned; it does not retroactively modify existing resources. RG1 already existed before Policy1 was assigned, so the policy's `tag2: value2` is not added to it. The storage account storage1, however, is created after the assignment, so the policy appends `tag2: value2`, and storage1 also retains its originally assigned `tag3: value3`. Importantly, tags applied to a resource group are not inherited by the resources within it.

The discussion highlights the key point: the policy's append effect is not retroactive. It applies only to new or updated resources, not existing ones, so the existing tags on RG1 remain unchanged.
492
[View Question](https://www.examtopics.com/discussions/databricks/view/74142-exam-az-104-topic-5-question-97-discussion/) You have an Azure subscription named Subscription1. Subscription1 contains the virtual machines shown in the table below:

| VM Name | IP Address | Status |
|---|---|---|
| VM1 | 10.0.1.4 | Running |
| VM2 | 10.0.2.4 | Running |
| VM3 | 10.0.0.4 | Running |

Subscription1 contains a virtual network named VNet1 that has the subnets shown in the table below:

| Subnet Name | Address Range |
|---|---|
| Subnet1 | 10.0.1.0/24 |
| Subnet2 | 10.0.2.0/24 |
| Subnet3 | 10.0.0.0/24 |

VM3 has multiple network adapters, including a network adapter named NIC3. IP forwarding is enabled on NIC3. Routing is enabled on VM3. You create a route table named RT1 that contains the routes shown in the table below:

| Next Hop Type | Next Hop IP Address | Address Prefix |
|---|---|---|
| Virtual Appliance | 10.0.0.4 | 10.0.2.0/24 |

You apply RT1 to Subnet1 and Subnet2. For each of the following statements, select Yes if the statement is true. Otherwise, select No.

1. The routing table allows connections from VM3 to VM1 and VM2.
2. The routing table allows connections from VM2 to VM1.
3. The routing table allows connections from VM1 to VM2.
1. **Yes:** RT1 is applied to Subnet1 and Subnet2, not to Subnet3 where VM3 resides, so traffic from VM3 to VM1 and VM2 follows the default intra-VNet routes. Because all three VMs are in the same virtual network, default routing lets VM3 connect to VM1 and VM2; VM3's IP forwarding is not needed for its own outbound traffic. The RT1 route affects traffic destined *for* Subnet2 from the associated subnets, which does not hinder VM3's outbound connections.
2. **No:** RT1 directs traffic destined for 10.0.2.0/24 (where VM2 resides) to VM3 (10.0.0.4) as the next hop. Per the answer given, VM2's traffic toward VM1 is sent to VM3, which does not forward it on because no corresponding route is configured for VM2 to reach VM1; although VM3 has IP forwarding enabled, it is not correctly configured in the network topology for this path to work.
3. **Yes:** VM1 (Subnet1) can reach VM2 (Subnet2) through VM3: RT1 directs traffic destined for 10.0.2.0/24 to VM3, and because IP forwarding is enabled on NIC3 and routing is enabled on VM3, VM3 forwards the traffic from VM1 to VM2.

**Why other options are incorrect:** The discussion shows some disagreement, with some users initially misinterpreting the impact of IP forwarding and route table association. The key is that connectivity is determined by the combination of route tables and subnet configurations, not by IP forwarding alone: enabling IP forwarding does not guarantee connectivity if the routes are not properly configured, and being in the same VNet does not override a route table applied to a subnet.
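For reference, RT1 as described could be built with Az.Network roughly as follows; the resource group, location, and the subnet association step are assumptions.

```powershell
# Route that sends traffic for Subnet2's prefix to VM3 as a virtual appliance
$route = New-AzRouteConfig -Name 'ToSubnet2' -AddressPrefix '10.0.2.0/24' `
    -NextHopType VirtualAppliance -NextHopIpAddress '10.0.0.4'

# Route table RT1 containing that route
$rt = New-AzRouteTable -ResourceGroupName RG1 -Name RT1 -Location eastus -Route $route

# Associate RT1 with Subnet1 (repeat for Subnet2), then push the change
$vnet = Get-AzVirtualNetwork -ResourceGroupName RG1 -Name VNet1
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'Subnet1' `
    -AddressPrefix '10.0.1.0/24' -RouteTable $rt
$vnet | Set-AzVirtualNetwork
```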
493
[View Question](https://www.examtopics.com/discussions/databricks/view/74289-exam-az-104-topic-1-question-26-discussion/) Your company's Azure subscription includes two Azure networks named VirtualNetworkA and VirtualNetworkB. VirtualNetworkA includes a VPN gateway that is configured to use static routing. A site-to-site VPN connection exists between your company's on-premises network and VirtualNetworkA. You have configured a point-to-site VPN connection to VirtualNetworkA from a workstation running Windows 10. After configuring virtual network peering between VirtualNetworkA and VirtualNetworkB, you confirm that you are able to access VirtualNetworkB from the company's on-premises network. However, you cannot establish a connection to VirtualNetworkB from the Windows 10 workstation. You must ensure a connection to VirtualNetworkB can be established from the Windows 10 workstation. Solution: You download and re-install the VPN client configuration package on the Windows 10 workstation. Does the solution meet the goal? A. Yes B. No
B. No

The suggested solution of reinstalling the VPN client configuration package on the Windows 10 workstation will not solve the connectivity problem to VirtualNetworkB. While reinstalling the client might refresh the connection to VirtualNetworkA, the issue lies in the routing between VirtualNetworkA and VirtualNetworkB from the workstation's perspective. The on-premises network can access VirtualNetworkB, indicating the peering is correctly configured; the problem is specifically the client's routing configuration after the peering was established. Reinstalling the client does not update the routing tables or address the underlying networking issue; the client needs updated routing information to reach VirtualNetworkB after the peering.

The discussion shows disagreement on the correct answer: some users believe reinstalling the VPN client will resolve the issue, while others correctly identify that it will not address the routing problem between the virtual networks.
494
[View Question](https://www.examtopics.com/discussions/databricks/view/75822-exam-az-104-topic-6-question-13-discussion/) You have an Azure subscription that contains the following users in an Azure Active Directory tenant named contoso.onmicrosoft.com: [Image](https://www.examtopics.com/assets/media/exam-media/04223/0055100001.png) User1 creates a new Azure Active Directory tenant named external.contoso.onmicrosoft.com. You need to create new user accounts in external.contoso.onmicrosoft.com. Solution: You instruct User1 to create the user accounts. Does that meet the goal? A. Yes B. No
A. Yes

**Explanation:** User1 created the new Azure Active Directory tenant (external.contoso.onmicrosoft.com). Instructing User1 to create user accounts within this *newly created* tenant is a valid solution, assuming User1 has the necessary permissions in the *new* tenant; the user who creates a tenant becomes its initial administrator. The question focuses on whether instructing User1 is a valid way to create accounts in the new tenant, not on whether User1 inherently *has* the permissions.

**Why other options are incorrect:** B (No) is incorrect under the solution's assumption that User1 has the necessary permissions in the newly created tenant; the question provides no information to disprove this assumption.

**Note:** The discussion highlights disagreement, with one user expressing disbelief ("ARE YOU GUYS HIGH??") at the suggested answer. This suggests a potential ambiguity or missing information in the original problem statement concerning User1's permissions within external.contoso.onmicrosoft.com. The answer here assumes the solution's implicit claim that User1 has those permissions; a more complete question would clarify User1's role and permissions within the new tenant.