Timed Mode Bonus Test – AZ-104 Azure Administrator Flashcards

1
Q

You are managing an Azure Subscription named Tagaytay-Subscription. The subscription has multiple resource groups that are used by three departments in your organization.

You have been asked to send a usage report of each department to the accounting department.

Which four actions should you perform in sequence?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

A. Apply a tag to each Azure resource
B. Download the usage data
C. Filter the items by tag
D. Navigate to cost analysis and select a scope

A
  1. A. Apply a tag to each Azure resource
  2. D. Navigate to cost analysis and select a scope
  3. C. Filter the items by tag
  4. B. Download the usage data

Explanation:
Azure Cost Management + Billing is a suite of tools provided by Microsoft that helps you analyze, manage, and optimize the costs of your workloads. Using the suite helps ensure that your organization is taking advantage of the benefits provided by the cloud.

You use Azure Cost Management + Billing features to:

Conduct billing administrative tasks such as paying your bill
Manage billing access to costs
Download cost and usage data that was used to generate your monthly invoice
Proactively apply data analysis to your costs
Set spending thresholds
Identify opportunities for workload changes that can optimize your spending

You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name Environment and the value Production to all the resources in production.

To download the usage report of each department, you must first assign a tag for each resource. These tags would help you filter the view in cost analysis. Take note that if you assign a tag by resource group, you won’t be able to classify which department uses that resource since each department uses resources in different resource groups.

If you’ve already assigned tags to your resources, open Cost Management + Billing in the Azure portal, select a scope, and then select Cost analysis in the menu. Add a filter and filter by “Tag”. Then download the report by selecting Export and choosing Download data to CSV or Download data to Excel. The Excel download provides more context on the view used to generate it, such as the scope, query configuration, total, and date generated.
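As an illustrative sketch, the tagging step could be scripted with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-resource packages; the subscription ID and resource ID are hypothetical, the request body mirrors the REST Tags API, and older SDK versions name the operation create_or_update_at_scope without the begin_ prefix):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hypothetical subscription ID and resource ID, for illustration only.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/rg-web"
    "/providers/Microsoft.Compute/virtualMachines/td-vm1"
)

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Replace the tags on the resource with a Department tag so that cost
# analysis can later be filtered by this tag.
client.tags.begin_create_or_update_at_scope(
    RESOURCE_ID,
    {"properties": {"tags": {"Department": "Accounting"}}},
)
```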

Hence, the correct sequence is:

  1. Apply a tag to each Azure resource
  2. Navigate to cost analysis and select a scope
  3. Filter the items by tag
  4. Download the usage data

References:

https://docs.microsoft.com/en-us/azure/cost-management-billing/costs/quick-acm-cost-analysis?tabs=azure-portal

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources?tabs=json

Check out this Azure Pricing Cheat Sheet:

https://tutorialsdojo.com/azure-pricing/

2
Q

Your company has an Azure Subscription that contains a resource group named TD-Cebu.

TD-Cebu contains the following resources:
[AZ104-D-02 question image]

What should you do first to delete the TD-Cebu resource group?

A. Delete all the resource lock and backup data in TD-RSV.
B. Stop TD-VM and delete the resource lock of TD-VNET.
C. Change the resource lock type of TD-VNET and modify the backup configuration of TD-VM.
D. Set the resource lock of TD-SA to Delete.

A

A. Delete all the resource lock and backup data in TD-RSV.

Explanation:

A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data while minimizing management overhead.

In order to delete the TD-Cebu resource group, you must first delete/remove the following:

  1. Resource Lock

– If the lock level is set to Delete or Read-only, the users in your organization are prevented from accidentally deleting or modifying critical resources. The lock overrides any permissions the user might have.

  2. Backup data in Recovery Services vault

– If you try to delete a vault that contains backup data, you’ll encounter a message: “Vault cannot be deleted as there are existing resources within the vault. Please ensure there are no backup items, protected servers, or backup management servers associated with this vault.”

After you delete the lock and backup data, you can then delete the TD-Cebu resource group.
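As an illustrative sketch with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-resource packages; the subscription ID is hypothetical), the lock-removal step could look like this. The backup data must still be deleted from TD-RSV separately:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = ManagementLockClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Enumerate and remove every lock scoped to the resource group so that
# the group (and the resources in it) become deletable.
for lock in client.management_locks.list_at_resource_group_level("TD-Cebu"):
    client.management_locks.delete_at_resource_group_level("TD-Cebu", lock.name)
```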

Hence, the correct answer is: Delete all the resource lock and backup data in TD-RSV.

The option that says: Stop TD-VM and delete the resource lock of TD-VNET is incorrect because you must also delete the backup data of TD-RSV to delete the resource group. Take note that you can’t delete a vault that contains backup data.

The option that says: Set the resource lock of TD-SA to Delete is incorrect because even if you change the resource lock of TD-SA, you still won’t be able to delete the TD-Cebu resource group. You must first delete all the resource lock and backup data in TD-RSV to delete the resource group.

The option that says: Change the resource lock type of TD-VNET and modify the backup configuration of TD-VM is incorrect because changing the lock type of TD-VNET to Delete or Read-only still won’t allow you to delete the resource group. To accomplish the requirements in the scenario, you need to remove the resource lock and delete all the backup data in TD-RSV.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-delete-vault

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources?tabs=json

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

3
Q

You have an Azure subscription named TDSub1.

There is a requirement to assess your network infrastructure using Azure Network Watcher. You plan to do the following activities:

Capture information about the IP traffic going to and from a network security group.

Diagnose connectivity issues to or from an Azure virtual machine

Which feature should you use for each activity?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. Capture information about the IP traffic going to and from a network security group:
    A. IP Flow Verify
    B. NSG Flow Logs
    C. Next Hop
    D. Traffic Analytics
  2. Diagnose connectivity issues to or from an Azure virtual machine:
    A. IP Flow Verify
    B. Next Hop
    C. Traffic Analytics
    D. NSG Flow Logs
A
  1. B. NSG Flow Logs
  2. A. IP Flow Verify

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products which includes Virtual Machines, Virtual Networks, Application Gateways, Load balancers, etc.

Network security group (NSG) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an NSG. Flow data is sent to Azure Storage accounts from where you can access it as well as export it to any visualization tool, SIEM, or IDS of your choice.

Flow logs are the source of truth for all network activity in your cloud environment. Whether you’re an upcoming startup trying to optimize resources or a large enterprise trying to detect intrusion, Flow logs are your best bet. You can use it for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.

IP flow verify checks if a packet is allowed or denied to or from a virtual machine. If the packet is denied by a security group, the name of the rule that denied the packet is returned.

IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine.

Therefore, you have to use the NSG flow logs to capture information about the IP traffic going to and from a network security group.

Conversely, to diagnose connectivity issues to or from an Azure virtual machine, you need to use IP flow verify.
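For illustration, IP flow verify can also be invoked programmatically. The sketch below uses the Azure SDK for Python (assuming the azure-identity and azure-mgmt-network packages; the Network Watcher instance, subscription ID, resource IDs, ports, and addresses are all hypothetical, and model/method names follow recent SDK versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VerificationIPFlowParameters

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Ask Network Watcher whether an inbound RDP packet to the VM would be
# allowed or denied by the effective NSG rules.
result = client.network_watchers.begin_verify_ip_flow(
    "NetworkWatcherRG",
    "NetworkWatcher_southeastasia",
    VerificationIPFlowParameters(
        target_resource_id=(
            f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/TD-RG"
            "/providers/Microsoft.Compute/virtualMachines/TD-VM1"
        ),
        direction="Inbound",
        protocol="TCP",
        local_port="3389",
        remote_port="*",
        local_ip_address="10.0.0.4",
        remote_ip_address="203.0.113.10",
    ),
).result()

print(result.access, result.rule_name)  # e.g. "Deny" plus the blocking rule
```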

Next hop is incorrect because this simply helps you determine if traffic is being directed to the intended destination, or whether the traffic is being sent nowhere.

Traffic analytics is incorrect because this just allows you to process your NSG Flow Log data that enables you to visualize, query, analyze, and understand your network traffic.

References:

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-monitoring-overview

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview

Check out this Azure Virtual Network Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-network-vnet

4
Q

You have an Azure subscription that contains hundreds of network resources.

You need to recommend a solution that will allow you to monitor resources in one centralized console for network monitoring.

What solution should you recommend?

A. Azure Traffic Manager
B. Azure Virtual Network
C. Azure Monitor Network Insights
D. Azure Advisor

A

C. Azure Monitor Network Insights

Explanation:
Azure Monitor maximizes the availability and performance of your applications and services by delivering a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.

Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics, as well as other network diagnostic features. Key features of Network Insights:

– Single console for network monitoring

– No agent configuration required

– Access to health state, metrics, alerts, & data from traffic and connectivity monitoring tools in one place

– View network topology with functional dependencies for simpler troubleshooting

– Access resource metrics to debug issues without writing queries or authoring workbooks

Hence, the correct answer is: Azure Monitor Network Insights.

Azure Virtual Network is incorrect because this service simply allows your resources, such as virtual machines, to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Azure Traffic Manager is incorrect because this is simply a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions while providing high availability and responsiveness. However, you cannot use this to monitor your network resources.

Azure Advisor is incorrect because this service just helps you improve the cost-effectiveness, performance, reliability (formerly called high availability), and security of your Azure resources.

References:

https://docs.microsoft.com/en-us/azure/azure-monitor/

https://docs.microsoft.com/en-us/azure/azure-monitor/insights/network-insights-overview

Check out this Azure Monitor Cheat Sheet:

https://tutorialsdojo.com/azure-monitor/

5
Q

Your company plans to migrate your on-premises servers to Azure.

There is a requirement wherein the users must use the suffix @tutorialsdojo.com instead of the tutorialsdojo.onmicrosoft.com domain name.

Which four actions should you perform in sequence?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

A. Add Tutorialsdojo.com to Azure AD.
B. Provision an Azure Active Directory
C. Add the Azure AD DNS information to your domain provider
D. Verify Tutorialsdojo.com

A
  1. B. Provision an Azure Active Directory
  2. A. Add Tutorialsdojo.com to Azure AD.
  3. C. Add the Azure AD DNS information to your domain provider
  4. D. Verify Tutorialsdojo.com

Explanation:
Azure Active Directory (Azure AD) is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access resources in:

– External resources, such as Microsoft Office 365, the Azure portal, and thousands of other SaaS applications.

– Internal resources, such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization.

Microsoft Online business services, such as Office 365 or Microsoft Azure, require Azure AD for sign-in and to help with identity protection. If you subscribe to any Microsoft Online business service, you automatically get Azure AD with access to all the free features.

Every new Azure AD tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can’t change or delete the initial domain name, but you can add your organization’s names. Adding custom domain names helps you to create user names that are familiar to your users, such as azure@tutorialsdojo.com.

You can verify your custom domain name by using the following steps in order:

  1. Provision an Azure Active Directory

– Sign in to the Azure portal for your directory, using an account with the Owner role for the subscription. The person who creates the tenant is automatically the Global administrator for that tenant. The Global administrator can add additional administrators to the tenant.

  2. Add Tutorialsdojo.com to Azure AD.

– After you create your directory, you can add your custom domain name. Head over to your Azure Active Directory resource, look for custom domain names, click add custom domain, and enter tutorialsdojo.com as the domain name.

  3. Add the Azure AD DNS information to your domain provider

– After you add your custom domain name to Azure AD, you must return to your domain registrar and add the Azure AD DNS information from your copied TXT file. Creating this TXT record for your domain verifies ownership of your domain name.

– Go back to your domain registrar and create a new TXT record for your domain based on your copied DNS information. Set the time to live (TTL) to 3600 seconds (60 minutes), and then save the record.

  4. Verify Tutorialsdojo.com

– After you register your custom domain name, make sure it’s valid in Azure AD. The propagation from your domain registrar to Azure AD can be instantaneous or it can take a few days, depending on your domain registrar.

– Head over to your custom domain name and click verify. After you’ve verified your custom domain name, you can delete your verification TXT or MX file.
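Before clicking verify, you can confirm that the TXT record has propagated. A minimal sketch, assuming the dnspython package (the MS=... value is the kind of record Azure AD issues; the actual value comes from the portal):

```python
import dns.resolver

# Look up the domain's TXT records and print any Azure AD verification
# record (Azure AD issues values of the form MS=msXXXXXXXX).
for record in dns.resolver.resolve("tutorialsdojo.com", "TXT"):
    text = b"".join(record.strings).decode()
    if text.startswith("MS="):
        print("Azure AD verification record found:", text)
```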

Hence, the correct order of deployment is:

  1. Provision an Azure Active Directory
  2. Add Tutorialsdojo.com to Azure AD
  3. Add the Azure AD DNS information to your domain provider
  4. Verify Tutorialsdojo.com

References:

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/add-custom-domain

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-whatis

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

6
Q

Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Create a conditional access policy and enforce grant control.

Does the solution meet the goal?

A. No
B. Yes

A

B. Yes

Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications

Going back to the scenario, the requirement is to enforce a policy requiring the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce grant access control. If you check the image above, the grant control satisfies this requirement.
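As a hedged sketch, such a policy corresponds to a Microsoft Graph payload along these lines (assuming the requests package; the bearer token and DevOps group object ID are hypothetical placeholders, and domainJoinedDevice is the built-in control for hybrid Azure AD joined devices):

```python
import requests

policy = {
    "displayName": "Require MFA + hybrid join for DevOps from untrusted locations",
    "state": "enabled",
    "conditions": {
        "users": {"includeGroups": ["<DevOps-group-object-id>"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {"includeLocations": ["All"], "excludeLocations": ["AllTrusted"]},
    },
    # Grant controls enforce BOTH requirements; session controls could not.
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "domainJoinedDevice"],
    },
}

requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": "Bearer <token>"},
    json=policy,
)
```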

Hence, the correct answer is: Yes.

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

7
Q

Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Create a conditional access policy and enforce session control.

Does the solution meet the goal?

A. No
B. Yes

A

A. No

Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications

Going back to the scenario, the requirement is to enforce a policy requiring the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce session access control. If you check the image above, the session control doesn’t have options to require the use of MFA or hybrid Azure AD joined devices.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

8
Q

Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.

You have been tasked to implement a conditional access policy.

The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.

Solution: Go to the security option in Azure AD and configure MFA.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.

With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.

There are two types of access controls in a conditional access policy:

Grant – enforces grant or block access to resources.
Session – enables limited experiences within specific cloud applications

Going back to the scenario, the requirement is to enforce a policy requiring the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to configure MFA in Azure AD security. If you check the question again, there is a line that says “You have been tasked to implement a conditional access policy.” This means that you must create a conditional access policy and enforce grant control. Also, configuring MFA alone does not enable the option to require the use of a hybrid Azure AD joined device.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

9
Q

Your company plans to implement a hybrid Azure Active Directory that will include the following users:

[AZ104D-09 question image]

You have been assigned to modify the Department and UsageLocation attributes of the given users.

Which attributes can you modify from Azure AD?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. Department
    A. Dev2 and Dev3 Only
    B. Dev1 and Dev2 only
    C. Dev1, Dev2 and Dev3
    D. Dev1 only
  2. UsageLocation
    A. Dev2 and Dev3 only
    B. Dev1 and Dev4 only
    C. Dev1 only
    D. Dev1, Dev2, Dev3, and Dev4
A
  1. B. Dev1 and Dev2 only
  2. D. Dev1, Dev2, Dev3, and Dev4

Explanation:
Azure Active Directory (Azure AD) is a multi-tenant, cloud-based identity and access management service. By implementing hybrid Azure AD joined devices, organizations with existing Active Directory implementations can benefit from some of the functionality provided by Azure Active Directory. These devices are joined to your on-premises Active Directory and registered with Azure Active Directory.

To achieve a hybrid identity with Azure AD, one of three authentication methods can be used, depending on your scenarios. The three methods are:

Password hash synchronization (PHS)
Pass-through authentication (PTA)
Federation (AD FS)

These authentication methods also provide single-sign-on capabilities. Single-sign on automatically signs your users in when they are on their corporate devices, connected to your corporate network.

Based on the given scenario, you need to modify the Department and UsageLocation attributes from Azure Active Directory. Once you encounter this kind of scenario, the most important info to look at is the source of the user.

There are three sources:

Microsoft account
Windows Server AD
Azure AD

Keep in mind that you cannot modify the Job Info (which includes the Department attribute) of a user from Azure AD if the user’s source is Windows Server AD. To update the information of users from this source, you must do it in Windows Server AD. Lastly, since UsageLocation is an attribute of Azure Active Directory, you can modify it for all users.
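For illustration, a hedged sketch of updating the cloud-mastered UsageLocation attribute through Microsoft Graph (assuming the requests package; the bearer token and user principal name are hypothetical):

```python
import requests

# UsageLocation lives in Azure AD, so this works for all four users; a
# similar PATCH of "department" on a Windows Server AD-sourced user would
# not stick, since that attribute must be changed on-premises and synced.
requests.patch(
    "https://graph.microsoft.com/v1.0/users/dev3@tutorialsdojo.com",
    headers={"Authorization": "Bearer <token>"},
    json={"usageLocation": "PH"},  # two-letter ISO country code
)
```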

Therefore, the correct answers are:

– Department = Dev1 and Dev2 only

– UsageLocation = Dev1, Dev2, Dev3, and Dev4

References:

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal

https://docs.microsoft.com/en-us/azure/active-directory/devices/concept-azure-ad-join-hybrid

https://docs.microsoft.com/en-us/azure/active-directory/devices/hybrid-azuread-join-plan

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

10
Q

Your company created several Azure virtual machines and a file share in the subscription TD-Boracay. The VMs are all part of the same virtual network.

You have been assigned to manage the on-premises Hyper-V server replication to Azure.

To support the planned deployment, you will need to create additional resources in TD-Boracay.

Which of the following options should you create?

A. Hyper-V site
B. Azure Recovery Services Vault
C. Azure Storage Account
D. Replication Policy
E. Azure ExpressRoute
F. VNet Service Endpoint

A

A. Hyper-V site
B. Azure Recovery Services Vault
D. Replication Policy

Explanation:
Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. It gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.

Hyper-V is Microsoft’s hardware virtualization product. It lets you create and run a software version of a computer called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time.

A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations.

A replication policy defines the settings for the retention history of recovery points. The policy also defines the frequency of app-consistent snapshots.

To set up disaster recovery of on-premises Hyper-V VMs to Azure, you should complete the following steps:

Select your replication source and target – to prepare the infrastructure, you will need to create a Recovery Services vault. After you create the vault, you can accomplish the protection goal, as shown in the image above.
Set up the source replication environment, including on-premises Site Recovery components and the target replication environment – to set up the source environment, you need to create a Hyper-V site and add to that site the Hyper-V hosts containing the VMs that you want to replicate. The target environment will be the subscription and the resource group in which the Azure VMs will be created after failover.
Create a replication policy
Enable replication for a VM
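The first of these steps, creating the Recovery Services vault, could be sketched with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-recoveryservices packages; the subscription ID, resource group, vault name, and region are hypothetical, and method names follow recent SDK versions):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Sku, Vault, VaultProperties

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = RecoveryServicesClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create the Recovery Services vault that Site Recovery will replicate into.
client.vaults.begin_create_or_update(
    "TD-Boracay-RG",  # hypothetical resource group
    "TD-Vault",
    Vault(
        location="southeastasia",
        sku=Sku(name="Standard"),
        properties=VaultProperties(),
    ),
).result()
```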

Hence, the correct answers are:

– Hyper-V site

– Azure Recovery Services Vault

– Replication Policy

Azure Storage Account is incorrect because before you can create an Azure file share, you need to create a storage account first. Instead of creating a storage account again, you should set up a Hyper-V site.

Azure ExpressRoute is incorrect because this service is simply used to establish a private connection between your on-premises data center or corporate network to your Azure cloud infrastructure. It does not have the capability to replicate the Hyper-V server to Azure.

VNet Service Endpoint is incorrect because this option will only remove public internet access to resources and allow traffic only from your virtual network. Remember that the main requirement is to replicate the Hyper-V server to Azure. Therefore, this option wouldn’t satisfy the requirement.

References:

https://docs.microsoft.com/en-us/azure/site-recovery/tutorial-prepare-azure-for-hyperv

https://docs.microsoft.com/en-nz/azure/site-recovery/hyper-v-azure-tutorial

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

11
Q

You have been assigned to manage the following Azure resources:
[AZ104-D-11 image]

These resources are used by the analytics, development, and operations teams.

You need to track the resource consumption and prevent the deletion of resources.

To which resources can you apply tags and locks?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. Tags
    A. tdvm, tdsa, tdsub, and tdmg
    B. tdvm, tdsa
    C. tdvm, tdsa, tdsub
    D. tdvm, tdsa, and tdmg
  2. Locks
    A. tdvm, tdsa, tdsub, and tdmg
    B. tdvm, tdsa, tdsub
    C. tdvm, tdsa, and tdmg
    D. tdvm, tdsa
A
  1. C. tdvm, tdsa, tdsub
  2. B. tdvm, tdsa, tdsub

Explanation:
Tags are used to logically organize your Azure resources, resource groups, and subscriptions into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name Environment and the value Production to all the resources in production. You can also use tags to categorize costs by runtime environment, such as the billing usage for VMs running in the production environment.

Locks, on the other hand, are used to prevent other users in your organization from accidentally deleting or modifying critical resources. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent.

The lock level can be set in two ways:

CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.
ReadOnly means authorized users can read a resource, but they can’t delete or update the resource.
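As a hedged sketch with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-resource packages; the subscription ID is hypothetical), a CanNotDelete lock applied at subscription scope, which tdvm and tdsa would then inherit, looks like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient
from azure.mgmt.resource.locks.models import ManagementLockObject

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = ManagementLockClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Locks can target resources, resource groups, and subscriptions such as
# tdsub, but not management groups such as tdmg.
client.management_locks.create_or_update_at_subscription_level(
    "no-delete",
    ManagementLockObject(level="CanNotDelete"),
)
```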

Going back to the question, the analytics, development, and operations teams are able to use the resources given in the table. Your task is to identify the resources to which you can apply tags and locks. As discussed above, the only resource to which we cannot apply a tag or lock is the management group. Azure management groups are containers that help you manage access, policy, and compliance across multiple subscriptions.

Therefore, the correct answers are:

– Tags = tdvm, tdsa, and tdsub

– Locks = tdvm, tdsa, and tdsub

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/tag-resources

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources

Check out these Azure Cheat Sheets:

https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

12
Q

Your company has five branch offices and an Azure Active Directory to centrally manage all identities and application access.

You have been tasked with granting permission to local administrators to manage users and groups within their scope.

What should you do?

A. Assign an Azure AD role.
B. Create an administrative unit.
C. Assign an Azure role.
D. Create management groups.

A

B. Create an administrative unit.

Explanation:
Azure Active Directory (Azure AD) enterprise identity service provides single sign-on, multifactor authentication, and conditional access to guard against 99.9 percent of cybersecurity attacks. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. Take note that it can only contain users and groups. Also, in order to assign roles at resource scope, you need to have Azure AD Premium P1 or P2 licenses.

For more granular administrative control in Azure Active Directory (Azure AD), you can assign an Azure AD role with a scope limited to one or more administrative units.

Administrative units limit a role’s permissions to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists, allowing them to manage users only in the region for which they are responsible.
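As a hedged sketch, the equivalent Microsoft Graph calls look roughly like this (assuming the requests package; the bearer token, unit name, and user object ID are hypothetical):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token>"}

# Create an administrative unit for one branch office.
au = requests.post(
    f"{GRAPH}/directory/administrativeUnits",
    headers=headers,
    json={"displayName": "Branch-Manila"},
).json()

# Add a branch user to the unit; a role assigned to a local administrator
# over this unit is then limited to managing its members.
requests.post(
    f"{GRAPH}/directory/administrativeUnits/{au['id']}/members/$ref",
    headers=headers,
    json={"@odata.id": f"{GRAPH}/users/<branch-user-object-id>"},
)
```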

Hence, the correct answer is: Create an administrative unit.

The option that says: Assign an Azure AD role is incorrect because if you assign an administrative role to a user without scoping it to an administrative unit, the scope of that role is the entire directory.

The option that says: Create management groups is incorrect because these are just containers used to organize your resources and subscriptions. This option won’t help you grant permission to local administrators to manage users and groups.

The option that says: Assign an Azure role is incorrect because the requirement is to grant local administrators permission only in their respective offices. If you use an Azure role, the user will be able to manage other Azure resources. Therefore, you need to use administrative units so the administrators can only manage users in the region that they support.

References:

https://docs.microsoft.com/en-us/azure/active-directory/roles/admin-units-assign-roles

https://docs.microsoft.com/en-us/azure/active-directory/roles/administrative-units

Check out this Azure Active Directory Cheat Sheet:

https://tutorialsdojo.com/azure-active-directory-azure-ad/

13
Q

Your company has a web app hosted in Azure Virtual Machine.

You plan to create a backup of TD-VM1 but the backup pre-checks displayed a warning state.

What could be the reason?

A. The Recovery Services vault lock type is read-only.
B. The status of TD-VM1 is deallocated.
C. The TD-VM1 data disk is unattached.
D. The latest VM Agent is not installed in TD-VM1

A

D. The latest VM Agent is not installed in TD-VM1

Explanation:
Azure Virtual Machine is a service that provides on-demand, scalable computing resources with usage-based pricing. More broadly, a virtual machine behaves like a server: it is a computer within a computer that provides the user the same experience they would have on the host operating system itself. To protect your data, you can use Azure Backup to create recovery points that can be stored in geo-redundant recovery vaults.

A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations. These operations include taking on-demand backups, performing restores, and creating backup policies.

Backup Pre-Checks, as the name implies, check the configuration of your VMs for issues that may affect backups and aggregate this information so that you can view it directly from the Recovery Services Vault dashboard. It also provides recommendations for corrective measures to ensure successful file-consistent or application-consistent backups, wherever applicable.

Backup Pre-Checks are performed as part of your Azure VMs’ scheduled backup operations and result in one of the following states:

Passed: This state indicates that your VM’s configuration is conducive to successful backups and no corrective action needs to be taken.
Warning: This state indicates one or more issues in VM’s configuration that might lead to backup failures and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls in this class of issues.
Critical: This state indicates one or more critical issues in the VM’s configuration that will lead to backup failures and provides required steps to ensure successful backups. A network issue caused due to an update to the NSG rules of a VM, for example, will fail backups as it prevents the VM from communicating with the Azure Backup service and falls in this class of issues.

As stated above, the reason why backup pre-checks displayed a warning state is because of the VM agent. The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time.

If you have installed the agent manually or are deploying custom VM images, you will need to manually update them to include the new VM agent at image creation time. To check for the Azure VM Agent on your machine, open Task Manager and look for a process named WindowsAzureGuestAgent.exe.

Hence, the correct answer is: The latest VM Agent is not installed in TD-VM1.

The option that says: The Recovery Services vault lock type is read-only is incorrect because you can’t create a backup at all if the configured lock type is read-only. If you attempt to back up a virtual machine with a resource lock, the operation won’t be performed, and you will be notified to remove the lock first.

The option that says: The TD-VM1 data disk is unattached is incorrect because you don’t need to attach a data disk to the virtual machine when creating a backup. To enable VM backup, you need to have a VM agent and Recovery Services vault.

The option that says: The status of TD-VM1 is deallocated is incorrect because you can still create a backup even if the status of your virtual machine is stopped (deallocated).

References:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-vms-prepare

https://azure.microsoft.com/en-us/blog/azure-vm-backup-pre-checks

https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/backup/backup-azure-manage-windows-server.md

Check out this Azure Virtual Machine Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

14
Q

Your organization has two web applications running in different environments:
[az104-D-14 image]

You have been tasked to monitor the performance of the applications using Azure Application Insights.

The operation should have minimal changes to the code.

What should you do?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. TDWebApp1
    A. Install the Application Insights Agent
    B. Install the Windows Azure VM Agent
    C. Install the Application Insights SDK
    D. Install the Azure Monitor Agent
  2. TDWebApp2
    A. Install the Application Insights Agent
    B. Install the Azure Monitor Agent
    C. Install the Application Insights SDK
    D. Install the Windows Azure VM Agent
A
  1. A. Install the Application Insights Agent
  2. A. Install the Application Insights Agent

Explanation:
Application Insights is a feature of Azure Monitor that provides extensible application performance management (APM) and monitoring for live web apps. It supports a wide variety of platforms, including .NET, Node.js, Java, and Python, and works for apps hosted on-premises, in hybrid environments, or on any public cloud.

There are two ways to enable application monitoring for hosted applications:

  1. Agent-based application monitoring (Application Insights Agent)

– This method is the easiest to enable; you only need to install the Application Insights Agent, and code changes or advanced configurations are not required.

  2. Manually instrumenting the application through code (Application Insights SDK)

– The alternative approach is to install the Application Insights SDK. This means that you have to manage updates to the latest version of the packages yourself. This method is recommended if you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring.

The main requirement in the scenario is to use Azure Application Insights to track the performance of the applications. But the condition is to implement it with minimal changes in the code. That is why the first approach satisfies the requirement since you only need to install the agent in the machine.

Therefore, the correct answers are:

– TDWebApp1= Install the Application Insights Agent

– TDWebApp2 = Install the Application Insights Agent

The option that says: Install the Application Insights SDK is incorrect because, in order to implement this method, you will need to do some changes in the application code. Take note that the requirement is to implement monitoring with minor changes in the code.

The option that says: Install the Windows Azure VM Agent is incorrect because this won’t help you track the performance of the application. The VM agent is commonly used when you need to create a backup of the virtual machine. Therefore, this option is incorrect and won’t satisfy the requirement in the scenario.

The option that says: Install the Azure Monitor Agent is incorrect because it is already indicated in the scenario that you need to use Azure Application Insights to track the performance of the application. Also, Azure Application Insights is a feature of Azure Monitor. Hence, this method will not meet the given requirement in the scenario.

References:

https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview

https://docs.microsoft.com/en-us/azure/azure-monitor/app/azure-web-apps

https://docs.microsoft.com/en-us/azure/azure-monitor/app/status-monitor-v2-overview

Check out these Azure Cheat Sheets:

https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

15
Q

Your company’s eCommerce website is deployed in an Azure virtual machine named TD-BGC.

You created a backup of TD-BGC and implemented the following changes:

– Change the local admin password.

– Create and attach a new disk.

– Resize the virtual machine.

– Copy the log reports to the data disk.

You received an email that the admin restored TD-BGC using the replace existing configuration.

Which of the following options should you perform to bring back the changes in TD-BGC?

A. Resize the virtual machine.
B. Create and attach a new disk.
C. Change the local admin password.
D. Copy the log reports to the data disk.

A

D. Copy the log reports to the data disk.

Explanation:
Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.

Azure Backup provides several ways to restore a VM:

Create a new VM – quickly creates and gets a basic VM up and running from a restore point.
Restore disk – restores a VM disk, which can then be used to create a new VM.
Replace existing – restore a disk, and use it to replace a disk on the existing VM.
Cross-Region (secondary region) – restore Azure VMs in the secondary region, which is an Azure paired region.

The restore configuration that is given in the scenario is the replace existing option. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. The existing disks connected to the VM are replaced with the selected restore point.

The snapshot is copied to the vault, and retained in accordance with the retention policy. After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.

Since the VM is restored using the backup data, the new disk won’t have a copy of the log reports. To bring back the changes in the TD-BGC virtual machine, you will need to copy the log reports to the disk again.

Hence, the correct answer is: Copy the log reports to the data disk.

The option that says: Change the local admin password is incorrect because the new password will not be overridden with the old password using the restore VM option. Therefore, you can use the updated password to connect via RDP to the machine.

The option that says: Create and attach a new disk is incorrect because the new disk does not contain the log reports. Instead of creating a new disk, you should attach the existing data disk that contains the log reports.

The option that says: Resize the virtual machine is incorrect because the only changes that will be retained after rolling back are the VM size and the account password.

References:

https://docs.microsoft.com/en-us/azure/backup/backup-azure-arm-restore-vms

https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-first-look-arm

Check out these Azure Cheat Sheets:

https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

16
Q

Your company plans to store media assets in two Azure regions.

You are given the following requirements:

Media assets must be stored in multiple availability zones

Media assets must be stored in multiple regions

Media assets must be readable in the primary and secondary regions.

Which of the following data redundancy options should you recommend?

A. Locally redundant storage
B. Zone-redundant storage
C. Read-access geo-redundant storage
D. Geo-redundant storage

A

C. Read-access geo-redundant storage

Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. For applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

Take note that one of the requirements states that the media assets must be readable in both the primary and secondary regions. With Geo-redundant storage, your media assets are stored in multiple availability zones and multiple regions, but read access will only be available in the secondary region if you or Microsoft initiates a failover from the primary region to the secondary region.

In order to have read access in the primary and secondary region at all times without having the need to initiate a failover, you need to recommend Read-access geo-redundant storage.
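A hedged sketch of provisioning such an account with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-storage packages; the subscription ID, resource group, account name, and region are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Standard_RAGRS replicates to a paired secondary region and exposes a
# read-only secondary endpoint (<account>-secondary.blob.core.windows.net),
# so the assets stay readable in both regions without a failover.
client.storage_accounts.begin_create(
    "td-media-rg",
    "tdmediaassets01",  # must be globally unique
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),
        kind="StorageV2",
        location="southeastasia",
    ),
).result()
```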

Hence, the correct answer is: Read-access geo-redundant storage.

Locally redundant storage is incorrect because the media assets will only be stored in one physical location.

Zone-redundant storage is incorrect. It only satisfies one requirement which is to store the media assets in multiple availability zones. You still need to store your media assets in multiple regions which ZRS is unable to do.

Geo-redundant storage is incorrect because the requirement states that you need read access to the primary and secondary regions. With GRS, the data in the secondary region isn’t available for read access. You can only have read access in the secondary region if a failover from the primary region to the secondary region is initiated by you or Microsoft.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

Check out this Azure Storage Overview Cheat Sheet:

https://tutorialsdojo.com/azure-storage-overview/

Locally Redundant Storage (LRS) vs Zone-Redundant Storage (ZRS) vs Geo-redundant storage (GRS):

https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/

17
Q

Tutorials Dojo has a subscription named TDSub1 that contains the following resources:

AZ104-D-17 image

TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.

What should you do to connect TDVM1 to TDNET1?

Solution: You create a network interface in TD1 in the South East Asia region.

Does this meet the goal?

A. Yes
B. No

A

B. No

Explanation:
A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.

Remember these conditions and restrictions when it comes to network interfaces:

– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.

– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.

– When you delete a virtual machine, the network interface attached to it will not be deleted.

– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.

– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.

The solution proposed in the question is incorrect because the virtual network is not located in the same region as TDVM1. Take note that a virtual machine, virtual network and network interface must be in the same region or location.

You need to first redeploy TDVM1 from the South East Asia region to the Japan West region, and then create and attach a network interface to TDVM1 in the Japan West region.
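A hedged sketch of the NIC-creation half of that fix with the Azure SDK for Python (assuming the azure-identity and azure-mgmt-network packages; the subscription ID, resource group, and subnet name are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    NetworkInterface,
    NetworkInterfaceIPConfiguration,
    Subnet,
)

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
SUBNET_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/TD1"
    "/providers/Microsoft.Network/virtualNetworks/TDNET1/subnets/default"
)

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The NIC must live in the same region as both TDNET1 and the (redeployed)
# TDVM1, which is why Japan West is the only location that works here.
client.network_interfaces.begin_create_or_update(
    "TD1",
    "tdvm1-nic",
    NetworkInterface(
        location="japanwest",
        ip_configurations=[
            NetworkInterfaceIPConfiguration(
                name="ipconfig1",
                subnet=Subnet(id=SUBNET_ID),
            )
        ],
    ),
).result()
```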

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface

Check out this Azure Virtual Machine Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

18
Q

Note: This item is part of a series of questions with the exact same scenario but with a different proposed answer. Each one in the series has a unique solution that may, or may not, comply with the requirements specified in the scenario.

Tutorials Dojo has a subscription named TDSub1 that contains the following resources:

AZ104-D-17 image

TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.

What should you do to connect TDVM1 to TDNET1?

Solution: You redeploy TDVM1 to the Japan West region and create a network interface in TD2 in the Japan West region.

Does this meet the goal?

A. Yes
B. No

A

A. Yes

Explanation:
A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.

Remember these conditions and restrictions when it comes to network interfaces:

– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.

– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.

– When you delete a virtual machine, the network interface attached to it will not be deleted.

– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.

– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.

Take note that resources inside a resource group can be in different regions. A resource group is only a logical grouping of resources, so it does not matter that the resource group is located in a different region.

Each NIC attached to a VM must exist in the same location and subscription as the VM. Each NIC must be connected to a VNet that exists in the same Azure location and subscription as the NIC. You can’t change the virtual network.

Therefore, you will need to redeploy TDVM1 to the Japan West region and create and attach a network interface in the Japan West region.

Hence, the correct answer is: Yes.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface

Check out this Azure Virtual Machine Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

19
Q

Tutorials Dojo has a subscription named TDSub1 that contains the following resources:

AZ104-D-17 image

TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.

What should you do to connect TDVM1 to TDNET1?

Solution: You create a network interface in TD1 in the Japan West region.

Does this meet the goal?

A. Yes
B. No

A

B. No

Explanation:
A network interface enables an Azure Virtual Machine to communicate with the internet, Azure, and on-premises resources. When you create a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.

You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.

Remember these conditions and restrictions when it comes to network interfaces:

– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.

– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.

– When you delete a virtual machine, the network interface attached to it will not be deleted.

– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.

– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.

Take note, each NIC attached to a VM must exist in the same location and subscription as the VM. Each NIC must be connected to a VNet that exists in the same Azure location and subscription as the NIC. You can’t change the virtual network.

Since TDNET1 is located in a different region from TDVM1, you will need to redeploy TDVM1 to the Japan West region and then create and attach the network interface to TDVM1 in the Japan West region.

Hence, the correct answer is: No.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface

Check out this Azure Virtual Machine Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

You have an Azure subscription named Davao-Subscription1.

You have the following public load balancers deployed in Davao-Subscription1.

AZ104-D-20

You provisioned two groups of five virtual machines each, whose traffic must be load balanced to ensure that it is evenly distributed.

Which of the following health probes are not available for TD2?

A. HTTP
B. HTTPS
C. TCP
D. RDP

A

B. HTTPS

Explanation:
Azure Load Balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that is load balanced. Frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources.

Remember that although cheaper, load balancers with the basic SKU have limited features compared to a standard load balancer. Basic load balancers are only useful for testing in development environments but when it comes to production workloads, you need to upgrade your basic load balancer to standard load balancer to fully utilize the features of Azure Load Balancer.

Take note that the health probes of a basic load balancer support only the HTTP and TCP protocols.
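As a quick illustration, here is a minimal Python sketch of the two probe protocols a basic load balancer accepts, using the Probe model from azure-mgmt-network; the probe names, ports, and request path are hypothetical:

from azure.mgmt.network.models import Probe

# TCP probe: healthy when the TCP handshake on the port succeeds.
tcp_probe = Probe(
    name="tcp-probe",
    protocol="Tcp",
    port=443,
    interval_in_seconds=15,
    number_of_probes=2,
)

# HTTP probe: healthy when the request path returns HTTP 200.
http_probe = Probe(
    name="http-probe",
    protocol="Http",
    port=80,
    request_path="/health",
    interval_in_seconds=15,
    number_of_probes=2,
)

# protocol="Https" is accepted only on standard SKU load balancers, so an
# HTTPS probe on TD2 (basic SKU) would be rejected.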

Hence, the correct answer is: HTTPS.

HTTP and TCP are incorrect because these are supported protocols for health probes using basic load balancer.

RDP is incorrect because this protocol is not supported by Azure Load Balancer.

References:

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

https://docs.microsoft.com/en-us/azure/load-balancer/skus

Check out this Azure Load Balancer Cheat Sheet:

https://tutorialsdojo.com/azure-load-balancer/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

You have an Azure subscription that contains the following storage accounts:
AZ104-D-21

There is a compliance requirement wherein the data in TD1 and TD2 must be available if a single availability zone in a region fails. The solution must minimize costs and administrative effort.

What should you do first?

A. Upgrade TD1 and TD2 to general-purpose v2
B. Upgrade TD1 and TD2 to zone-redundant storage
C. Upgrade TD1 to geo-redundant storage
D. Configure lifecycle policy

A

A. Upgrade TD1 and TD2 to general-purpose v2

Explanation:
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. It is recommended for applications requiring high availability.
Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

The main requirement is that you need to ensure the data in TD1 and TD2 are available if a single availability zone fails while minimizing costs and administrative effort.

Between the redundancy options, zone-redundant storage fits the requirement of protecting your data by copying the data synchronously across three Azure availability zones. So even if a single availability zone fails, you still have two availability zones that are available.

Remember, ZRS is not a supported redundancy option under general-purpose v1. The first thing you need to do is to upgrade your storage account to general-purpose v2 and then upgrade the replication type to ZRS.
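A minimal Python sketch of that two-step change with the azure-mgmt-storage SDK might look like the following; the subscription, resource group, and account names are assumptions, and in some regions the LRS-to-ZRS change is a conversion that takes time to complete:

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 1: upgrade the account kind from general-purpose v1 to v2.
storage_client.storage_accounts.update(
    "td-rg", "td1storage", {"kind": "StorageV2"}
)

# Step 2: request the zone-redundant replication setting.
storage_client.storage_accounts.update(
    "td-rg", "td1storage", {"sku": {"name": "Standard_ZRS"}}
)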

Hence, the correct answer is: Upgrade TD1 and TD2 to general-purpose v2.

The option that says: Upgrade TD1 and TD2 to zone-redundant storage is incorrect because zone-redundant storage is not supported under general-purpose v1.

The option that says: Upgrade TD1 to geo-redundant storage is incorrect because one of the requirements is to minimize cost. With ZRS, you have already satisfied the data availability requirement.

The option that says: Configure lifecycle policy is incorrect because this is simply a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

https://docs.microsoft.com/en-us/azure/storage/common/redundancy-migration

Check out this Azure Storage Overview Cheat Sheet:

https://tutorialsdojo.com/azure-storage-overview/

Locally Redundant Storage (LRS) vs Zone-Redundant Storage (ZRS) vs Geo-Redundant Storage (GRS):

https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/

22
Q

Your organization has a domain named tutorialsdojo.com.

You want to host your records in Microsoft Azure.

Which three actions should you perform?

A. Copy the Azure DNS A records
B. Create an Azure private DNS zone
C. Create an Azure public DNS zone
D. Update the Azure NS records to your domain registrar
E. Update the Azure A records to your domain registrar
F. Copy the Azure DNS NS records

A

C. Create an Azure public DNS zone
D. Update the Azure NS records to your domain registrar
F. Copy the Azure DNS NS records

Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.

You can use Azure DNS to host your DNS domain and manage your DNS records. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.

Since you own tutorialsdojo.com through a domain name registrar, you can create a zone named tutorialsdojo.com in Azure DNS. As the owner of the domain, you can configure the name server (NS) records at your registrar so that internet users around the world are directed to your Azure DNS zone whenever they try to resolve tutorialsdojo.com.

The steps in registering your Azure public DNS records are as follows (a scripted sketch appears after the list):

Create your Azure public DNS zone
Retrieve name servers – Azure DNS gives name servers from a pool each time a zone is created.
Delegate the domain – Once the DNS zone gets created and you have the name servers, you’ll need to update the parent domain with the Azure DNS name servers.
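For illustration, a minimal Python sketch of the first two steps with the azure-mgmt-dns SDK; the subscription ID and resource group name are assumptions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

dns_client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 1: create the public DNS zone.
zone = dns_client.zones.create_or_update(
    "dns-rg", "tutorialsdojo.com", {"location": "global"}
)

# Step 2: retrieve the assigned name servers; these are the NS records
# you copy to the domain registrar to delegate the domain.
for ns in zone.name_servers:
    print(ns)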

Hence, the correct answers are:

– Create an Azure public DNS zone

– Update the Azure NS records to your domain registrar

– Copy the Azure DNS NS records

The options that say: Copy the Azure DNS A records and Update the Azure A records to your domain registrar are incorrect because you need to copy the name server records instead of the A records. An A record is a type of DNS record that points a domain to an IP address.

The option that says: Create an Azure private DNS zone is incorrect because this simply manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. The requirement states that the users must be able to access tutorialsdojo.com via the internet. You need to deploy an Azure public DNS zone instead.

References:

https://docs.microsoft.com/en-us/azure/dns/dns-overview

https://docs.microsoft.com/en-us/azure/dns/dns-getstarted-portal

Check out this Azure DNS cheat sheet:

https://tutorialsdojo.com/azure-dns/

23
Q

You plan to deploy the following public IP addresses in your Azure subscription shown in the following table:
AZ104-D-23

You need to associate a public IP address to a public Azure load balancer with an SKU of standard.

Which of the following IP addresses can you use?

A. TD1
B. TD3
C. TD3 and TD4
D. TD1 and TD2

A

B. TD3

Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

A public IP associated with a load balancer serves as an Internet-facing frontend IP configuration. The frontend is used to access resources in the backend pool. The frontend IP can be used for members of the backend pool to egress to the Internet.

Remember that the SKU of a load balancer and the SKU of its public IP address must match, meaning that if you have a load balancer with a standard SKU, you must also provision a public IP address with a standard SKU.
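To make the matching rule concrete, here is a minimal Python sketch (azure-mgmt-network) of provisioning a standard-SKU public IP suitable for a standard load balancer; the resource group, name, and region are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.public_ip_addresses.begin_create_or_update(
    "lb-rg",    # resource group (hypothetical)
    "td3-pip",  # public IP name (hypothetical)
    {
        "location": "southeastasia",
        "sku": {"name": "Standard"},
        # A standard-SKU public IP must use static assignment.
        "public_ip_allocation_method": "Static",
    },
)
print(poller.result().ip_address)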

Hence, the correct answer is: TD3.

The options that say: TD1 and: TD1 and TD2 are incorrect because both of these public IP addresses have a basic SKU. You must provision a public IP address with a standard SKU so you can associate it with a standard public load balancer.

The option that says: TD3 and TD4 is incorrect because you can only create a standard public IP address with an assignment of static.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-addresses

https://docs.microsoft.com/en-us/azure/virtual-network/ip-services/configure-public-ip-load-balancer

Check out this Azure Load Balancer Cheat Sheet:

https://tutorialsdojo.com/azure-load-balancer/

24
Q

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

– You can rehydrate blob data in the archive tier without costs.

– You can access your blob data that is in the archive tier.

– You can rehydrate blob data in the archive tier instantly.
A


Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:

Hot – Optimized for storing data that is accessed frequently.

Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).

While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.

To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.

A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.
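For illustration, here is a minimal Python sketch of rehydrating an archived blob with the azure-storage-blob SDK; the connection string, container, and blob names are hypothetical:

from azure.storage.blob import BlobServiceClient, RehydratePriority

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client("backups", "2023-report.parquet")

# Move the blob back to an online tier. This starts an asynchronous
# rehydration that can take hours and is billed for the read transaction
# and data retrieval size.
blob.set_standard_blob_tier("Hot", rehydrate_priority=RehydratePriority.STANDARD)

# While rehydration is pending, archive_status reports the in-progress state.
props = blob.get_blob_properties()
print(props.blob_tier, props.archive_status)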

The statement that says: You can rehydrate blob data in the archive tier without costs is incorrect. You are billed for data read transactions and data retrieval size (per GB).

The statement that says: You can rehydrate blob data in the archive tier instantly is incorrect. Rehydrating a blob from the archive tier can take several hours to complete.

The statement that says: You can access your blob data that is in the archive tier is incorrect because blob data stored in the archive tier is considered to be offline and can't be read or modified.

25
Q

You have an Azure subscription named Davao-Subscription that contains an Azure file share named Baguio-Share.

You have several Azure virtual machines that are domain-joined to an on-premises Active Directory domain, and a site-to-site VPN connection for cross-premises connectivity.

There is a requirement to replace your on-premises file server with Baguio-Share. Your domain-joined machines must be able to mount Baguio-Share using your Active Directory credentials.

Which four actions should you perform in sequence?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

A. Assign share and directory permissions
B. Sync on-premises AD with Azure AD Connect
C. Enable AD DS authentication
D. Mount file share with AD credentials

3
	
4
	
1
	
2
A
  1. B. Sync on-premises AD with Azure AD Connect
  2. C. Enable AD DS authentication
  3. A. Assign share and directory permissions
  4. D. Mount file share with AD credentials

Explanation:
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-prem AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control.

To enable AD DS authentication, you must do the following in sequence:

  1. Sync on-premises AD with Azure AD Connect

– Identities used for access must be synced to Azure AD. Only hybrid users that exist in both on-premises AD DS and Azure AD can be authenticated and authorized for Azure file share access.

  2. Enable AD DS authentication

– To enable AD DS authentication over SMB for Azure file shares, you need to register your storage account with AD DS and then set the required domain properties on the storage account.

– You can think of this process as if it were like creating an account representing an on-premises Windows file server in your AD DS.

  3. Assign share and directory permissions

– After enabling AD DS authentication, you must configure share-level permissions in order to get access to your file shares. The Azure RBAC share-level permissions act as a high-level gatekeeper that determines whether a user can access the share.

– With directory permissions, you can configure proper Windows ACLs at the root, directory, or file level, to take advantage of granular access control.

  4. Mount file share with AD credentials

Hence, the correct sequence is:

  1. Sync on-premises AD with Azure AD Connect
  2. Enable AD DS authentication
  3. Assign share and directory permissions
  4. Mount file share with AD credentials

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-identity-auth-active-directory-enable

Check out this Azure Files Cheat Sheet:

https://tutorialsdojo.com/azure-file-storage/

Azure Blob vs Disk vs File Storage:

https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

26
Q
A

Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

Locking of resources overrides the permissions of the users in your organization. It is mainly used to prevent unexpected changes such as modification and deletion of critical resources. Remember that when you apply a lock at a parent scope, all resources within that scope inherit the same lock.

You can set the lock level to CanNotDelete or ReadOnly. In the Azure Portal, the locks are called Delete and Read-only respectively.

– CanNotDelete means authorized users can still read and modify a resource, but they can’t delete the resource.

– ReadOnly means authorized users can read a resource, but they can’t delete or update the resource.

A resource group is just a container for your resources. You decide which resources belong to different resource groups. Take note that if you move a resource to a different resource group, the location of the resource would not change.

The following statements are correct because you can move the TD-WebApp2 to the existing resource groups:

– You can move TD-WebApp2 to TD-RG3.

– You can move TD-WebApp2 to TD-RG5.

The statement that says: You can move TD-WebApp2 to TD-RG1 is incorrect because the lock type of the resource group is set to read-only. This means that users can only read a resource, but they can’t delete or update the resource. If you try to move TD-WebApp2 to TD-RG1, you’d receive an error message “Moving resources failed”. In order to move the web app, you must delete the read-only lock type.

References:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources

Check out this Azure App Service Cheat Sheet:

https://tutorialsdojo.com/azure-app-service/

27
Q

You deployed an Ubuntu server using an Azure virtual machine.

You need to monitor the system performance metrics and log events.

Which of the following options would you use?

A. Boot diagnostics
B. Linux Diagnostic Extension
C. Azure Performance Diagnostics VM Extension
D. Connection monitor

A

B. Linux Diagnostic Extension

Explanation:
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. It collects guest metrics into Azure Monitor Metrics and sends guest logs and metrics to Azure storage for archiving.

Azure Performance Diagnostics VM Extension helps collect performance diagnostic data from Windows VMs. The extension performs analysis and provides a report of findings and recommendations to identify and resolve performance issues on the virtual machine.

The Linux Diagnostic Extension will help you monitor the health of a Linux VM running on Microsoft Azure. It has the following capabilities:

– Collects system performance metrics from the VM and stores them in a specific table in a designated storage account.

– Retrieves log events from syslog and stores them in a specific table in the designated storage account.

– Enables users to customize the data metrics that are collected and uploaded.

– Enables users to customize the syslog facilities and severity levels of events that are collected and uploaded.

– Enables users to upload specified log files to a designated storage table.

– Supports sending metrics and log events to arbitrary EventHub endpoints and JSON-formatted blobs in the designated storage account.

With this extension, you can now monitor the system performance metrics and log events of the virtual machine.
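As a rough sketch, installing the extension through the azure-mgmt-compute SDK could look like the code below. The VM name, region, and the two JSON files holding a valid LAD configuration (the public settings and the protected storage credentials) are assumptions; the actual settings schema is described in the first reference:

import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("lad_settings.json") as f:
    settings = json.load(f)    # public settings (ladCfg, file paths, etc.)
with open("lad_protected.json") as f:
    protected = json.load(f)   # storage account name and SAS token

poller = compute_client.virtual_machine_extensions.begin_create_or_update(
    "vm-rg",       # resource group (hypothetical)
    "ubuntu-vm",   # VM name (hypothetical)
    "LinuxDiagnostic",
    {
        "location": "southeastasia",
        "publisher": "Microsoft.Azure.Diagnostics",
        "type_properties_type": "LinuxDiagnostic",
        "type_handler_version": "4.0",
        "auto_upgrade_minor_version": True,
        "settings": settings,
        "protected_settings": protected,
    },
)
poller.result()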

Hence, the correct answer is: Linux Diagnostic Extension.

Azure Performance Diagnostics VM Extension is incorrect because this extension only collects performance diagnostic data from Windows VMs.

Boot diagnostics is incorrect because this feature is primarily used to diagnose VM boot failures and not for monitoring the system performance metrics and log events.

Connection monitor is incorrect because this is simply used for end-to-end connection monitoring.

References:

https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/diagnostics-linux

https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostics-extension-overview

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

28
Q

Your company's Azure subscription contains the following resources:
az104-D-28

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Configure a packet capture in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

A. Yes

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.
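A minimal Python sketch (azure-mgmt-network) of starting a 3600-second capture through Network Watcher; the subscription ID, resource IDs, and the Network Watcher name are assumptions, and the target VM must already have the Network Watcher extension installed:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/td-rg"
    "/providers/Microsoft.Compute/virtualMachines/td-vm1"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/td-rg"
    "/providers/Microsoft.Storage/storageAccounts/tdcaptures"
)

poller = network_client.packet_captures.begin_create(
    "NetworkWatcherRG",              # resource group of the Network Watcher
    "NetworkWatcher_southeastasia",  # Network Watcher instance (assumed)
    "td-vm1-capture",
    {
        "target": vm_id,
        "time_limit_in_seconds": 3600,  # the 1-hour window from the scenario
        "storage_location": {"storage_id": storage_id},
    },
)
poller.result()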

Hence, the correct answer is: Yes.

References:

https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-packet-capture-overview

https://learn.microsoft.com/en-us/azure/network-watcher/frequently-asked-questions

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

29
Q

Your company's Azure subscription contains the following resources:
az104-D-29

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Create a connection monitor in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

The solution provided is to set up a Connection Monitor in Azure Network Watcher. Connection Monitor’s primary use case is to track connectivity between your on-premises setups and the Azure VMs/virtual machine scale sets that host your cloud application. You cannot use this feature to capture packets to and from your virtual machines in a virtual network because it is not supported.

Hence, the correct answer is: No.

References:

https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-packet-capture-overview

https://learn.microsoft.com/en-us/azure/network-watcher/frequently-asked-questions

https://learn.microsoft.com/en-us/azure/network-watcher/connection-monitor-overview

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

30
Q

Your company's Azure subscription contains the following resources:
az104-D-30

You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.

Solution: Use IP flow verify in Azure Network Watcher.

Does the solution meet the goal?

A. Yes
B. No

A

B. No

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.

With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.

The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.

The provided solution is to use IP flow verify in Azure Network Watcher. The main use case of IP flow verify is to determine whether a packet to or from a virtual machine is allowed or denied based on 5-tuple information and not to capture packets from your virtual machines for a period of 3600 seconds or 1 hour.

Hence, the correct answer is: No.

References:

https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-packet-capture-overview

https://learn.microsoft.com/en-us/azure/network-watcher/frequently-asked-questions

Check out this Azure Virtual Machines Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-machines/

31
Q

A company deployed a Grafana image in Azure Container Apps with the following configurations:

Resource Group: tdrg-grafana

Region: Canada Central

Zone Redundancy: Disabled

Virtual Network: Default

IP Restrictions: Allow

The container’s public IP address was provided to development teams in the East US region to allow users access to the dashboard. However, you received a report that users can’t access the application.

Which of the following options allows users to access Grafana with the least amount of configuration?

A. Configure ingress to generate a new endpoint.
B. Disable IP Restrictions.
C. Move the container app to the East US Region.
D. Add a custom domain and certificate.

A

A. Configure ingress to generate a new endpoint.

Explanation:
Azure Container Apps allows you to deploy containerized apps without managing complex infrastructure. You have the freedom to write code in your preferred language or framework, and create microservices that are fully supported by the Distributed Application Runtime (Dapr). The scaling of your application can be automatically adjusted based on either HTTP traffic or events, utilizing Kubernetes Event-Driven Autoscaling (KEDA).

With Azure Container Apps ingress, you can make your container application accessible to the public internet, VNET, or other container apps within your environment. This eliminates the need to create an Azure Load Balancer, public IP address, or any other Azure resources to handle incoming HTTPS requests. Each container app can have unique ingress configurations. For instance, one container app can be publicly accessible while another can only be reached within the Container Apps environment.

The problem in the given scenario is that users are accessing the public IP address even though the ingress setting was not enabled during the creation of the container app. When you configure the ingress and target port and then save, the app generates a new endpoint based on the ingress traffic option you selected. When you then access the application URL, you are redirected to the target port of the container image.

Hence, the correct answer is: Configure ingress to generate a new endpoint.

The option that says: Move the container app to the East US Region is incorrect because you can’t move a container app to a different Region.

The option that says: Disable IP Restrictions is incorrect because this still won't help users access the Grafana app. Rather than changing the IP restrictions, you only need to enable ingress and set the target port.

The option that says: Add a custom domain and certificate is incorrect because even though you added a custom domain name, you still won't be able to access the application, since additional configuration is needed to allow VNet-scope ingress. Therefore, the quickest approach, with the least amount of configuration, is to enable ingress and use the generated application URL.

References:

https://learn.microsoft.com/en-us/azure/container-apps/ingress?tabs=bash

https://azure.microsoft.com/en-us/products/container-apps/

Check out these Azure Compute Services Cheat Sheets:

https://tutorialsdojo.com/azure-cheat-sheets-compute-services/

32
Q

A startup has an Azure subscription that contains the following resources:
AZ104-4-32 image

You have been tasked with replicating the current state of your resources in order to automate future deployments when a new feature needs to be added to the application.

Which of the following should you do?

A. Create a VM with preset configurations.
B. Redeploy and reapply a VM.
C. Use the resource group export template.
D. Capture an image of a VM.

A

C. Use the resource group export template.

Explanation:
Azure Resource Manager (ARM) templates are a feature of Microsoft Azure that allows you to provision, manage, and delete Azure resources using declarative syntax. These templates can be used to deploy and manage resources such as virtual machines, storage accounts, and virtual networks in a consistent and reliable manner. To deploy a template, you can use the Azure Portal, Azure CLI, or Azure PowerShell.

In this scenario, you need to use an ARM export template to replicate the current state of your resources. This means that if you need to redeploy all your resources, you can reuse the template instead of manually recreating each resource. You can export a single resource or an entire resource group. Based on the given requirements, you just need to capture all resources by exporting the resource group as a template.
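For illustration, a minimal Python sketch of exporting a resource group as a template with the azure-mgmt-resource SDK; the subscription ID and resource group name are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

resource_client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = resource_client.resource_groups.begin_export_template(
    "startup-rg",  # resource group (hypothetical)
    {
        "resources": ["*"],                          # export every resource
        "options": "IncludeParameterDefaultValues",  # keep current values as defaults
    },
)
result = poller.result()
print(result.template)  # the reusable ARM template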

Hence, the correct answer is: Use the resource group export template.

The option that says: Capture an image of a VM is incorrect because this just creates a snapshot of the virtual machine's configuration. Take note that you need to capture the current state of all resources, so the export template is what eases their re-creation.

The option that says: Redeploy and reapply a VM is incorrect because redeploying the VM just migrates it to a new Azure host, while reapply is used to resolve a VM that is stuck in a failed state. Neither feature helps you capture the current state of your resources.

The option that says: Create a VM with preset configurations is incorrect because this only helps you choose a VM based on your workload type and environment.

References:

https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/export-template-portal

https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview

Check out this Azure Resource Manager (ARM) Cheat Sheet:

https://tutorialsdojo.com/azure-resource-manager-arm/

33
Q

You have the following storage accounts in your Azure subscription.
az104-4-33

There is a requirement to export the data from your subscription using the Azure Import/Export service.

Which Azure Storage account can you use to export the data?

A. mystorage1
B. mystorage2
C. mystorage3
D. mystorage4

A

B. mystorage2

Explanation:
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.

Content distribution: Quickly send data to your customer sites.

Backup: Take backups of your on-premises data to store in Azure Storage.

Data recovery: Recover a large amount of data stored in the storage and have it delivered to your on-premises location.

Azure Import/Export service allows data transfer into Azure Blobs and Azure Files by creating jobs. Use the Azure portal or Azure Resource Manager REST API to create jobs. Each job is associated with a single storage account. This service only supports export of Azure Blobs. Export of Azure files is not supported.

The jobs can be import or export jobs. An import job allows you to import data into Azure Blobs or Azure files, whereas the export job allows data to be exported from Azure Blobs. For an import job, you ship drives containing your data. When you create an export job, you ship empty drives to an Azure datacenter. In each case, you can ship up to 10 disk drives per job.

Hence, the correct answer is: mystorage2.

mystorage1 is incorrect because an export job does not support Azure Files. The Azure Import/Export service only supports export of Azure Blobs.

mystorage3 and mystorage4 are incorrect because the Queue and Table storage services are simply not supported by the Azure Import/Export service.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-requirements

Check out this Azure Storage Overview Cheat Sheet:

https://tutorialsdojo.com/azure-storage-overview/

Azure Blob vs. Disk vs. File Storage:

https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

34
Q

You have an on-premises data center that contains a file server named TDFileServer1 which has 20 TB of data.

You created an Azure subscription and an Azure file share named TDFile1.

There is a requirement to transfer 20 TB of data to TDFile1 using the Azure Import/Export service.

In which order should you perform the actions?

Instructions: To answer, drag the appropriate item from the column on the left to its description on the right. Each correct match is worth one point.

A. You create an import job in the Azure portal.
B. You prepare the external disks by attaching them to TDFileServer1 and running the WAImportExport.exe tool.
C. You ship the external disks to the Azure Datacenter.
D. You update the import job in the Azure portal.

1
	
3
	
2
	
4
A
  1. B. You prepare the external disks by attaching them to TDFileServer1 and running the WAImportExport.exe tool.
  2. A. You create an import job in the Azure portal.
  3. C. You ship the external disks to the Azure Datacenter.
  4. D. You update the import job in the Azure portal.

Explanation:
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

– Data migration to the cloud: Move large amounts of data to Azure quickly and cost effectively.

– Content distribution: Quickly send data to your customer sites.

– Backup: Take backups of your on-premises data to store in Azure Storage.

– Data recovery: Recover large amounts of data stored in storage and have it delivered to your on-premises location.

To import data, the service requires you to ship supported disk drives containing your data to an Azure datacenter.

Microsoft Azure WAImportExport.exe tool is the drive preparation and repair tool that you can use with the Microsoft Azure Import/Export Service. This tool can be used in several different ways:

– Before you create an Import job, you can use this tool to copy data to the hard drives you are going to ship to a Microsoft Azure data center.

– After an import job has finished, you can use this tool to repair any blobs that were corrupted, missing, or conflicted with other blobs.

– After you receive the drives from an export job, you can use this tool to repair any files that were corrupted or missing on the drives.

The journal file stores basic information such as drive serial number, encryption key, and storage account details.

You can import the contents of FileServer1 using the following steps in order:

  1. Prepare the drives and run the WAImportExport.exe tool.

– Attach the external disks to TDFileServer1 and run WAImportExport.exe. Each time you run the WAImportExport tool to copy files to the hard drive, the tool creates a copy session. The state of the copy session is written to the journal file.

  2. You create an import job in the Azure portal.

– You must specify the following for an import job: the name of the import job, the type of job (import into Azure or export from Azure), the subscription, the resource group, the journal file, the storage account for the import destination, and the return shipping info.

  3. You ship the external disks to the Azure Datacenter.

– FedEx, UPS, or DHL can be used to ship the package to Azure datacenter. You must ensure that you properly package your disks to avoid potential damage and delays in processing.

  4. You update the import job in the Azure portal.

– Once the drives are shipped, you need to update the job status and tracking info by selecting the Mark as Shipped checkbox and then providing the carrier and tracking number. If the tracking number is not updated within 2 weeks of creating the job, the job expires.

Hence, the correct order is:

  1. You prepare the external disks by attaching them to TDFileServer1 and running the WAImportExport.exe tool.
  2. You create an import job in the Azure portal.
  3. You ship the external disks to the Azure Datacenter.
  4. You update the import job in the Azure portal.

References:

https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-service

https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files

Check out this Azure Files Cheat Sheet:

https://tutorialsdojo.com/azure-file-storage/

35
Q

Your company has an Azure subscription that contains a storage account named tdstorageaccount1 and a virtual network named TDVNET1 with an address space of 192.168.0.0/16.

You have a user that needs to connect to the storage account from her workstation which has a public IP address of 131.107.1.23.

You need to ensure that the user is the only one who can access tdstorageaccount1.

Which two actions should you perform? Each correct answer presents part of the solution.

A. From the networking settings, enable TDVnet1 under Firewalls and virtual networks.
B. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
C. From the networking settings, select service endpoint under Firewalls and virtual networks.
D. From the networking settings, select “Allow trusted Microsoft services to access this storage account” under Firewalls and virtual networks.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

A

B. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including Internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public Internet IP address ranges, enabling connections from specific Internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.

To whitelist a public IP address, you must perform the following steps (a scripted sketch follows the list):

  1. Go to the storage account you want to secure.
  2. Select the settings menu called Networking.
  3. Under Firewalls and virtual networks, select Selected networks.
  4. Under firewall, add the public IP address then save.
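The same two actions can also be scripted. Below is a minimal Python sketch with the azure-mgmt-storage SDK; the subscription ID and resource group are assumptions:

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

storage_client.storage_accounts.update(
    "td-rg",               # resource group (assumed)
    "tdstorageaccount1",
    {
        "network_rule_set": {
            # "Deny" is the API equivalent of "Selected networks" in the portal.
            "default_action": "Deny",
            # Whitelist only the user's public IP address.
            "ip_rules": [{"ip_address_or_range": "131.107.1.23"}],
        }
    },
)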

Hence, the following statements are correct:

– Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.

– Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.

The statement that says: From the networking settings, enable TDVnet1 under Firewalls and virtual networks is incorrect because enabling TDVnet1 will not allow the user to connect to tdstorageaccount1. The requirement states that the user's workstation must have access to tdstorageaccount1, and that workstation connects from a public internet IP address, not from within TDVnet1.

The statement that says: From the networking settings, select service endpoint under Firewalls and virtual networks is incorrect because it only allows you to create network rules that allow traffic only from selected VNets and subnets, which creates a secure network boundary for their data. Service endpoints only extend your VNet private address space and identity to the Azure services, over a direct connection.

The statement that says: From the networking settings, select Allow trusted Microsoft services to access this storage account under Firewalls and virtual networks is incorrect because this simply grants a subset of trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account but won’t restrict access to a particular subnetwork or IP address.

References:

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview

https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security

Check out this Azure Storage Overview Cheat Sheet:

https://tutorialsdojo.com/azure-storage-overview/

36
Q

Your company is currently hosting a mission-critical application in an Azure virtual machine that resides in a virtual network named TDVnet1. You plan to use Azure ExpressRoute to allow the web applications to connect to the on-premises network.

Due to compliance requirements, you need to ensure that in the event your ExpressRoute fails, the connectivity between TDVnet1 and your on-premises network will remain available.

The solution must utilize a site-to-site VPN between TDVnet1 and the on-premises network. The solution should also be cost-effective.

Which three actions should you implement? Each correct answer presents part of the solution.

A. Configure a VPN gateway with VpnGw1 as its SKU.
B. Configure a gateway subnet.
C. Configure a local network gateway.
D. Configure a VPN gateway with Basic as its SKU.
E. Configure a connection.

A

A. Configure a VPN gateway with VpnGw1 as its SKU.
C. Configure a local network gateway.
E. Configure a connection.

Explanation:
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.

A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages:

– You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.

– Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute.

To create a site-to-site connection, you need to do the following:

– Provision a virtual network

– Provision a VPN gateway

– Provision a local network gateway

– Provision a VPN connection

– Verify the connection

– Connect to a virtual machine

Take note that since you have already deployed an ExpressRoute, you do not need to create a virtual network and gateway subnet as these are prerequisites in creating an ExpressRoute.

Hence, the correct answers are:

– Configure a VPN gateway with a VpnGw1 SKU.

– Configure a local network gateway.

– Configure a connection.

The option that says: Configure a gateway subnet is incorrect. As you already have an ExpressRoute connecting to your on-premises network, this means that a gateway subnet is already provisioned.

The option that says: Configure a VPN gateway with Basic as its SKU is incorrect. Although one of the requirements is to minimize costs, the coexisting connection for ExpressRoute and site-to-site VPN connection does not support a Basic SKU. The bare minimum for a coexisting connection is VpnGw1.

References:

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-portal

https://docs.microsoft.com/en-us/azure/expressroute/expressroute-howto-coexist-resource-manager

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways#gwsku

Check out this Azure VPN Gateway Cheat Sheet:

https://tutorialsdojo.com/azure-vpn-gateway/

37
Q

You have an Azure subscription that has a virtual network named TDVNet1 that contains 2 subnets: TDSubnet1 and TDSubnet2.

You have two virtual machines shown in the following table:

az104-4-37

TD1 and TD2 both use a public IP address, and you allow inbound Remote Desktop connections on the Windows Server 2019 virtual machines.

Your subscription has two network security groups (NSGs) named TDSG1 and TDSG2.

TDSG1 is associated with TDSubnet1 and only uses the default rules.

TDSG2 is associated with the network interface of TD2. It uses the default rules and the following custom incoming rule:

Priority: 100

Name: RDP

Port: 3389

Protocol: TCP

Source: Any

Destination: Any

Action: Allow

For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.

– You can connect to TD2 using Remote Desktop from the internet.

– You can connect to TD2 from TD1 using Remote Desktop.

– You can connect to TD1 using Remote Desktop from the internet.
A

Azure Network Security Group is used to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.

In the scenario, TDSG2 has a custom inbound rule named RDP that allows Remote Desktop access (3389) from the internet. Since TDSG2 is associated with the network interface of TD2, RDP access from the internet is allowed. From a security standpoint, RDP access should not be exposed to the internet; the best practice is to whitelist specific IP addresses to ensure that only traffic coming from your workstation can connect to the server.

Take note that you do not need to configure additional inbound rules when you want to connect to TD2 from TD1. This is because the default rules of a network security group always allow traffic coming from within the virtual network where both virtual machines reside, as well as from all connected on-premises address spaces and connected Azure VNets (local networks).
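For reference, here is a minimal Python sketch (azure-mgmt-network) of the custom RDP rule on TDSG2; the subscription ID and resource group are assumptions, and in practice the source should be narrowed to a known IP address instead of Any:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.security_rules.begin_create_or_update(
    "td-rg",  # resource group (assumed)
    "TDSG2",
    "RDP",
    {
        "priority": 100,
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "source_address_prefix": "*",   # better: your workstation's IP
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "3389",
    },
)
poller.result()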

Therefore, the following statements are correct:

– You can connect to TD2 using Remote Desktop from the internet.

– You can connect to TD2 from TD1 using Remote Desktop.

The statement that says: You can connect to TD1 using Remote Desktop from the internet is incorrect. Since TDSG1 only uses the default rules, it will not accept any incoming traffic coming from the Internet. Take note that default rules cannot be deleted, but because they are assigned the lowest priority, they can be overridden by the rules that you create.

References:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview

Check out this Azure Virtual Network Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-network-vnet/

38
Q

You have an Azure subscription named TD-Subscription1 that contains a load balancer that distributes traffic between 10 virtual machines using port 443.

There is a requirement wherein all traffic from Remote Desktop Protocol (RDP) connections must be forwarded to VM1 only.

What should you do to satisfy the requirement?

A. Create a load balancing rule.
B. Create a new load balancer for VM1.
C. Create a health probe.
D. Create an inbound NAT rule.

A

D. Create an inbound NAT rule.

Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.

You need to create an inbound NAT rule to forward traffic from a specific port of the front-end IP address to a specific port of a back-end VM. The traffic is then sent to a specific virtual machine.

Take note that you can only have one virtual machine as the target virtual machine. A network security group (NSG) must be associated with VM1, with inbound rules explicitly allowing traffic on port 3389 from your IP address.
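A minimal Python sketch (azure-mgmt-network) of such an inbound NAT rule is shown below; the subscription ID, resource group, load balancer, and frontend configuration names are hypothetical, and the rule still has to be referenced from VM1's NIC IP configuration to take effect:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

frontend_id = (
    "/subscriptions/<subscription-id>/resourceGroups/td-rg"
    "/providers/Microsoft.Network/loadBalancers/td-lb"
    "/frontendIPConfigurations/LoadBalancerFrontEnd"
)

# Forward TCP 3389 on the frontend to TCP 3389 on the single target VM.
poller = network_client.inbound_nat_rules.begin_create_or_update(
    "td-rg",
    "td-lb",
    "rdp-to-vm1",
    {
        "frontend_ip_configuration": {"id": frontend_id},
        "protocol": "Tcp",
        "frontend_port": 3389,
        "backend_port": 3389,
    },
)
poller.result()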

Hence, the correct answer is: Create an inbound NAT rule.

The option that says: Create a new load balancer for VM1 is incorrect because you do not need to create a new load balancer as you can simply use port forwarding or inbound NAT rule to forward RDP traffic to VM1.

The option that says: Create a load balancing rule is incorrect because this component only defines how incoming traffic is distributed to all the instances within the backend pool. Furthermore, it is mentioned in the scenario that you should direct RDP traffic to VM1 only.

The option that says: Create a health probe is incorrect because it is just used to determine the health status of the instances in the backend pool. This health probe will determine if an instance is healthy and can receive traffic.

References:

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview

https://docs.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal

Check out this Azure Load Balancer Cheat Sheet:

https://tutorialsdojo.com/azure-load-balancer/

39
Q

Your organization has a standard general-purpose v2 storage account with an access tier of Hot. The files uploaded to the storage account are infrequently accessed by your colleagues.

You were tasked with modifying the storage account with the following requirements:

Inactive data must automatically transition to the archive tier after 120 days.

Data uploaded must be accessed instantly, provided that it has not been transitioned to the archive tier yet.

Minimize costs.

Minimize administrative effort.

Which two actions should you perform? Choose two.

A. Create an Azure Function to move the inactive data to the archive tier after 120 days of inactivity.
B. Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.
C. Manually copy the inactive data using the Copy Blob operation to the archive tier after 120 days of inactivity.
D. Set the default access tier of the storage account to the Cool tier.
E. Set the default access tier of the storage account to the Archive tier.
F. Automatically archive data on upload.

A

B. Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.
D. Set the default access tier of the storage account to the Cool tier.

Explanation:
An Azure Storage Account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Data sets have unique lifecycles. Early in the lifecycle, people access some data often. But the need for access often drops drastically as the data ages. Some data remains idle in the cloud and is rarely accessed once stored.

Some data sets expire days or months after creation, while other data sets are actively read and modified throughout their lifetimes. Azure Storage lifecycle management offers a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.

Storage accounts have a default access tier setting that indicates the online tier in which a new blob is created. The default access tier setting can only be set to Hot or Cool; its exact behavior differs slightly depending on the type of storage account.

Since the scenario states that your colleagues access the data infrequently, you do not need to keep it in the Hot tier. You can store the data in the Cool tier and automatically transition it to the Archive tier with a lifecycle management rule.

Hence, the correct answers are:

– Create a lifecycle management rule to move the inactive data to the Archive tier after 120 days of inactivity.

– Set the default access tier of the storage account to the Cool tier.
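
As an illustrative sketch with Azure CLI (the account name, resource group, and rule name are hypothetical, and last-modified time is used as the inactivity signal):

  # Set the storage account's default access tier to Cool
  az storage account update \
    --name tdstorage \
    --resource-group TD-RG \
    --access-tier Cool

  # policy.json - move block blobs unmodified for 120 days to the Archive tier
  {
    "rules": [
      {
        "enabled": true,
        "name": "archive-after-120-days",
        "type": "Lifecycle",
        "definition": {
          "actions": {
            "baseBlob": {
              "tierToArchive": { "daysAfterModificationGreaterThan": 120 }
            }
          },
          "filters": { "blobTypes": [ "blockBlob" ] }
        }
      }
    ]
  }

  # Apply the lifecycle management rule to the account
  az storage account management-policy create \
    --account-name tdstorage \
    --resource-group TD-RG \
    --policy @policy.json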

The statement that says: Create an Azure Function to move the inactive data to the archive tier after 120 days of inactivity is incorrect because you can achieve the same goal using lifecycle management. Remember that one of the requirements is minimizing administrative effort.

The statement that says: Set the default access tier of the storage account to the Archive tier is incorrect because the only supported default access tiers for a storage account are the Hot and Cool tiers. What you can do is keep the data in the Cool tier, since it is infrequently accessed, and then create a lifecycle policy that transitions unmodified data to the Archive tier after a set amount of time.

The statement that says: Manually copy the inactive data using the Copy Blob operation to the archive tier after 120 days of inactivity is incorrect because manually copying inactive data to the Archive tier is tedious once you have thousands of blobs. One of the requirements states that you must minimize administrative effort. Use lifecycle management instead.

The statement that says: Automatically archive data on upload is incorrect because one of the requirements states that data not yet in the archive tier must be instantly accessible. Data in the Archive tier takes hours to rehydrate before you can access it.

References:

https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview

https://docs.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview

Check out this Azure Storage Overview Cheat Sheet:

https://tutorialsdojo.com/azure-storage-overview/

40
Q

You have an Azure storage named tutorialsdojo for hosting static assets. You have a user named Manila that uploads images to a blob container in tutorialsdojo.

You need to ensure that you satisfy the following requirements:

Requirement 1: Only allow access from the specific public internet IP address of Manila.

Requirement 2: Data accidentally deleted must be recoverable 14 days after deletion.
az104-4-40

Which two storage account features should you use to satisfy the requirements?

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

  1. Requirement 1
    A. Networking
    B. Redundancy
    C. Access Control
    D. Data Protection
  2. Requirement 2
    A. Redundancy
    B. Data Protection
    C. Object Replication
    D. Lifecycle Management
A
  1. Requirement 1 - A. Networking
  2. Requirement 2 - B. Data Protection

Explanation:
An Azure Storage Account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.

Under the Networking tab, you can use IP network rules to allow access from specific public internet IP addresses or address ranges. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic.

Under the Data Protection tab, soft delete protects your data from being accidentally or erroneously modified or deleted. When container soft delete is enabled for a storage account, a deleted container and its contents can be recovered within a retention period that you specify.

Therefore, the correct answers are:

– Requirement 1 = Networking

– Requirement 2 = Data protection
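
A minimal sketch with Azure CLI, assuming a hypothetical resource group and using a documentation IP address in place of Manila's real one:

  # Requirement 1 (Networking): deny traffic by default, then allow Manila's public IP
  az storage account update \
    --name tutorialsdojo \
    --resource-group TD-RG \
    --default-action Deny
  az storage account network-rule add \
    --account-name tutorialsdojo \
    --resource-group TD-RG \
    --ip-address 203.0.113.25

  # Requirement 2 (Data Protection): enable container soft delete with 14-day retention
  az storage account blob-service-properties update \
    --account-name tutorialsdojo \
    --resource-group TD-RG \
    --enable-container-delete-retention true \
    --container-delete-retention-days 14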

Access Control is incorrect because the user Manila can already upload images to the tutorialsdojo storage account. Access control, or role-based access control (RBAC), helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.

Redundancy is incorrect because you only need to protect data that has been accidentally deleted. Redundancy copies your data so that it is protected from transient hardware failures, network or power outages, and natural disasters.

Object replication is incorrect because this simply copies blobs asynchronously from a source storage account to a destination account. You only need to implement soft delete to satisfy the requirement.

Lifecycle management is incorrect because this just allows you to transition your data to the appropriate access tiers or expire at the end of the data’s lifecycle.

References:

https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security

https://learn.microsoft.com/en-us/azure/storage/blobs/soft-delete-container-enable

Check out these Azure Cheat Sheets:

https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

41
Q

You have an Azure subscription that contains an Azure File Share named TDShare1 that contains sensitive data.

You want to ensure that only authorized users can access this data for compliance requirements, and users must only have access to specific files and folders.

You registered TDShare1 to use AD DS authentication and Azure AD Connect sync for specific AD user access.

You need to give your active directory users access to TDShare1.

What should you do?

A. Create a shared access signature (SAS) with a stored access policy.
B. Enable anonymous access to the storage account.
C. Use the storage account access keys for authentication.
D. Configure role-based access control (RBAC).

A

D. Configure role-based access control (RBAC).

Explanation:
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

Once you’ve enabled an Active Directory (AD) source for your storage account, you must configure share-level permissions in order to get access to your file share. There are two ways you can assign share-level permissions. You can assign them to specific Azure AD users/groups, and you can assign them to all authenticated identities as a default share-level permission.

Since we are handling sensitive data, we want users to be able to access only the files they are allowed to. Due to this, we need to assign specific Azure AD users or groups access to the Azure file share resources.

In order for share-level permissions to work for specific Azure AD users or groups, you must:

Sync the users and the groups from your local AD to Azure AD using either the on-premises Azure AD Connect sync application or Azure AD Connect cloud sync.
Add the AD-synced users or groups to an RBAC role so they can access your storage account.
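
For illustration, the share-level assignment could look like the following in Azure CLI; the built-in role name is real, while the group object ID and the scope segments in angle brackets are placeholders:

  # Assign a share-level role for TDShare1 to an AD-synced group
  az role assignment create \
    --role "Storage File Data SMB Share Contributor" \
    --assignee <object-id-of-synced-group> \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default/fileshares/TDShare1"

Share-level permissions only open the door to the share itself; access to specific files and folders is then enforced with Windows ACLs configured over SMB.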

Hence, the correct answer is: Configure role-based access control (RBAC).

The option that says: Enable anonymous access to the storage account is incorrect as it allows anyone to access the storage account and its contents without authentication.

The option that says: Create a shared access signature (SAS) with a stored access policy is incorrect because while SAS tokens can provide limited access to a storage account, they are not a suitable authentication mechanism for controlling access to sensitive data.

The option that says: Use the storage account access keys for authentication is incorrect because storage account keys provide full control over the storage account, which means that anyone with the key can perform any operation on the storage account. This makes them a less secure option, especially for sensitive data that requires fine-grained access control.

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://learn.microsoft.com/en-us/azure/storage/files/storage-files-identity-ad-ds-assign-permissions

Check out this Azure Files Cheat Sheet:

https://tutorialsdojo.com/azure-file-storage/

Azure Blob vs Disk vs File Storage:

https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

42
Q

You are an Azure administrator responsible for managing storage accounts in your organization. You are asked to create a new Azure File Share with specific requirements using Azure CLI. Below are the requirements:

Data must still be available if a single availability zone experiences an outage.

It must provide consistent high performance and low latency.

The command that you intend to run: az storage account create --name TDShare1 --resource-group TD1 --location southeastasia --sku XXXX --kind XXXX

  1. --sku
    A. Premium_ZRS
    B. Standard_GRS
    C. Standard_GZRS
    D. Premium_LRS
  2. --kind
    A. BlobStorage
    B. BlockBlobStorage
    C. StorageV2
    D. FileStorage
A
  1. A. Premium_ZRS
  2. D. FileStorage

Explanation:
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

The requirements in the scenario are:

– Data must still be available if a single availability zone experiences an outage.

– It must provide consistent high performance and low latency.

Premium file shares are backed by solid-state drives (SSDs) and provide consistent high performance and low latency, within single-digit milliseconds for most IO operations, for IO-intensive workloads. Premium file shares are suitable for a wide variety of workloads like databases, website hosting, and development environments. Premium file shares can be used with both Server Message Block (SMB) and Network File System (NFS) protocols.

Zone redundant storage (ZRS) provides high availability by synchronously writing three replicas of your data across three different Azure Availability Zones, thereby protecting your data from the cluster, data center, or entire zone outage. Zonal redundancy enables you to read and write data even if one of the availability zones is unavailable.

Currently, the only SKUs supported for premium file shares are Premium_LRS and Premium_ZRS.

Therefore, your --sku flag should be Premium_ZRS, since you need your data to be available even if there is an availability zone outage.

Likewise, your --kind flag should be FileStorage, as this is the account kind that supports premium file shares.

The exact command to create a storage account that satisfies the requirements is:

– az storage account create --name TDShare1 --resource-group TD1 --location southeastasia --sku Premium_ZRS --kind FileStorage
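
Once the account exists, the premium file share itself could be created, for example (the share name and quota are hypothetical):

  az storage share-rm create \
    --resource-group TD1 \
    --storage-account TDShare1 \
    --name tdfiles \
    --quota 100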

References:

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction

https://learn.microsoft.com/en-us/azure/storage/files/storage-files-planning

Check out this Azure Files Cheat Sheet:

https://tutorialsdojo.com/azure-file-storage/

Azure Blob vs Disk vs File Storage:

https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

43
Q

You have an Azure subscription with the following resources:
az104-4-43
TD1 is unable to connect to TD4 via port 443. You need to troubleshoot why the communication between the two virtual machines is failing.

Which two features should you use?

A. Azure Diagnostics
B. Effective security rules
C. Log Analytics
D. Connection troubleshoot
E. IP flow verify
F. VPN troubleshoot

A

D. Connection troubleshoot
E. IP flow verify

Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products, including Virtual Machines, Virtual Networks, Application Gateways, and Load Balancers.

Connection troubleshoot helps reduce the amount of time to diagnose and troubleshoot network connectivity issues. The results returned can provide insights about the root cause of the connectivity problem and whether it’s due to a platform or user configuration issue.

Connection troubleshoot reduces the Mean Time To Resolution (MTTR) by providing a comprehensive way of performing all major connectivity checks to detect issues pertaining to network security groups, user-defined routes, and blocked ports.

IP flow verify checks if a packet is allowed or denied to or from a virtual machine. If the packet is denied by a security group, the name of the rule that denied the packet is returned.

IP flow verify looks at the rules of all Network Security Groups (NSGs) applied to a virtual machine's network interface, whether associated at the subnet level or directly on the NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming whether a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine.

Therefore, the correct answers are:

– Connection troubleshoot

– IP flow verify
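
As a rough sketch with Azure CLI (the resource group and the IP addresses and ports used here are hypothetical):

  # IP flow verify: check whether an NSG rule blocks outbound 443 from TD1
  az network watcher test-ip-flow \
    --resource-group TD-RG \
    --vm TD1 \
    --direction Outbound \
    --protocol TCP \
    --local 10.0.0.4:49152 \
    --remote 10.1.0.4:443

  # Connection troubleshoot: diagnose the end-to-end path from TD1 to TD4 on port 443
  az network watcher test-connectivity \
    --resource-group TD-RG \
    --source-resource TD1 \
    --dest-resource TD4 \
    --dest-port 443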

Effective security rules is incorrect because this simply allows you to see all inbound and outbound security rules that apply to a virtual machine’s network interface. This is also used for security compliance and auditing.

Azure Diagnostics is incorrect because it is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources, including virtual machines.

Log Analytics is incorrect because this is just a tool to edit and run log queries from data collected by Azure Monitor logs and interactively analyze their results.

VPN troubleshoot is incorrect because this only provides the capability to troubleshoot virtual network gateways and their connections. This is primarily used for diagnosing the traffic between your on-premises resources and Azure virtual networks.

References:

https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview

https://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-connectivity-overview

Check out this Azure Virtual Network Cheat Sheet:

https://tutorialsdojo.com/azure-virtual-network-vnet

44
Q

Your organization has an AKS cluster that hosts several microservices as Kubernetes deployments. During peak hours, one of the deployments experiences high traffic, resulting in longer response times and occasional failures.

You plan to implement horizontal pod autoscaling to scale the deployment based on traffic.

What should you do?

A. Install Azure Monitor for Containers agent, then define a VPA object in the manifest file and set the desired min and max number of replicas in a deployment.
B. Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.
C. Install AKS cluster autoscaler, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment.
D. Install Kubernetes Dashboard, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment.

A

B. Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.

Explanation:
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance. When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for and manage the nodes attached to the AKS cluster.

The horizontal pod autoscaler (HPA) is used by Kubernetes to monitor resource demand and automatically scale the number of pods. The HPA checks the Metrics API for any required changes in replica count every 15 seconds by default, and the Metrics API retrieves data from the Kubelet every 60 seconds. As a result, the HPA is updated every 60 seconds. When changes are made, the number of replicas is increased or decreased.

The following steps should be taken to configure horizontal pod autoscaling (HPA) for the deployment:

Install the Kubernetes Metrics Server to provide HPA with metrics.
In the Kubernetes manifest file, define a horizontal pod autoscaler object. This object specifies the scaled deployment, the minimum and maximum number of replicas, and the scaling metric.
Set the deployment’s minimum and maximum number of replicas. Based on the specified metric, these values determine the number of pods that the HPA feature can create or delete.
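
For illustration, a minimal HPA manifest might look like the one below; the deployment name td-app, the replica bounds, and the CPU target are hypothetical values:

  # hpa.yaml - scale td-app between 3 and 10 replicas at 70% average CPU utilization
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: td-app-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: td-app
    minReplicas: 3
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70

Applying it with kubectl apply -f hpa.yaml has the same effect as the imperative kubectl autoscale deployment td-app --cpu-percent=70 --min=3 --max=10.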

Hence, the correct answer is: Install Kubernetes Metrics Server, then define an HPA object in the manifest file and set the min and max number of replicas in a deployment.

The option that says: Install Kubernetes Dashboard, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect because the Kubernetes Dashboard does not provide HPA functionality. It is mainly used for deploying applications, creating and updating objects, and monitoring the health of the cluster.

The option that says: Install AKS cluster autoscaler, then define an HPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect because the AKS cluster autoscaler scales the number of nodes in an AKS cluster rather than the number of replicas in a deployment.

The option that says: Install Azure Monitor for Containers agent, then define a VPA object in the manifest file and set the desired min and max number of replicas in a deployment is incorrect. Instead of scaling the number of replicas, vertical pod autoscaling (VPA) is used to adjust the resource allocation of individual pods based on their resource usage.

References:

https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli#autoscale-pods

https://learn.microsoft.com/en-us/azure/aks/intro-kubernetes

Check out this Azure Kubernetes Service (AKS) Cheat Sheet:

https://tutorialsdojo.com/azure-kubernetes-service-aks/

45
Q

Your company is currently running a mission-critical application in a primary Azure region.

You plan to implement a disaster recovery by configuring failover to a secondary region using Azure Site Recovery.

What should you do?

A. Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
B. Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.
C. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.
D. Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region.

A

C. Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

Explanation:
Azure Site Recovery service contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business applications online during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery.

Enabling replication for a virtual machine (VM) for disaster recovery purposes involves installing the Site Recovery Mobility service extension on the VM and registering it with Azure Site Recovery. During replication, any disk writes from the VM are first sent to a cache storage account in the source region. Subsequently, the data is transferred to the target region, where recovery points are generated from it. During a disaster recovery failover of the VM, a recovery point is used to restore the VM in the target region.

Here’s how to set up disaster recovery for a VM with Azure Site Recovery:

First, you need to create a Recovery Services Vault (RSV) in the secondary region, which will serve as the target location for the VM during a failover.
Next, you need to install and configure the Azure Site Recovery agent on the VMs that you want to protect. The agent captures data changes on the VM disks and sends them to Azure Site Recovery for replication to the secondary region.
Once the replication is set up, you need to design a recovery plan that outlines the steps to orchestrate the failover and failback operations. This includes defining the order in which VMs should be failed over, any dependencies between VMs, and the desired recovery point objective (RPO) and recovery time objective (RTO) for each VM.
During replication, VM disk writes are sent to a cache storage account in the source region, and from there to the target region, where recovery points are generated from the data. In the event of a disaster or planned failover, a recovery point is used to restore the VM in the target region, allowing the business to continue operations without significant downtime or data loss.
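
As a minimal sketch of the first step (the names and region are hypothetical), the Recovery Services vault is created in the secondary region with Azure CLI:

  # Create a resource group and a Recovery Services vault in the secondary region
  az group create --name TD-DR-RG --location southeastasia
  az backup vault create \
    --resource-group TD-DR-RG \
    --name TD-DR-Vault \
    --location southeastasia

Replication settings and the recovery plan are then configured against this vault, for example from the vault's Site Recovery pages in the portal.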

Hence, the correct answer is: Create an RSV in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations.

The option that says: Create an RSV in the primary region, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because although this will replicate the data to the secondary region, it does not include the necessary steps to perform failover. You still need to create a Recovery Services vault in the secondary region, not the primary region, to perform failover.

The option that says: Create a virtual network and subnet in the secondary region, install and configure the Azure Site Recovery agent on the VMs, and design a recovery plan to orchestrate failover and failback operations is incorrect because, just like the other options, you will still need to create a Recovery Services vault in the secondary region, install and configure the Azure Site Recovery agent on the virtual machines, and create a recovery plan to orchestrate failover and failback operations.

The option that says: Create an Azure Traffic Manager profile to load-balance traffic between the primary and secondary regions, install and configure the Azure Site Recovery agent on the VMs, and design a replication policy to replicate the data to the secondary region is incorrect because this will just load-balance traffic between the primary and secondary regions but won’t be able to perform failover. You will still need to create a Recovery Services vault in the secondary region to perform failover.

References:

https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-overview

https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-quickstart

Check out this Azure Global Infrastructure Cheat Sheet:

https://tutorialsdojo.com/azure-global-infrastructure/

46
Q

You have an Azure subscription containing an Azure virtual machine named Siargao with an assigned dynamic public IP address.
During routine maintenance, Siargao was deallocated and then started again.

The development team reports that their application hosted on Siargao has lost its connection with an external service. The external service whitelists the IP addresses allowed to access it. You suspect the public IP address has changed during the maintenance.

What should you do?

A. Modify Siargao to use a static public IP address.
B. Enable an Azure VPN gateway for Siargao.
C. Provision an Azure NAT gateway to provide outbound internet connectivity.
D. Attach multiple dynamic public IP addresses to Siargao.

A

A. Modify Siargao to use a static public IP address.

Explanation:
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other, the Internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.

Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate with the Internet and public-facing Azure services. The address is dedicated to the resource until it’s unassigned by you. A resource without a public IP assigned can communicate outbound.

IP addresses in Azure can be either dynamic or static. By default, Azure assigns a dynamic IP address to the VM. When the VM is started, Azure assigns it an IP address, and when the VM is stopped (deallocated), that IP address is returned to the pool and can be assigned to a different VM. This means that when you stop and start a VM, it can get a different public IP address, which can cause problems if you have systems or services that rely on the specific IP address of that VM, such as an external service that whitelists specific IP addresses.

A static IP address, unlike a dynamic IP address, does not change when the VM is deallocated. Once a static IP address is assigned to a VM, that IP is reserved for the VM and won’t be assigned to any other VM, even when the original VM is stopped. This means the VM would keep the same IP address throughout its lifecycle, regardless of its state.

In this case, to solve the issue, we need to modify Siargao to use a static public IP address instead of a dynamic public IP address.
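
For illustration (the resource group and public IP names are hypothetical), the change could be made with Azure CLI:

  # Change the public IP's allocation method from Dynamic to Static
  az network public-ip update \
    --resource-group TD-RG \
    --name Siargao-PIP \
    --allocation-method Static

Once static, the address stays reserved for Siargao across stop/deallocate cycles, so the external service's whitelist entry remains valid.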

Hence, the correct answer is: Modify Siargao to use a static public IP address.

The statement that says: Enable an Azure VPN gateway for Siargao is incorrect. Azure VPN Gateway is used to establish secure, cross-premises connectivity between your virtual network within Azure and your on-premises network, but it doesn’t provide static public IP functionality for individual VMs.

The statement that says: Attach multiple dynamic public IP addresses to Siargao is incorrect because assigning multiple dynamic public IP addresses would not solve the issue, as these dynamic IP addresses can still change when the VM is deallocated.

The statement that says: Provision an Azure NAT gateway to provide outbound internet connectivity is incorrect because Azure NAT Gateway is a service that provides outbound-only internet connectivity for the VMs in your virtual network. However, it doesn’t help in maintaining the same public IP address of a VM during its deallocation and reallocation.

47
Q

Your organization has an Azure subscription that contains an AKS cluster running an older version of Kubernetes.

You have been assigned to upgrade the cluster to the latest stable version of Kubernetes.

What should you do?

A. Stop all workloads, scale down the cluster to zero nodes, delete the cluster, create a new AKS cluster, and redeploy the application workloads.
B. Run az aks get-upgrades in Azure CLI to upgrade the AKS cluster to the latest Kubernetes version.
C. Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.
D. Create a new AKS cluster with the desired Kubernetes version, migrate the application workloads from the old cluster to the new cluster, and then delete the old cluster.

A

C. Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.

Explanation:
Azure Kubernetes Service (AKS) is a managed container orchestration service provided by Microsoft Azure. It simplifies the deployment, management, and scaling of containerized applications using Kubernetes. AKS abstracts away the underlying infrastructure and handles the operational aspects of managing a Kubernetes cluster, allowing developers and DevOps teams to focus on deploying and managing their applications.

Periodic upgrades to the latest Kubernetes version are part of the AKS cluster lifecycle. It is critical that you apply the most recent security updates and upgrade to get the latest features. In Azure, you can upgrade a cluster using the Azure CLI, PowerShell, or the Azure portal.

AKS performs the following operations during the cluster upgrade process:

– Add a new buffer node to the cluster that runs the specified Kubernetes version (or as many buffer nodes as configured in max surge).

– To minimize disruption to running applications, cordon and drain one of the old nodes. When you use max surge, it cordons and drains as many nodes as the number of buffer nodes you specified.

– When the old node is fully drained, it is reimaged to receive the new version and becomes the buffer node for the next node to be upgraded.

– This process repeats until all cluster nodes have been upgraded.

– At the end of the process, the last buffer node is deleted while the existing agent nodes are kept.
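
Although the answer performs the upgrade through the Azure Portal, the equivalent Azure CLI flow would be along these lines (the cluster and resource group names are hypothetical, and the version is a placeholder chosen from the get-upgrades output):

  # List the Kubernetes versions the cluster can upgrade to
  az aks get-upgrades --resource-group TD-RG --name td-aks --output table

  # Upgrade the control plane and node pools to the chosen version
  az aks upgrade --resource-group TD-RG --name td-aks --kubernetes-version <target-version>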

Hence, the correct answer is: Plan and execute the upgrade by reviewing release notes, determining a maintenance window, and upgrading the AKS cluster via Azure Portal.

The option that says: Run az aks get-upgrades in Azure CLI to upgrade the AKS cluster to the latest Kubernetes version is incorrect because this command does not upgrade the cluster; it only lists the upgrade versions available for a managed Kubernetes cluster.

The option that says: Stop all workloads, scale down the cluster to zero nodes, delete the cluster, create a new AKS cluster, and redeploy the application workloads is incorrect because deleting the cluster and redeploying all the application workloads would result in unnecessary downtime and resource loss, as well as potential issues in recreating the cluster and redeploying the applications.

The option that says: Create a new AKS cluster with the desired Kubernetes version, migrate the application workloads from the old cluster to the new cluster, and then delete the old cluster is incorrect because this approach would involve unnecessary complexity and downtime for migrating the workloads between clusters, which can be avoided by upgrading the existing cluster directly.

References:

https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster

https://learn.microsoft.com/en-us/azure/aks/auto-upgrade-cluster

Check out this Azure Kubernetes Service (AKS) Cheat Sheet:

https://tutorialsdojo.com/azure-kubernetes-service-aks/
