Review Mode Set 4 Dojo Flashcards
(40 cards)
You have an Azure subscription that contains hundreds of network resources.
You need to recommend a solution that will allow you to monitor resources in one centralized console for network monitoring.
What solution should you recommend?
A. Azure Monitor Network Insights
B. Azure Virtual Network
C. Azure Traffic Manager
D. Azure Advisor
A. Azure Monitor Network Insights
Explanation:
Azure Monitor maximizes the availability and performance of your applications and services by delivering a solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.
Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources without requiring any configuration. It also provides access to network monitoring capabilities like Connection Monitor, flow logging for network security groups (NSGs), and Traffic Analytics, as well as other network diagnostic features. Key features of Network Insights:
– Single console for network monitoring
– No agent configuration required
– Access to health state, metrics, alerts, & data from traffic and connectivity monitoring tools in one place
– View network topology with functional dependencies for simpler troubleshooting
– Access resources metrics to debug issues without writing queries or authoring workbooks
Hence, the correct answer is: Azure Monitor Network Insights.
Azure Virtual Network is incorrect because this service simply allows your resources, such as virtual machines, to securely communicate with each other, the internet, and on-premises networks. VNet is similar to a traditional network that you’d operate in your own data center but brings with it additional benefits of Azure’s infrastructure such as scale, availability, and isolation.
Azure Traffic Manager is incorrect because this is simply a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions while providing high availability and responsiveness. However, you cannot use this to monitor your network resources.
Azure Advisor is incorrect because this service just helps you improve the cost-effectiveness, performance, reliability (formerly called high availability), and security of your Azure resources.
Your organization has a Microsoft Entra ID subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations.
Solution: Create a conditional access policy and enforce grant control.
Does the solution meet the goal?
A. No
B. Yes
B. Yes
Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy requiring the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to enforce grant access control. Grant control can require multi-factor authentication and a hybrid Microsoft Entra joined device, so it satisfies this requirement.
Hence, the correct answer is: Yes.
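As a sketch of how such a policy could be created outside the portal, the command below uses the Microsoft Graph conditional access API through the Azure CLI. The display name, group object ID, and location IDs are placeholders, not values from the scenario:

```shell
# Create a conditional access policy that grants access only with MFA
# AND a hybrid-joined device, scoped to the DevOps group and to
# connections from untrusted (non-excluded) locations.
# All IDs below are hypothetical placeholders.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "DevOps - require MFA and hybrid joined device",
    "state": "enabled",
    "conditions": {
      "users":        { "includeGroups": ["<devops-group-object-id>"] },
      "applications": { "includeApplications": ["All"] },
      "locations":    { "includeLocations": ["All"],
                        "excludeLocations": ["AllTrusted"] }
    },
    "grantControls": {
      "operator": "AND",
      "builtInControls": ["mfa", "domainJoinedDevice"]
    }
  }'
```

The "AND" operator is what makes both grant controls required together rather than either one; "domainJoinedDevice" is the built-in control corresponding to the hybrid joined device requirement.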
Your organization has an Azure AD subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations.
Solution: Create a conditional access policy and enforce session control.
Does the solution meet the goal?
A. Yes
B. No
B. No
Explanation:
The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy requiring the members of the DevOps group to use MFA and a hybrid Azure AD joined device when connecting to Azure AD from untrusted locations. The given solution is to enforce session access control. Session control doesn't have options to require the use of MFA and hybrid Azure AD joined devices.
Hence, the correct answer is: No.
Your organization has a Microsoft Entra subscription that is associated with the directory TD-Siargao.
You have been tasked to implement a conditional access policy.
The policy must require the DevOps group to use multi-factor authentication and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra ID from untrusted locations.
Solution: Go to the security option in Microsoft Entra and configure MFA.
Does the solution meet the goal?
A. No
B. Yes
A. No
Explanation:
The Microsoft Entra ID enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks. Single sign-on is an authentication method that simplifies access to your apps from anywhere, while conditional access and multi-factor authentication help protect and govern access to your resources.
With conditional access, you can implement automated access-control decisions for accessing your cloud apps based on conditions. Conditional access policies are enforced after the first-factor authentication has been completed. It’s not intended to be a first-line defense against denial-of-service (DoS) attacks, but it uses signals from these events to determine access.
There are two types of access controls in a conditional access policy:
– Grant – enforces grant or block access to resources.
– Session – enables limited experiences within specific cloud applications.
Going back to the scenario, the requirement is to enforce a policy on the members of the DevOps group to use MFA and a hybrid Microsoft Entra joined device when connecting to Microsoft Entra from untrusted locations. The given solution is to configure MFA in the Microsoft Entra security option. If you check the question again, there is a line: "You have been tasked to implement a conditional access policy." This means that you must create a conditional access policy and enforce grant control. Also, configuring MFA alone does not enable the option to require the use of a hybrid Microsoft Entra joined device.
Hence, the correct answer is: No.
Your company created several Azure virtual machines and a file share in the subscription TD-Boracay. The VMs are all part of the same virtual network.
You have been assigned to manage the on-premises Hyper-V server replication to Azure.
To support the planned deployment, you will need to create additional resources in TD-Boracay.
Which of the following options should you create?
A. Replication Policy
B. Azure Storage Account
C. VNet Service Endpoint
D. Hyper-V site
E. Azure Recovery Services Vault
F. Azure ExpressRoute
A. Replication Policy
D. Hyper-V site
E. Azure Recovery Services Vault
Explanation:
Azure Virtual Machines is one of several types of on-demand, scalable computing resources that Azure offers. It gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks such as configuring, patching, and installing the software that runs on it.
Hyper-V is Microsoft’s hardware virtualization product. It lets you create and run a software version of a computer called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. Hyper-V runs each virtual machine in its own isolated space, which means you can run more than one virtual machine on the same hardware at the same time.
A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations.
A replication policy defines the settings for the retention history of recovery points. The policy also defines the frequency of app-consistent snapshots.
To set up disaster recovery of on-premises Hyper-V VMs to Azure, you should complete the following steps:
– Select your replication source and target – to prepare the infrastructure, you need to create a Recovery Services vault. After you create the vault, you can accomplish the protection goal.
– Set up the source and target replication environments, including the on-premises Site Recovery components – to set up the source environment, you create a Hyper-V site and add to that site the Hyper-V hosts containing the VMs that you want to replicate. The target environment is the subscription and the resource group in which the Azure VMs will be created after failover.
– Create a replication policy.
– Enable replication for a VM.
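The vault-creation step can be sketched with the Azure CLI. The resource group, vault name, and location below are assumptions; the Hyper-V site and replication policy are then configured under Site Recovery within that vault:

```shell
# Create a Recovery Services vault to hold the replication
# configuration and recovery points. Names are hypothetical.
az backup vault create \
  --resource-group td-boracay-rg \
  --name td-recovery-vault \
  --location southeastasia
```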
Hence, the correct answers are:
– Hyper-V site
– Azure Recovery Services Vault
– Replication Policy
Azure Storage Account is incorrect because before you can create an Azure file share, you need to create a storage account first. Instead of creating a storage account again, you should set up a Hyper-V site.
Azure ExpressRoute is incorrect because this service is simply used to establish a private connection between your on-premises data center or corporate network to your Azure cloud infrastructure. It does not have the capability to replicate the Hyper-V server to Azure.
VNet Service Endpoint is incorrect because this option will only remove public internet access to resources and allow traffic only from your virtual network. Remember that the main requirement is to replicate the Hyper-V server to Azure. Therefore, this option wouldn’t satisfy the requirement.
Your company has five branch offices and a Microsoft Entra ID to centrally manage all identities and application access.
You have been tasked with granting permission to local administrators to manage users and groups within their scope.
What should you do?
A. Create an administrative unit.
B. Assign a Microsoft Entra role.
C. Assign an Azure role.
D. Create management groups.
A. Create an administrative unit.
Explanation:
Microsoft Entra ID is a cloud-based identity and access management service that enables your employees to access external resources. Example resources include Microsoft 365, the Azure portal, and thousands of other SaaS applications.
Microsoft Entra ID also helps them access internal resources like apps on your corporate intranet, and any cloud apps developed for your own organization.
For more granular administrative control in Microsoft Entra ID, you can assign a Microsoft Entra role with a scope limited to one or more administrative units.
Administrative units limit a role’s permissions to any portion of your organization that you define. You could, for example, use administrative units to delegate the Helpdesk Administrator role to regional support specialists, allowing them to manage users only in the region for which they are responsible.
Hence, the correct answer is: Create an administrative unit.
The option that says: Assign a Microsoft Entra role is incorrect because if you assign an administrative role to a user that is not a member of an administrative unit, the scope of this role is within the directory.
The option that says: Create management groups is incorrect because a management group is just a container to organize your resources and subscriptions. This option won't help you grant permission to local administrators to manage users and groups.
The option that says: Assign an Azure role is incorrect because the requirement is to grant local administrators permission only in their respective offices. If you use an Azure role, the user will be able to manage other Azure resources. Therefore, you need to use administrative units so the administrators can only manage users in the region that they support.
Your company has a web app hosted in Azure Virtual Machine.
You plan to create a backup of TD-VM1 but the backup pre-checks displayed a warning state.
What could be the reason?
A. The Recovery Services vault lock type is read-only.
B. The TD-VM1 data disk is unattached.
C. The status of TD-VM1 is deallocated.
D. The latest VM Agent is not installed in TD-VM1
D. The latest VM Agent is not installed in TD-VM1
Explanation:
Azure Virtual Machines is an on-demand, scalable computing service with usage-based pricing. More broadly, a virtual machine behaves like a server: it is a computer within a computer that provides the user the same experience they would have on the host operating system itself. To protect your data, you can use Azure Backup to create recovery points that can be stored in geo-redundant Recovery Services vaults.
A Recovery Services vault is a management entity that stores recovery points created over time and provides an interface to perform backup-related operations. These operations include taking on-demand backups, performing restores, and creating backup policies.
Backup Pre-Checks, as the name implies, check the configuration of your VMs for issues that may affect backups and aggregate this information so that you can view it directly from the Recovery Services Vault dashboard. It also provides recommendations for corrective measures to ensure successful file-consistent or application-consistent backups, wherever applicable.
Backup Pre-Checks are performed as part of your Azure VMs’ scheduled backup operations and result in one of the following states:
– Passed: This state indicates that your VM's configuration is conducive to successful backups, and no corrective action needs to be taken.
– Warning: This state indicates one or more issues in the VM's configuration that might lead to backup failures and provides recommended steps to ensure successful backups. Not having the latest VM Agent installed, for example, can cause backups to fail intermittently and falls in this class of issues.
– Critical: This state indicates one or more critical issues in the VM's configuration that will lead to backup failures and provides required steps to ensure successful backups. A network issue caused by an update to the NSG rules of a VM, for example, will fail backups, as it prevents the VM from communicating with the Azure Backup service, and falls in this class of issues.
As stated above, the reason why backup pre-checks displayed a warning state is because of the VM agent. The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time.
If you have installed the agent manually or are deploying custom VM images, you will need to manually update them to include the new VM agent at image creation time. To check for the Azure VM Agent on your machine, open Task Manager and look for a process named WindowsAzureGuestAgent.exe.
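Alternatively, the agent's status and version can be read from the VM's instance view with the Azure CLI, without logging on to the machine. This is a sketch; the resource group name is a hypothetical placeholder:

```shell
# Query the VM agent status and version from the VM's instance view.
az vm get-instance-view \
  --resource-group td-rg \
  --name TD-VM1 \
  --query "instanceView.vmAgent.{status: statuses[0].displayStatus, version: vmAgentVersion}" \
  --output table
```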
Hence, the correct answer is: The latest VM Agent is not installed in TD-VM1.
The option that says: The Recovery Services vault lock type is read-only is incorrect because you can't create a backup if the configured lock type is read-only. If you attempt to back up a virtual machine with such a resource lock, the operation won't be performed, and you will be notified to remove the lock first.
The option that says: The TD-VM1 data disk is unattached is incorrect because you don’t need to attach a data disk to the virtual machine when creating a backup. To enable VM backup, you need to have a VM agent and Recovery Services vault.
The option that says: The status of TD-VM1 is deallocated is incorrect because you can still create a backup even if the status of your virtual machine is stopped (deallocated).
Your company eCommerce website is deployed in an Azure virtual machine named TD-BGC.
You created a backup of TD-BGC and implemented the following changes:
– Change the local admin password.
– Create and attach a new disk.
– Resize the virtual machine.
– Copy the log reports to the data disk.
You received an email that the admin restored TD-BGC using the replace existing configuration.
Which of the following options should you perform to bring back the changes in TD-BGC?
A. Create and attach a new disk.
B. Change the local admin password.
C. Copy the log reports to the data disk.
D. Resize the virtual machine.
C. Copy the log reports to the data disk.
Explanation:
Azure Backup is a cost-effective, secure, one-click backup solution that’s scalable based on your backup storage needs. The centralized management interface makes it easy to define backup policies and protect a wide range of enterprise workloads, including Azure Virtual Machines, SQL and SAP databases, and Azure file shares.
Azure Backup provides several ways to restore a VM:
– Create a new VM – quickly creates and gets a basic VM up and running from a restore point.
– Restore disk – restores a VM disk, which can then be used to create a new VM.
– Replace existing – restores a disk and uses it to replace a disk on the existing VM.
– Cross-Region (secondary region) – restores Azure VMs in the secondary region, which is an Azure paired region.
The restore configuration that is given in the scenario is the replace existing option. Azure Backup takes a snapshot of the existing VM before replacing the disk, and stores it in the staging location you specify. The existing disks connected to the VM are replaced with the selected restore point.
The snapshot is copied to the vault, and retained in accordance with the retention policy. After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren’t needed.
Since the VM was restored using the backup data, the restored disk won't have a copy of the log reports. To bring back the changes in the TD-BGC virtual machine, you will need to copy the log reports to the data disk again.
Hence, the correct answer is: Copy the log reports to the data disk.
The option that says: Change the local admin password is incorrect because the new password will not be overridden by the old password when using the restore VM option. Therefore, you can still use the updated password to connect to the machine via RDP.
The option that says: Create and attach a new disk is incorrect because the new disk does not contain the log reports. Instead of creating a new disk, you should attach the existing data disk that contains the log reports.
The option that says: Resize the virtual machine is incorrect because the only changes that will be retained after rolling back are the VM size and the account password.
Your company plans to store media assets in two Azure regions.
You are given the following requirements:
– Media assets must be stored in multiple availability zones.
– Media assets must be stored in multiple regions.
– Media assets must be readable in the primary and secondary regions.
Which of the following data redundancy options should you recommend?
A. Locally redundant storage
B. Zone-redundant storage
C. Geo-redundant storage
D. Read-access geo-redundant storage
D. Read-access geo-redundant storage
Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:
– Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
– Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, for applications requiring high availability.
– Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
– Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
Take note, one of the requirements states that the media assets must be readable in the primary and secondary regions. With geo-redundant storage, your media assets are stored in multiple availability zones and multiple regions, but read access is only available in the secondary region if you or Microsoft initiates a failover from the primary region to the secondary region.
In order to have read access in the primary and secondary region at all times without having the need to initiate a failover, you need to recommend Read-access geo-redundant storage.
Hence, the correct answer is: Read-access geo-redundant storage.
Locally redundant storage is incorrect because the media assets will only be stored in one physical location.
Zone-redundant storage is incorrect. It only satisfies one requirement which is to store the media assets in multiple availability zones. You still need to store your media assets in multiple regions which ZRS is unable to do.
Geo-redundant storage is incorrect because the requirement states that you need read access to the primary and secondary regions. With GRS, the data in the secondary region isn’t available for read access. You can only have read access in the secondary region if a failover from the primary region to the secondary region is initiated by you or Microsoft.
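Provisioning such an account can be sketched with the Azure CLI. The account name, resource group, and location below are placeholders; the Standard_RAGRS SKU selects read-access geo-redundant storage:

```shell
# Create a storage account with read-access geo-redundant replication,
# so data is readable in the secondary region without a failover.
az storage account create \
  --name tdmediaassets \
  --resource-group td-rg \
  --location southeastasia \
  --kind StorageV2 \
  --sku Standard_RAGRS
```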
Tutorials Dojo has a subscription named TDSub1 that contains the following resources:
[Image AZ104-D-17: table of resources in TDSub1, including the virtual machine TDVM1 in the South East Asia region]
TDVM1 needs to connect to a newly created virtual network named TDNET1 that is located in Japan West.
What should you do to connect TDVM1 to TDNET1?
Solution: You create a network interface in TD1 in the South East Asia region.
Does this meet the goal?
A. No
B. Yes
A. No
Explanation:
A network interface enables an Azure Virtual Machine to communicate with internet, Azure, and on-premises resources. When creating a virtual machine using the Azure portal, the portal creates one network interface with default settings for you.
You may instead choose to create network interfaces with custom settings and add one or more network interfaces to a virtual machine when you create it. You may also want to change default network interface settings for an existing network interface.
Remember these conditions and restrictions when it comes to network interfaces:
– A virtual machine can have multiple network interfaces attached but a network interface can only be attached to a single virtual machine.
– The network interface must be located in the same region and subscription as the virtual machine that it will be attached to.
– When you delete a virtual machine, the network interface attached to it will not be deleted.
– In order to detach a network interface from a virtual machine, you must shut down the virtual machine first.
– By default, the first network interface attached to a VM is the primary network interface. All other network interfaces in the VM are secondary network interfaces.
The solution proposed in the question is incorrect because the virtual network is not located in the same region as TDVM1. Take note that a virtual machine, its virtual network, and its network interface must be in the same region or location.
You need to first redeploy TDVM1 from the South East Asia region to the Japan West region and then create and attach the network interface to TDVM1 in the Japan West region.
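Once TDVM1 exists in Japan West, creating a network interface in the same region and virtual network could look like the following Azure CLI sketch (the resource group, NIC name, and subnet name are assumptions):

```shell
# Create a NIC in Japan West, attached to the TDNET1 virtual network,
# so it can be attached to a VM deployed in the same region.
az network nic create \
  --resource-group td-rg \
  --name tdvm1-nic \
  --location japanwest \
  --vnet-name TDNET1 \
  --subnet default
```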
Hence, the correct answer is: No.
You have the following public load balancers deployed in Davao-Subscription1.
TD1 - Standard
TD2 - Basic
You provisioned two groups of virtual machines containing five virtual machines each, where the traffic must be load balanced to ensure it is evenly distributed.
Which of the following health probes are not available for TD2?
A. HTTP
B. TCP
C. RDP
D. HTTPS
D. HTTPS
Explanation:
Azure Load Balancer provides a higher level of availability and scale by spreading incoming requests across virtual machines (VMs). A private load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the load-balanced frontend IP addresses of a virtual network; frontend IP addresses and virtual networks are never directly exposed to an internet endpoint, so internal line-of-business applications can run in Azure and be accessed from within Azure or from on-premises resources.
Remember that although cheaper, load balancers with the basic SKU have limited features compared to standard load balancers. Basic load balancers are only useful for testing in development environments; for production workloads, you need to upgrade a basic load balancer to a standard load balancer to fully utilize the features of Azure Load Balancer.
Take note that the health probes of a basic load balancer support only the HTTP and TCP protocols.
Hence, the correct answer is: HTTPS.
HTTP and TCP are incorrect because these protocols are supported by health probes on a basic load balancer.
RDP is incorrect because this protocol is not supported by Azure Load Balancer.
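On the standard load balancer TD1, an HTTPS health probe can be created as in the sketch below (the resource group, probe name, and path are placeholders). The same command against the basic load balancer TD2 would be rejected, since basic-SKU probes accept only Http and Tcp:

```shell
# Add an HTTPS health probe to the standard-SKU load balancer TD1.
az network lb probe create \
  --resource-group td-rg \
  --lb-name TD1 \
  --name https-health-probe \
  --protocol Https \
  --port 443 \
  --path /health
```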
You have an Azure subscription that contains the following storage accounts:
TD1 - general-purpose v1 - Locally redundant storage
TD2 - general-purpose v1 - Geo redundant storage
There is a compliance requirement wherein the data in TD1 and TD2 must be available if a single availability zone in a region fails. The solution must minimize costs and administrative effort.
What should you do first?
A. Upgrade TD1 and TD2 to general-purpose v2
B. Upgrade TD1 and TD2 to zone-redundant storage
C. Configure lifecycle policy
D. Upgrade TD1 to geo-redundant storage
A. Upgrade TD1 and TD2 to general-purpose v2
Explanation:
Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:
– Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
– Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region, for applications requiring high availability.
– Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
– Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.
The main requirement is that you need to ensure the data in TD1 and TD2 are available if a single availability zone fails while minimizing costs and administrative effort.
Between the redundancy options, zone-redundant storage fits the requirement of protecting your data by copying the data synchronously across three Azure availability zones. So even if a single availability zone fails, you still have two availability zones that are available.
Remember, ZRS is not a supported redundancy option under general-purpose v1. The first thing you need to do is upgrade your storage accounts to general-purpose v2 and then change the replication type to ZRS.
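As a sketch with the Azure CLI (account and resource group names are assumptions), the upgrade and the subsequent redundancy change are two separate update operations:

```shell
# Step 1: upgrade the general-purpose v1 account to general-purpose v2.
az storage account update \
  --resource-group td-rg \
  --name td1 \
  --set kind=StorageV2 --access-tier=Hot

# Step 2: change the replication type to zone-redundant storage.
# (Customer-initiated conversion to ZRS is region-dependent.)
az storage account update \
  --resource-group td-rg \
  --name td1 \
  --sku Standard_ZRS
```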
Hence, the correct answer is: Upgrade TD1 and TD2 to general-purpose v2.
The option that says: Upgrade TD1 and TD2 to zone-redundant storage is incorrect because zone-redundant storage is not supported under general-purpose v1.
The option that says: Upgrade TD1 to geo-redundant storage is incorrect because one of the requirements is to minimize cost. With ZRS, you have already satisfied the data availability requirement.
The option that says: Configure lifecycle policy is incorrect because this is simply a rule-based policy that you can use to transition blob data to the appropriate access tiers or to expire data at the end of the data lifecycle.
Your organization has a domain named tutorialsdojo.com.
You want to host your records in Microsoft Azure.
Which three actions should you perform?
A. Copy the Azure DNS NS records
B. Copy the Azure DNS A records
C. Create an Azure private DNS zone
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar
F. Update the Azure A records to your domain registrar
A. Copy the Azure DNS NS records
D. Create an Azure public DNS zone
E. Update the Azure NS records to your domain registrar
Explanation:
Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services.
Using custom domain names helps you to tailor your virtual network architecture to best suit your organization’s needs. It provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.
You can use Azure DNS to host your DNS domain and manage your DNS records.
Since you own tutorialsdojo.com through a domain name registrar, you can create a zone named tutorialsdojo.com in Azure DNS. As the domain owner, your registrar allows you to configure the name server (NS) records for your domain, so internet users around the world are directed to your domain in your Azure DNS zone whenever they try to resolve tutorialsdojo.com.
The steps in registering your Azure public DNS records are:
– Create your Azure public DNS zone.
– Retrieve the name servers – Azure DNS assigns name servers from a pool each time a zone is created.
– Delegate the domain – Once the DNS zone is created and you have the name servers, update the parent domain at your registrar with the Azure DNS name servers.
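As a rough sketch, the first two steps might look like this with the Azure CLI (the resource group name here is a placeholder, and the commands assume an active Azure subscription):

```shell
# Create the public DNS zone for the domain
# (td-dns-rg is a hypothetical resource group name)
az network dns zone create \
  --resource-group td-dns-rg \
  --name tutorialsdojo.com

# Retrieve the Azure DNS name servers; copy these into the
# NS records at your domain registrar to delegate the domain
az network dns zone show \
  --resource-group td-dns-rg \
  --name tutorialsdojo.com \
  --query nameServers --output tsv
```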
Hence, the correct answers are:
– Create an Azure public DNS zone
– Update the Azure NS records to your domain registrar
– Copy the Azure DNS NS records
The options that say: Copy the Azure DNS A records and Update the Azure A records to your domain registrar are incorrect because you need to copy the name server (NS) records instead of the A records. An A record is a type of DNS record that points a domain to an IP address.
The option that says: Create an Azure private DNS zone is incorrect because this simply manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. The requirement states that the users must be able to access tutorialsdojo.com via the internet. You need to deploy an Azure public DNS zone instead.
You plan to deploy the public IP addresses shown in the following table in your Azure subscription:
Name | SKU | Assignment
TD1 | Basic | Static
TD2 | Basic | Dynamic
TD3 | Standard | Static
TD4 | Standard | Dynamic
You need to associate a public IP address to a public Azure load balancer with an SKU of standard.
Which of the following IP addresses can you use?
A. TD1
B. TD3
C. TD3 and TD4
D. TD1 and TD2
B. TD3
Explanation:
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual network. These connections are accomplished by translating their private IP addresses to public IP addresses. Public Load Balancers are used to load balance Internet traffic to your VMs.
A public IP associated with a load balancer serves as an Internet-facing frontend IP configuration. The frontend is used to access resources in the backend pool. The frontend IP can be used for members of the backend pool to egress to the Internet.
Remember that the SKU of a load balancer and the SKU of the public IP address you associate with it must match. If you have a load balancer with an SKU of standard, you must also provision a public IP address with an SKU of standard.
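As an illustration, a matching pair could be created like this with the Azure CLI (the resource group and resource names are placeholders):

```shell
# Create a standard-SKU public IP address
# (the standard SKU requires static assignment)
az network public-ip create \
  --resource-group td-rg \
  --name td3-pip \
  --sku Standard \
  --allocation-method Static

# Create a standard-SKU public load balancer
# that uses the matching standard public IP
az network lb create \
  --resource-group td-rg \
  --name td-lb \
  --sku Standard \
  --public-ip-address td3-pip
```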
Hence, the correct answer is: TD3.
The options that say: TD1 and TD1 and TD2 are incorrect because TD1 and TD2 both have an SKU of basic. You must provision a public IP address with an SKU of standard so you can associate it with a standard public load balancer.
The option that says: TD3 and TD4 is incorrect because you can only create a standard public IP address with an assignment of static.
For each of the following items, choose Yes if the statement is true or choose No if the statement is false. Take note that each correct item is worth one point.
1. You can rehydrate blob data in the archive tier instantly.
2. You can rehydrate blob data in the archive tier without costs.
3. You can access your blob data that is in the archive tier.

1. No
2. No
3. No
Explanation:
Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:
Hot – Optimized for storing data that is accessed frequently.
Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.
Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.
To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.
A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.
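As a sketch, rehydrating an archived blob with Set Blob Tier via the Azure CLI might look like this (the account, container, and blob names are placeholders):

```shell
# Move an archived blob back to the hot tier; this starts a
# rehydration operation that can take several hours to complete
az storage blob set-tier \
  --account-name mystorageacct \
  --container-name data \
  --name archived-file.dat \
  --tier Hot \
  --rehydrate-priority Standard
```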
The statement that says: You can rehydrate blob data in the archive tier without costs is incorrect. You are billed for data read transactions and data retrieval size (per GB).
The statement that says: You can rehydrate blob data in the archive tier instantly is incorrect. Rehydrating a blob from the archive tier can take several hours to complete.
The statement that says: You can access your blob data that is in the archive tier is incorrect because blob data stored in the archive tier is considered offline and can't be read or modified.
You deployed an Ubuntu server using an Azure virtual machine.
You need to monitor the system performance metrics and log events.
Which of the following options would you use?
A. Azure Performance Diagnostics VM Extension
B. Boot diagnostics
C. Connection monitor
D. Linux Diagnostic Extension
D. Linux Diagnostic Extension
Explanation:
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. It collects guest metrics into Azure Monitor Metrics and sends guest logs and metrics to Azure storage for archiving.
Azure Performance Diagnostics VM Extension helps collect performance diagnostic data from Windows VMs. The extension performs analysis and provides a report of findings and recommendations to identify and resolve performance issues on the virtual machine.
The Linux Diagnostic Extension will help you monitor the health of a Linux VM running on Microsoft Azure. It has the following capabilities:
– Collects system performance metrics from the VM and stores them in a specific table in a designated storage account.
– Retrieves log events from syslog and stores them in a specific table in the designated storage account.
– Enables users to customize the data metrics that are collected and uploaded.
– Enables users to customize the syslog facilities and severity levels of events that are collected and uploaded.
– Enables users to upload specified log files to a designated storage table.
– Supports sending metrics and log events to arbitrary EventHub endpoints and JSON-formatted blobs in the designated storage account.
With this extension, you can now monitor the system performance metrics and log events of the virtual machine.
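As a sketch, the extension could be enabled with the Azure CLI like this (the resource group, VM name, and the two settings files are placeholders; the public settings define which performance metrics and syslog facilities to collect, while the protected settings hold the storage account SAS token):

```shell
# Install the Linux Diagnostic extension (LAD) on the Ubuntu VM
az vm extension set \
  --resource-group my-rg \
  --vm-name my-ubuntu-vm \
  --publisher Microsoft.Azure.Diagnostics \
  --name LinuxDiagnostic \
  --version 4.0 \
  --settings lad_public_settings.json \
  --protected-settings lad_protected_settings.json
```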
Hence, the correct answer is: Linux Diagnostic Extension.
Azure Performance Diagnostics VM Extension is incorrect because this extension only collects performance diagnostic data from Windows VMs.
Boot diagnostics is incorrect because this feature is primarily used to diagnose VM boot failures and not for monitoring the system performance metrics and log events.
Connection monitor is incorrect because this is simply used for end-to-end connection monitoring.
You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.
Solution: Configure a packet capture in Azure Network Watcher.
Does the solution meet the goal?
A. Yes
B. No
A. Yes
Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.
With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.
The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.
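As a sketch, a 3600-second capture session could be started with the Azure CLI (the resource group, VM, and storage account names are placeholders; the VM must already have the Network Watcher extension installed):

```shell
# Start a packet capture on the VM, limited to 3600 seconds (1 hour);
# the .cap output is saved to the specified storage account
az network watcher packet-capture create \
  --resource-group my-rg \
  --vm my-vm \
  --name td-capture \
  --time-limit 3600 \
  --storage-account mystorageacct
```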
Hence, the correct answer is: Yes.
You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.
Solution: Create a connection monitor in Azure Network Watcher.
Does the solution meet the goal?
A. Yes
B. No
B. No
Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.
With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.
The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.
The solution provided is to set up a Connection Monitor in Azure Network Watcher. Connection Monitor’s primary use case is to track connectivity between your on-premises setups and the Azure VMs/virtual machine scale sets that host your cloud application. You cannot use this feature to capture packets to and from your virtual machines in a virtual network because it is not supported.
Hence, the correct answer is: No.
You plan to record all sessions to track traffic to and from your virtual machines for a period of 3600 seconds.
Solution: Use IP flow verify in Azure Network Watcher.
Does the solution meet the goal?
A. Yes
B. No
B. No
Explanation:
Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Network Watcher is designed to monitor and repair the network health of IaaS (Infrastructure-as-a-Service) products including Virtual Machines (VM), Virtual Networks, Application Gateways, Load balancers, etc.
With Packet Capture, you can create packet capture sessions to track traffic to and from a virtual machine. It also helps diagnose network anomalies both reactively and proactively. But in order to use this feature, the virtual machine must have the Azure Network Watcher extension.
The packet capture output (.cap) file can be saved in a storage account and/or on the target virtual machine. You can also filter the protocol, IP addresses, and ports when adding a packet capture. Keep in mind that the maximum duration of capturing sessions is 5 hours.
The provided solution is to use IP flow verify in Azure Network Watcher. The main use case of IP flow verify is to determine whether a packet to or from a virtual machine is allowed or denied based on 5-tuple information and not to capture packets from your virtual machines for a period of 3600 seconds or 1 hour.
Hence, the correct answer is: No.
A company deployed a Grafana image in Azure Container Apps with the following configurations:
Resource Group: tdrg-grafana
Region: Canada Central
Zone Redundancy: Disabled
Virtual Network: Default
IP Restrictions: Allow
The container’s public IP address was provided to development teams in the East US region to allow users access to the dashboard. However, you received a report that users can’t access the application.
Which of the following options allows users to access Grafana with the least amount of configuration?
A. Disable IP Restrictions.
B. Move the container app to the East US Region.
C. Configure ingress to generate a new endpoint.
D. Add a custom domain and certificate.
C. Configure ingress to generate a new endpoint.
Explanation:
With Azure Container Apps ingress, you can make your container application accessible to the public internet, VNET, or other container apps within your environment. This eliminates the need to create an Azure Load Balancer, public IP address, or any other Azure resources to handle incoming HTTPS requests. Each container app can have unique ingress configurations. For instance, one container app can be publicly accessible while another can only be reached within the Container Apps environment.
The problem in the given scenario is that users are accessing the public IP address even though ingress was not enabled when the container app was created. When you configure ingress with a target port and save it, the app generates a new endpoint based on the ingress traffic type you selected. When you then access the application URL, requests are routed to the target port of the container image.
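As a sketch, enabling ingress with the Azure CLI might look like this (the container app name td-grafana is hypothetical; Grafana listens on port 3000 by default):

```shell
# Enable external ingress so the app gets a public HTTPS endpoint
az containerapp ingress enable \
  --resource-group tdrg-grafana \
  --name td-grafana \
  --type external \
  --target-port 3000

# Retrieve the generated application URL (FQDN) to share with users
az containerapp show \
  --resource-group tdrg-grafana \
  --name td-grafana \
  --query properties.configuration.ingress.fqdn --output tsv
```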
Hence, the correct answer is: Configure ingress to generate a new endpoint.
The option that says: Move the container app to the East US Region is incorrect because you can’t move a container app to a different Region.
The option that says: Disable IP Restrictions is incorrect because this still won't help users access the Grafana app. The issue is not that traffic from source IPs is being denied; you only need to enable ingress and set the target port.
The option that says: Add a custom domain and certificate is incorrect because even though you added a custom domain name, you still won’t be able to access the application since additional configurations must be done to allow VNET-scope ingress. Therefore, the quickest way and least amount of configurations would be to enable ingress and get the application URL.
You have been tasked with replicating the current state of your resources in order to automate future deployments when a new feature needs to be added to the application.
Which of the following should you do?
A. Capture an image of a VM.
B. Use the resource group export template.
C. Redeploy and reapply a VM.
D. Create a VM with preset configurations.
B. Use the resource group export template.
Explanation:
Azure Resource Manager (ARM) templates are a feature of Microsoft Azure that allows you to provision, manage, and delete Azure resources using declarative syntax. These templates can be used to deploy and manage resources such as virtual machines, storage accounts, and virtual networks in a consistent and repeatable manner. To deploy a template, you can use the Azure portal, Azure CLI, or Azure PowerShell.
In this scenario, you need to use an ARM export template to replicate the current state of your resources. If you ever need to redeploy all your resources, you can reuse the template instead of manually recreating each resource. You can export a single service or an entire resource group. Based on the given requirements, you just need to capture all resources and then export the resource group as a template.
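As a sketch, exporting and redeploying with the Azure CLI might look like this (the resource group names are placeholders):

```shell
# Export the current state of a resource group as an ARM template
az group export --name my-rg > template.json

# Redeploy the captured resources into another resource group later
az deployment group create \
  --resource-group my-new-rg \
  --template-file template.json
```

Note that some resource types can't be exported, so the generated template may need manual adjustments before redeployment.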
Hence, the correct answer is: Use the resource group export template.
The option that says: Capture an image of a VM is incorrect because this just creates a snapshot of the virtual machine configurations. Take note that you need to capture the current state of all resources. Therefore, export template will help us ease the creation of resources.
The option that says: Redeploy and reapply a VM is incorrect because redeploying the VM just migrates it to a new Azure host, while reapply is used to resolve a VM that is stuck in a failed state. Neither feature helps capture the current state of your resources.
The option that says: Create a VM with preset configurations is incorrect because this only helps you choose a VM based on your workload type and environment.
You have the following storage accounts in your Azure subscription.
Name | Account kind | Storage service
mystorage1 | General-purpose V1 | File
mystorage2 | BlobStorage | Blob
mystorage3 | General-purpose V2 | File, Table
mystorage4 | General-purpose V2 | Queue
There is a requirement to export the data from your subscription using the Azure Import/Export service.
Which Azure Storage account can you use to export the data?
A. mystorage2
B. mystorage1
C. mystorage3
D. mystorage4
A. mystorage2
Explanation:
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.
Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:
Data migration to the cloud: Move large amounts of data to Azure quickly and cost-effectively.
Content distribution: Quickly send data to your customer sites.
Backup: Take backups of your on-premises data to store in Azure Storage.
Data recovery: Recover a large amount of data stored in the storage and have it delivered to your on-premises location.
Azure Import/Export service allows data transfer into Azure Blobs and Azure Files by creating jobs. Use the Azure portal or Azure Resource Manager REST API to create jobs. Each job is associated with a single storage account. This service only supports export of Azure Blobs. Export of Azure files is not supported.
The jobs can be import or export jobs. An import job allows you to import data into Azure Blobs or Azure files, whereas the export job allows data to be exported from Azure Blobs. For an import job, you ship drives containing your data. When you create an export job, you ship empty drives to an Azure datacenter. In each case, you can ship up to 10 disk drives per job.
Hence, the correct answer is: mystorage2.
mystorage1 is incorrect because an export job does not support Azure Files. The Azure Import/Export service only supports export of Azure Blobs.
mystorage3 and mystorage4 are incorrect because the Queue and Table storage services are simply not supported by the Azure Import/Export service.
Your company has an Azure subscription that contains a storage account named tdstorageaccount1 and a virtual network named TDVNET1 with an address space of 192.168.0.0/16.
You have a user that needs to connect to the storage account from her workstation which has a public IP address of 131.107.1.23.
You need to ensure that the user is the only one who can access tdstorageaccount1.
Which two actions should you perform? Each correct answer presents part of the solution.
A. From the networking settings, enable TDVnet1 under Firewalls and virtual networks.
B. From the networking settings, select service endpoint under Firewalls and virtual networks.
C. From the networking settings, select “Allow trusted Microsoft services to access this storage account” under Firewalls and virtual networks.
D. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.
D. Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
E. Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.
Explanation:
An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS. Data in your Azure storage account is durable and highly available, secure, and massively scalable.
To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including Internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific VNets. You can also configure rules to grant access to traffic from selected public Internet IP address ranges, enabling connections from specific Internet or on-premises clients. This configuration enables you to build a secure network boundary for your applications.
To whitelist a public IP address, you must:
- Go to the storage account you want to secure.
- Select Networking from the settings menu.
- Under Firewalls and virtual networks, select Selected networks.
- Under Firewall, add the public IP address, then save.
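As a sketch, the same configuration could be applied with the Azure CLI (the resource group name is a placeholder):

```shell
# Deny traffic from all networks by default
# (equivalent to choosing Selected networks in the portal)
az storage account update \
  --resource-group my-rg \
  --name tdstorageaccount1 \
  --default-action Deny

# Allow only the user's public IP address through the firewall
az storage account network-rule add \
  --resource-group my-rg \
  --account-name tdstorageaccount1 \
  --ip-address 131.107.1.23
```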
Hence, the following statements are correct:
– Set the Allow access from field to Selected networks under the Firewalls and virtual networks blade of tdstorageaccount1.
– Add the 131.107.1.23 IP address under Firewalls and virtual networks blade of tdstorageaccount1.
The statement that says: From the networking settings, enable TDVnet1 under Firewalls and virtual networks is incorrect because allowing TDVnet1 will not let the user connect to tdstorageaccount1. The requirement states that the user's workstation must have access to tdstorageaccount1, and the workstation connects from a public IP address on the internet, not from within TDVnet1.
The statement that says: From the networking settings, select service endpoint under Firewalls and virtual networks is incorrect because it only allows you to create network rules that allow traffic only from selected VNets and subnets, which creates a secure network boundary for their data. Service endpoints only extend your VNet private address space and identity to the Azure services, over a direct connection.
The statement that says: From the networking settings, select Allow trusted Microsoft services to access this storage account under Firewalls and virtual networks is incorrect because this simply grants a subset of trusted Azure services access to the storage account, while maintaining network rules for other apps. These trusted services will then use strong authentication to securely connect to your storage account but won’t restrict access to a particular subnetwork or IP address.
Your company is currently hosting a mission-critical application in an Azure virtual machine that resides in a virtual network named TDVnet1. You plan to use Azure ExpressRoute to allow the web applications to connect to the on-premises network.
Due to compliance requirements, you need to ensure that in the event your ExpressRoute fails, the connectivity between TDVnet1 and your on-premises network will remain available.
The solution must utilize a site-to-site VPN between TDVnet1 and the on-premises network. The solution should also be cost-effective.
Which three actions should you implement? Each correct answer presents part of the solution.
A. Configure a local network gateway.
B. Configure a connection.
C. Configure a VPN gateway with Basic as its SKU.
D. Configure a gateway subnet.
E. Configure a VPN gateway with VpnGw1 as its SKU.
A. Configure a local network gateway.
B. Configure a connection.
E. Configure a VPN gateway with VpnGw1 as its SKU.
Explanation:
A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You can also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Each virtual network can have only one VPN gateway. However, you can create multiple connections to the same VPN gateway. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.
Configuring Site-to-Site VPN and ExpressRoute coexisting connections has several advantages:
– You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute.
– Alternatively, you can use Site-to-Site VPNs to connect to sites that are not connected through ExpressRoute.
To create a site-to-site connection, you need to do the following:
– Provision a virtual network
– Provision a VPN gateway
– Provision a local network gateway
– Provision a VPN connection
– Verify the connection
– Connect to a virtual machine
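As a sketch, the three correct actions could be performed with the Azure CLI (the resource names, the on-premises VPN device IP, the address prefixes, and the shared key are all placeholders):

```shell
# 1. Create the VPN gateway in TDVnet1 with the VpnGw1 SKU
az network public-ip create \
  --resource-group my-rg \
  --name tdvnet1-gw-ip \
  --sku Standard

az network vnet-gateway create \
  --resource-group my-rg \
  --name tdvnet1-gw \
  --vnet TDVnet1 \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --public-ip-addresses tdvnet1-gw-ip

# 2. Create the local network gateway for the on-premises VPN device
az network local-gateway create \
  --resource-group my-rg \
  --name onprem-gw \
  --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 10.0.0.0/24

# 3. Create the site-to-site VPN connection
az network vpn-connection create \
  --resource-group my-rg \
  --name tdvnet1-to-onprem \
  --vnet-gateway1 tdvnet1-gw \
  --local-gateway2 onprem-gw \
  --shared-key "ReplaceWithASharedKey"
```

Keep in mind that creating the VPN gateway can take 30 minutes or longer to complete.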
Take note that since you have already deployed an ExpressRoute, you do not need to create a virtual network and gateway subnet as these are prerequisites in creating an ExpressRoute.
Hence, the correct answers are:
– Configure a VPN gateway with a VpnGw1 SKU.
– Configure a local network gateway.
– Configure a connection.
The option that says: Configure a gateway subnet is incorrect. As you already have an ExpressRoute connecting to your on-premises network, this means that a gateway subnet is already provisioned.
The option that says: Configure a VPN gateway with Basic as its SKU is incorrect. Although one of the requirements is to minimize costs, the coexisting connection for ExpressRoute and site-to-site VPN connection does not support a Basic SKU. The bare minimum for a coexisting connection is VpnGw1.