(AZ-204 topic) Develop Azure Compute Solutions Flashcards
Test takers will be expected to develop solutions using Azure Virtual Machines, Azure Container Instances, and Azure Container Registry, in addition to knowing how to deploy web applications to Azure App Service and develop Azure Functions. Questions for this domain comprise 25% of the total questions for this exam. (43 cards)
Your company wants to move its website to Azure. You currently host your website in a Docker container that sees high volume during peak business hours, causing CPU spikes, and it has been set up to fail over to a local node should there be an issue. You’ve been instructed to shift the website to the cloud with as little change as possible while also keeping your website secured, resilient, and with costs at a minimum. Which option would be the best solution?
- Upload your container to Azure Container Registry & deploy a new Azure Web App Service at the Standard tier with auto-scaling
- Upload your container to Azure Container Registry & deploy to Azure Container Instances with Auto-Scaling set up
- Build an Azure Virtual Machine that runs Docker in a public-facing virtual network. Move your container to the Virtual Machine and set your Virtual Machine to scale under high CPU conditions
- Build a Virtual Machine Scale Set that runs Docker in a public-facing virtual network. Configure your scale set to load balance and increase the number of nodes based on CPU load.
-Upload your container to Azure Container Registry & deploy a new Azure Web App Service at the Standard tier with auto-scaling
Azure App Service at the Standard tier lets you run, secure, and provide high availability for your web application with minimal effort.
The ordering system for your company is getting an upgrade which will update a separate customer application whenever an order is completed. The order system processes at most 1000 orders per day, and the application is built using Azure Functions. What is the most efficient and economical way for the ordering system to notify the application when an order is complete?
- Use Cosmos DB for the data and use the built-in event notification service.
- Poll the order database from the application using a timer trigger to check if an order has been completed.
- Use an Azure Event Hub to collect and manage the order completion events. Then, build a pipeline to send the data to the application.
- Use a webhook to an Azure Function which can update the application as the order is completed.
-Use a webhook to an Azure Function which can update the application as the order is completed.
Webhooks are great for passive events, where you don’t know when the event might happen. No polling is necessary, and as such, it is efficient and “cheap”.
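As a rough sketch (in-process C# model; the function name, route, and logic are illustrative, not part of the question), such a webhook receiver is just an HTTP-triggered function:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderCompletedWebhook
{
    [FunctionName("OrderCompletedWebhook")]
    public static IActionResult Run(
        // The ordering system POSTs here only when an order completes, so no polling is needed.
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders/completed")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Order completion webhook received.");
        // Update the customer application here (e.g. push a notification or call its API).
        return new OkResult();
    }
}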
Your company has developed an application that needs to be able to accept, store, and process images. This application utilizes Azure App Services to host the web app, utilizes OAuth for authentication, and uses a General-Purpose v2 Blob Storage account for the images. You’ve been asked to ensure images uploaded are processed to create a better experience for people viewing from mobile devices by converting them to more manageable sizes & formats. The process should only run when new images are uploaded or updated. What is the best method to achieve this result?
- Use Azure Storage Blob Compression to process images as they come in
- Create a Function that is triggered whenever an upload request comes through the webapp, catching the image before it lands in the Blob Storage Account
- Build an Event Hub trigger event that kicks off a Function that will process the images.
- Build an Azure Function that uses a Blob Storage Trigger for any changes and runs whenever it detects new images or when an image is updated.
-Build an Azure Function that uses a Blob Storage Trigger for any changes and runs whenever it detects new images or when an image is updated.
The Blob Storage trigger watches the container for new or changed blobs, so the function runs only when images are added or updated, which is simple and efficient.
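A minimal sketch of such a function (in-process C#; the container name "images" and the processing step are assumptions, not from the question):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessUploadedImage
{
    [FunctionName("ProcessUploadedImage")]
    public static void Run(
        // Fires whenever a blob in the "images" container is added or updated.
        [BlobTrigger("images/{name}")] Stream image,
        string name,
        ILogger log)
    {
        // Convert/resize the image here into mobile-friendly sizes and formats.
        log.LogInformation($"Processing image {name} ({image.Length} bytes).");
    }
}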
Which framework does Azure Durable Functions use?
- Azure Serverless Framework
- .NET Framework
- Azure API Management
- Durable Task Framework
-Durable Task Framework
Durable Functions are built on the Durable Task Framework, which provides durable orchestration and persists state through event sourcing.
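For context, a minimal orchestrator on that framework looks roughly like this (in-process C#; the "SayHello" activity function is assumed to exist elsewhere in the function app):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class HelloSequence
{
    [FunctionName("HelloSequence")]
    public static async Task<List<string>> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each activity result is checkpointed by the Durable Task Framework (event sourcing),
        // so the orchestration replays reliably after a restart.
        var outputs = new List<string>();
        outputs.Add(await context.CallActivityAsync<string>("SayHello", "Tokyo"));
        outputs.Add(await context.CallActivityAsync<string>("SayHello", "Seattle"));
        return outputs;
    }
}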
Which scenario is best suited for using Azure Container Instances to host your application?
- A legacy application that needs to run on a specific version of Windows Server.
- An application that is expected to scale and grow rapidly.
- An application that is being tested for a small user group in a single region.
- An application that requires native TLS support for the public Internet.
-An application that is being tested for a small user group in a single region.
ACI is well suited to simple, small-scale workloads such as testing an application for a small user group. ACI provides container groups: collections of containers scheduled on the same host machine that share a lifecycle, local network, and storage volumes.
Your company has asked you to make an update to one of your ARM templates that deploys an environment following your security compliance standards. You’ve been tasked with updating the SKUs that can be used for your Virtual Machines to include Standard_D1_v2 and Standard_E2_v3. Under which element can you accomplish this?
- Under the “Functions” element
- Under the “Variables” element
- Under the “Parameters” element
- Under the “Outputs” element
-Under the “Parameters” element
The parameters element is where you can define an allowedValues array that restricts which values may be used in your deployments.
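A sketch of what that could look like in the template's parameters section (the parameter name vmSize is an assumption; the allowedValues array is standard ARM template syntax):

"parameters": {
  "vmSize": {
    "type": "string",
    "defaultValue": "Standard_D1_v2",
    "allowedValues": [
      "Standard_D1_v2",
      "Standard_E2_v3"
    ]
  }
}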
When creating a container registry, what Azure CLI command can be used to initiate the process?
- az registry create
- az registry new
- az acr new
- az acr create
-az acr create
This will create a new Azure Container Registry.
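For example (registry and resource group names are placeholders):

az acr create --resource-group myResourceGroup --name myContainerRegistry --sku Basic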
How would you retrieve the ARM template for an existing service in Azure, in order to reuse and automate it?
- Lodge a support ticket with Azure Support to have the template generated. This requires a Standard support plan or higher.
- Use the PowerShell cmdlet Export-AzARMTemplate.
- ARM templates can only be retrieved when the resource is created.
- In the Azure Portal use the “Export Template” option.
-In the Azure Portal use the “Export Template” option.
In the “Automation” section you can export the ARM template to exactly duplicate the resource.
There have been some concerns in your company about the security of the Azure Web Apps you are using. Your development manager has asked you to ensure traffic to and from the Web Apps is secure. What is the best way to do this?
- Use a system-assigned managed identity to hide any credentials passing through the network.
- Install an SSL certificate on the App Service itself to encrypt all the web traffic.
- Use Azure Key Vault to protect the database connection and encrypt the certificate credentials.
- Use an App Service deployment slot to redirect traffic through a secure zone.
-Install an SSL certificate on the App Service itself to encrypt all the web traffic.
An SSL certificate is used to encrypt the data passing over the Internet. It ensures that traffic to and from a Web App is secure.
Which of these is not a required element on an ARM template?
- resources
- $schema
- variables
- contentVersion
-variables
You don’t have to include variables in an ARM template. It is okay, but not very dynamic, to specify everything directly.
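A minimal valid template therefore only needs the required elements, for example:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": []
}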
You have been asked to deploy a static text file asynchronously to a Web App called acg204 in resource group wizardsRG. Fill in the two missing values in the Azure CLI command.
az webapp _______ --resource-group wizardsRG --name acg204 --src-path SourcePath --type static --async _______
- deploy, true
- deploy, IsAsync
- deployment, IsAsync
- deployment, true
-deploy, true
‘deploy’ deploys a provided artifact to Azure Web Apps. Valid values for --async are ‘true’ and ‘false’.
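Assembled with the values from the question, the full command looks like this:

az webapp deploy --resource-group wizardsRG --name acg204 --src-path SourcePath --type static --async true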
Which properties do you get when using a Windows Azure Container Instance (ACI) for your application? (Choose 3.)
- A public IP address
- Virtual Network deployment.
- Fully qualified domain name (FQDN)
- Access to the virtual machine running the container
- Greater security for customer data
- Integration with Docker Hub and Azure Container Registry
-A public IP address
This is the IP address you can access the container on over the Internet.
-Fully qualified domain name (FQDN)
Your container will get a default FQDN, or you can set up your own using DNS.
-Integration with Docker Hub and Azure Container Registry
You can create container instances directly from Docker Hub or ACR. Neat.
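As an illustration (the names, sample image, and DNS label below are placeholders), one command gives you both the public IP and the FQDN:

az container create --resource-group myResourceGroup --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo-12345 --ports 80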
Which of the following are valid Azure Function Triggers? (Choose 3.)
- IoT
- HTTP
- Service Bus
- Webhook
- App Service
- JavaScript
-HTTP
A HTTP Trigger is a basic and simple trigger for your Azure Function.
-Service Bus
Use the Service Bus trigger to respond to messages from a Service Bus queue or topic. I like buses.
-Webhook
If an external system supports webhooks, it can be configured to call an Azure Functions Webhook using HTTP and pass on relevant data.
Which of the following languages is NOT supported by Durable Functions?
- PowerShell
- Java
- JavaScript
- F#
-Java
Currently, Durable Functions only supports C#, JavaScript, Python, F#, and PowerShell. More languages will be supported over time, but it currently does not support Java.
The new infrastructure you are designing for your new airbending service is using Azure Functions as part of the architecture. The functions will work in conjunction with an App Service hosted on an App Service Plan that runs close to computing capacity. The functions are expected to have minimal use, as they perform a critical but infrequent maintenance task. What is the most cost-effective service to host these Functions on?
- Create a new App Service Plan only for the Functions.
- Use a consumption model.
- Scale up the existing App Service Plan and use that.
- Use the existing App Service Plan.
-Use a consumption model.
The first 1 million function requests every month are free on the consumption model.
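A sketch of creating such a function app on the Consumption plan with the CLI (all names and the region are placeholders):

az functionapp create --resource-group myResourceGroup --name myMaintenanceFuncs \
  --storage-account mystorageacct --consumption-plan-location westus2 \
  --runtime dotnet --functions-version 4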
You are the administrator of the Nutex Corporation. You have created an Azure function in Visual Studio and have uploaded the function to Azure. You want to use the recommended method to monitor the execution of your function app.
Which Azure resource do you have to create after publishing the function app with Visual Studio?
- System Center Operations Manager
- Azure Monitor
- Azure Service Bus
- Application Insights
-Application Insights
You would create the Application Insights resource because the recommended way to monitor the execution of your functions is to integrate your function app with Azure Application Insights. That integration normally happens automatically, but when you publish the function app from Visual Studio the integration is not complete, so you need to enable Application Insights manually after publishing.
You would not choose Azure Monitor because this is not the recommended way to monitor the execution of function apps.
You would not choose System Center Operations Manager because it is not used primarily for Azure function apps. Instead, it is an overall monitoring solution.
You would not choose Azure Service Bus because this is a messaging service and is not usable for application monitoring.
Lana has been asked to deploy a complex solution in Azure involving multiple VMs running various custom service applications. She has been asked to do this at least three times because Test, Development, and Production environments are required. The Test and Development solutions will need to be able to be destroyed and recreated regularly, incorporating new data from production each time. The Nutex system administration team is already using Ansible internally to accomplish deployments.
What will Lana need to do to get things started in Azure?
- On each Ansible managed node, Lana will need to install Azure Dependencies using pip.
- Using the Azure Cloud Shell, Lana needs to install the Ansible modules.
- On the Ansible control machine, Lana will need to install Azure Resource Manager modules using pip.
- Working in a local Windows PowerShell, Lana will need to install the Ansible modules.
-On the Ansible control machine, Lana will need to install Azure Resource Manager modules using pip.
Lana needs to install the Azure Resource Manager modules on the Ansible control machine using pip. The control machine needs these modules to communicate with Azure, and pip makes managing the required Python modules easier.
She would not use the Azure Cloud Shell to install the Ansible Modules. This is not necessary because the Ansible Modules are already installed in the Azure Cloud Shell.
She would not use a local Windows PowerShell session to install the Ansible modules. This is not currently possible because the Ansible control machine cannot run on a Windows PC and therefore cannot be managed from Windows PowerShell.
She would not install Azure dependencies on each Ansible managed node using pip. The managed nodes do not need any Azure dependencies installed; that is one of Ansible's biggest selling points. They only require a Python install and SSH access.
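As a sketch of that control-machine setup (this assumes an Ansible 2.x control machine on Linux; newer Ansible releases moved the Azure modules into the azure.azcollection collection instead):

# Install Ansible together with the Azure SDK dependencies its Azure modules need.
pip install "ansible[azure]"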
You have a Kubernetes cluster in AKS and you deployed an app called MyApp. You increase the number of nodes from one to three in the Kubernetes cluster by using the following command:
______ --resource-group=myResourceGroup --name=myAKSCluster --node-count 3
The output of the command is as follows:
"agentPoolProfiles": [
  {
    "count": 3,
    "dnsPrefix": null,
    "fqdn": null,
    "name": "myAKSCluster",
    "osDiskSizeGb": null,
    "osType": "Linux",
    "ports": null,
    "storageProfile": "ManagedDisks",
    "vmSize": "Standard_D2_v2",
    "vnetSubnetId": null
  }
]
Fill in the missing part of the command.
Acceptable answer(s) for field 1:
az aks scale
You would type the az aks scale command. This command is used to scale the node pool in a Kubernetes cluster.
The --name parameter specifies the name of the cluster, --resource-group specifies the name of the resource group, and --node-count specifies the number of nodes in the node pool.
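Assembled, the full command is:

az aks scale --resource-group=myResourceGroup --name=myAKSCluster --node-count 3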
You are the administrator of the Nutex Corporation. You want to build images based on Linux and Windows for your Azure solutions.
Which Azure services can you use? (Choose all that apply.)
- ImageX
- Azure Kubernetes Service
- Azure Pipelines
- Azure Container Registry tasks
-Azure Pipelines
-Azure Container Registry tasks
You can use Azure Container Registry tasks and Azure Pipelines.
Azure Container Registry tasks allow you to build Docker container images on demand in the cloud.
Azure Pipelines allow you to implement a pipeline for building, testing, and deploying an app. The Azure Pipelines service allows you to build images for any repository containing a dockerfile.
You would not choose Azure Kubernetes Service, because this service orchestrates and manages containers for your solutions; it does not build images.
You would not choose the ImageX utility, because this is not an Azure service and cannot be used to create container images. ImageX allows you to capture an image of a hard drive in a Windows Imaging Format (WIM) file.
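For example, an on-demand (quick task) build of the Dockerfile in the current directory with ACR Tasks (registry and image names are placeholders):

az acr build --registry myContainerRegistry --image myapp:v1 .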
You are the administrator of the Nutex Corporation. You want to deploy some virtual machines through an ARM template. You need a virtual network named VNET1 with a subnet named Subnet1, which has to be defined as a child resource. For that you have to define the corresponding JSON template.
Choose one of the following possibilities to complete the following ARM template (SEE IMAGE).
- dependsOn
- parametersLink
- location
- originHostHeader
-dependsOn
You would use the dependsOn element because the child resource (the Subnet1 subnet) is marked as dependent on its parent, the VNET1 virtual network resource. The parent resource must exist before the child resource can be deployed.
You would not use the originHostHeader element, because it does not declare a dependency between resources and has no effect on deployment ordering.
You would not use the parametersLink element, because you use this element to link an external parameter file.
You would not use the location element, because this element defines the geographical location.
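The referenced template is in the image, but a sketch of a subnet declared as a child resource type with a dependsOn on its parent VNet could look like this (the API version and address prefix are assumptions):

{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2020-06-01",
  "name": "VNET1/Subnet1",
  "dependsOn": [
    "[resourceId('Microsoft.Network/virtualNetworks', 'VNET1')]"
  ],
  "properties": {
    "addressPrefix": "10.0.0.0/24"
  }
}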
You are the administrator of the Nutex Corporation. You want to create a new Azure Windows VM named VMNutex in a resource group named RG1 using Visual Studio and C#.
The virtual machine needs to be a member of an availability set and needs to be accessible through the network.
What steps should you perform?
- Create a Visual Studio project.
- Type Install-Package Microsoft.Azure.Management.Fluent in the Package Manager Console.
- Create the azureauth.properties file.
- Create the management client. Add the following statements to the top of Program.cs:

using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

To complete the management client creation, add the following code to the Main method:

var credentials = SdkContext.AzureCredentialsFactory
    .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));
var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();

- Create the resource group:

var groupName = "RG1";
var vmName = "VMNutex";
var location = Region.USWest;
Console.WriteLine("Creating resource group...");
var resourceGroup = azure.ResourceGroups.Define(groupName).WithRegion(location).Create();

- Create an availability set:

Console.WriteLine("Creating availability set...");
var availabilitySet = azure.AvailabilitySets.Define("myAVSet").WithRegion(location)
    .WithExistingResourceGroup(groupName).WithSku(AvailabilitySetSkuTypes.Managed).Create();

- Add the code to create the public IP address, the virtual network, and the network interface.
- Create the virtual machine:

azure.VirtualMachines.Define(vmName).WithRegion(location)
    .WithExistingResourceGroup(groupName).WithExistingPrimaryNetworkInterface(networkInterface)
    .WithLatestWindowsImage("MicrosoftWindowsServer", "WindowsServer", "2012-R2-Datacenter")
    .WithAdminUsername("AtlFalcon").WithAdminPassword("Ih8DaN0S8ntZ")
    .WithComputerName(vmName).WithExistingAvailabilitySet(availabilitySet)
    .WithSize(VirtualMachineSizeTypes.StandardDS1).Create();

- Run the application.
First, you need to create a Visual Studio project. You will then install the NuGet package so that you can add the additional libraries that you need in Visual Studio. You would choose Tools > NuGet Package Manager. From the Package Manager Console, you would type Install-Package Microsoft.Azure.Management.Fluent.
You would then create the azureauth.properties file. This file gives the application access to an Azure AD service principal through the authorization properties it contains.
You would then create the management client. This can be done by opening the Program.cs file of the project and adding the following statements to the top of the file:
using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;
To complete the management client creation, you would add the following code to the Main method:
var credentials = SdkContext.AzureCredentialsFactory
    .FromFile(Environment.GetEnvironmentVariable("AZURE_AUTH_LOCATION"));
var azure = Azure
.Configure()
.WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
.Authenticate(credentials)
.WithDefaultSubscription();
Then you need to create the resource group since all resources must be contained in the resource group. You can add the following code to the Main method to create the resource group:
var groupName = "RG1";
var vmName = "VMNutex";
var location = Region.USWest;
Console.WriteLine("Creating resource group...");
var resourceGroup = azure.ResourceGroups.Define(groupName).WithRegion(location).Create();
You will then need to create an availability set, because an availability set makes it easier to maintain the virtual machines used by your application. You can create the availability set by adding the following code to the Main method:
Console.WriteLine("Creating availability set...");
var availabilitySet = azure.AvailabilitySets.Define("myAVSet").WithRegion(location)
    .WithExistingResourceGroup(groupName).WithSku(AvailabilitySetSkuTypes.Managed).Create();
Then you need to add the code to create the public IP address, the virtual network, and the network interface. A public IP address is needed to communicate with the virtual machine over the Internet. A virtual machine must be in a subnet of the virtual network and must have a network interface to communicate on that virtual network.
You will then create the virtual machine. You can create the virtual machine by adding the following code to the Main method:
azure.VirtualMachines.Define(vmName).WithRegion(location)
    .WithExistingResourceGroup(groupName).WithExistingPrimaryNetworkInterface(networkInterface)
    .WithLatestWindowsImage("MicrosoftWindowsServer", "WindowsServer", "2012-R2-Datacenter")
    .WithAdminUsername("AtlFalcon").WithAdminPassword("Ih8DaN0S8ntZ")
    .WithComputerName(vmName).WithExistingAvailabilitySet(availabilitySet)
    .WithSize(VirtualMachineSizeTypes.StandardDS1).Create();
Finally, you run the application in Visual Studio to execute the code and create the virtual machine.
You are the administrator of the Nutex Corporation. You have created different Azure functions. You have to decide which kind of input and output binding you have to choose for which type of function. The trigger causes the function to run. The input and output bindings to the function connect another resource to the function.
Scenario 2:
A scheduled job reads Blob Storage contents and creates a new Cosmos DB document.
What are the relevant Trigger, Input binding and Output binding to use in this scenario?
List of bindings/triggers:
- Queue
- Timer
- Event Grid
- HTTP
- None
- Blob Storage
- Cosmos DB
- SendGrid
- Microsoft Graph
Trigger: Timer
Input Binding: Blob Storage
Output Binding: Cosmos DB
Because a scheduled job reads Blob Storage contents and creates a new document, your scheduled job is time-based, so you need a Timer trigger. To read from Blob Storage, you need the Blob Storage input binding, and to create a new Cosmos DB document you need the Cosmos DB output binding.
The function trigger is not HTTP because, in this scenario, no HTTP request has been received. The function trigger is not Event Grid because this function does not have to respond to an event sent to an event grid topic. The function trigger is not Queue because this function is not based on another queue.
The function cannot have an input binding with None because it must be based on Blob Storage content. The function cannot use Cosmos DB as the input binding because it must read blob storage content and not content from Cosmos DB.
The function cannot have an output binding with Queue because it must create a new Cosmos DB document. The function cannot have an output binding with SendGrid because it does not send an email. The function cannot have an output binding with Microsoft Graph because in this scenario you do not want an Excel spreadsheet, OneDrive, or Outlook as output.
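A sketch of such a function in C# (in-process model with the v3 Cosmos DB extension; the schedule, blob path, database, container, and connection setting names are all assumptions):

using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScheduledBlobToCosmos
{
    [FunctionName("ScheduledBlobToCosmos")]
    public static void Run(
        // Timer trigger: runs every day at 02:00.
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer,
        // Blob Storage input binding: reads the blob contents as a string.
        [Blob("reports/latest.json", FileAccess.Read)] string blobContents,
        // Cosmos DB output binding: creates a new document.
        [CosmosDB("OrdersDb", "Documents", ConnectionStringSetting = "CosmosDbConnection")] out dynamic document,
        ILogger log)
    {
        document = new { id = Guid.NewGuid().ToString(), payload = blobContents };
        log.LogInformation("Created a new Cosmos DB document from the blob contents.");
    }
}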
You are the administrator of the Nutex Corporation. You have created an Azure function app with Visual Studio. You want to upload the required settings to your function app in Azure. For that, you use the Manage Application Settings link. However, the Application Settings dialog is not working as you expected.
What is a possible solution for this?
- manually create the host.json file in the project root.
- manually create the local.settings.json file in the project root.
- manually create an Azure storage account.
- install the Azure storage emulator.
-manually create the local.settings.json file in the project root.
You would manually create the local.settings.json file in the project root because by default the local.settings.json file is not checked into the source control. When you clone a local functions project from source control, the project does not have a local.settings.json file. In this case you need to manually create the local.settings.json file in the project root so that the Application Settings dialog will work as expected.
You would not manually create the host.json file in the project root because this file lets you configure the functions host. The settings apply both when running locally and in Azure.
You would not manually create an Azure storage account because, although Azure Functions require a storage account, it is created automatically, so you do not have to create it manually.
You would not install the Azure storage emulator because with that you cannot upload the required settings.
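For reference, a minimal local.settings.json has this shape (the values shown are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}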