az-104 dumps topic 3, 1-? Flashcards

(82 cards)

1
Q

You have an Azure subscription named Subscription1 that contains the storage accounts shown in the following table:
Name | Account kind | Azure service that contains data
storage1 | Storage | File
storage2 | StorageV2 (general purpose v2) | File, Table
storage3 | StorageV2 (general purpose v2) | Queue
storage4 | BlobStorage | Blob
You plan to use the Azure Import/Export service to export data from Subscription1.
You need to identify which storage account can be used to export the data.
What should you identify?
A. storage1
B. storage2
C. storage3
D. storage4

A

D. storage4

The Azure Import/Export service supports the following types of storage accounts:
✑ Standard General Purpose v2 storage accounts (recommended for most scenarios)
✑ Blob Storage accounts
✑ General Purpose v1 storage accounts (both Classic and Azure Resource Manager deployments)
The Azure Import/Export service supports the following storage types:
✑ Import supports Azure Blob storage and Azure File storage
✑ Export supports Azure Blob storage only

2
Q

You have Azure Storage accounts as shown in the following exhibit.

Name | Type | Kind | Resource group | Location | Subscription | Access tier | Replication
storageaccount1 | Storage account | Storage | ContosoRG1 | East US | Subscription1 | - | Read-access ge…
storageaccount2 | Storage account | StorageV2 | ContosoRG1 | Central US | Subscription1 | Hot | Geo-redundant…
storageaccount3 | Storage account | BlobStorage | ContosoRG1 | East US | Subscription1 | Hot | Locally-redundant…
Drop-downs:
You can use [answer choice] for Azure Table Storage.
storageaccount1 only
storageaccount2 only
storageaccount3 only
storageaccount1 and storageaccount2 only
storageaccount2 and storageaccount3 only
You can use [answer choice] for Azure Blob storage.
storageaccount3 only
storageaccount2 and storageaccount3 only
storageaccount1 and storageaccount3 only
all the storage accounts

A

You can use [storageaccount1 and storageaccount2 only] for Azure Table Storage.

You can use [all the storage accounts] for Azure Blob storage.

3
Q

You have an Azure subscription that includes data in the following locations:
Name Type
container1 Blob container
share1 Azure files share
DB1 SQL database
Table1 Azure Table
You plan to export data by using an Azure Import/Export job named Export1.
You need to identify the data that can be exported by using Export1.
Which data should you identify?
A. DB1
B. container1
C. share1
D. Table1

A

B. container1

Blobs are the only type of storage that can be exported.
1. Blob storage supports both import and export.
2. File storage supports import only; export is not supported. See the table of supported storage types:
https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-requirements#supported-storage-types

4
Q

You have an Azure Storage account named storage1.
You have an Azure App Service app named App1 and an app named App2 that runs in an Azure container instance. Each app uses a managed identity.
You need to ensure that App1 and App2 can read blobs from storage1. The solution must meet the following requirements:
✑ Minimize the number of secrets used.
✑ Ensure that App2 can only read from storage1 for the next 30 days.
What should you configure in storage1 for each app?
App1:
App2:
Access keys
Advanced security
Access control (IAM)
Shared access signatures (SAS)

A

App1:
Access Control (IAM)
App2:
Shared access signatures (SAS)

  1. Since App1 uses a managed identity, App1 can access the storage account via Access control (IAM). Because the requirement is to minimize the number of secrets used, access keys are not ideal. https://learn.microsoft.com/en-us/azure/app-service/scenario-secure-app-access-storage?tabs=azure-portal#grant-access-to-the-storage-account
  2. App2 needs temporary access (30 days), so we use a SAS with an expiry time.
5
Q

You need to create an Azure Storage account that meets the following requirements:
✑ Minimizes costs
✑ Supports hot, cool, and archive blob tiers
✑ Provides fault tolerance if a disaster affects the Azure region where the account resides
How should you complete the command?

az storage account create -g RG1 -n storageaccount1
–kind [ … ]
File Storage
Storage
StorageV2
–sku [ … ]
Standard_GRS
Standard_LRS
Standard_RAGRS
Premium_LRS

A

–kind [ … ]
StorageV2
–sku [ … ]
Standard_GRS

“Note: General-purpose v1 accounts don’t have access to Hot, Cool, or Archive tiered storage. For access to tiered storage, upgrade to a general-purpose v2 account.”
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
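Putting the selected values together, the completed command looks like the sketch below (the resource group RG1 and account name storageaccount1 come from the question; this requires an authenticated Azure CLI session):

```shell
# StorageV2 supports hot/cool/archive tiers; Standard_GRS survives a regional disaster at minimal cost.
az storage account create -g RG1 -n storageaccount1 \
  --kind StorageV2 \
  --sku Standard_GRS
```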

6
Q

You have an Azure subscription that contains the resources in the following table.
Name Type
RG1 Resource group
store1 Azure Storage account
Sync1 Azure File Sync

Store1 contains a file share named data. Data contains 5,000 files.
You need to synchronize the files in the file share named data to an on-premises server named Server1.
Which three actions should you perform?

A. Create a container instance
B. Register Server1
C. Install the Azure File Sync agent on Server1
D. Download an automation script
E. Create a sync group

A

B. Register Server1
C. Install the Azure File Sync agent on Server1
E. Create a sync group

Step 1 (C): Install the Azure File Sync agent on Server1
The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure file share
Step 2 (B): Register Server1.
Register Windows Server with Storage Sync Service
Registering your Windows Server with a Storage Sync Service establishes a trust relationship between your server (or cluster) and the Storage Sync Service.
Step 3 (E): Create a sync group and a cloud endpoint.
A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints. A server endpoint represents a path on a registered server.
Reference:
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

7
Q

You have an Azure subscription that contains the resources shown in the following table.
Name Type Resource group
VNET1 Virtual network RG1
VNET2 Virtual network RG2
VM1 Virtual machine RG2

The status of VM1 is Running.
You assign an Azure policy as shown in the exhibit. (Click the Exhibit tab.)

Home > Policy - Assignments > Assign Policy
Assign Policy
Scope
Scope : Azure Pass/RG2
Exclusions : -
Basics
Policy definition : Not allowed resource types
Assignment name : Not allowed resource types
Description : -
Assigned by : First User
PARAMETERS
* Not allowed resource types
3 selected

You assign the policy by using the following parameters:
Microsoft.ClassicNetwork/virtualNetworks
Microsoft.Network/virtualNetworks
Microsoft.Compute/virtualMachines

Yes/No:
An administrator can move VNET1 to RG2
The state of VM1 changed to deallocated
An administrator can modify the address space of VNET2

A

An administrator can move VNET1 to RG2 - No
The state of VM1 changed to deallocated - No
An administrator can modify the address space of VNET2 - No

The policy will mark VM1 as non-compliant, but it will not deallocate the VM. Moving VNET1 into RG2 and modifying VNET2 are both blocked, because the policy denies write operations on the Microsoft.Network/virtualNetworks resource type within the RG2 scope.

8
Q

You have an Azure subscription that contains a storage account.
You have an on-premises server named Server1 that runs Windows Server 2016. Server1 has 2 TB of data.
You need to transfer the data to the storage account by using the Azure Import/Export service.
In which order should you perform the actions?
From the Azure portal, update the import job
From the Azure portal, create an import job
Attach an external disk to Server1 and then run waimportexport.exe
Detach the external disks from Server1 and ship the disks to an Azure data center

A

Step 1: Prepare the drives (Attach an external disk to Server1 and then run waimportexport.exe)
Step 2: Create an import job (From the Azure portal, create an import job)
Step 3: Ship the drives to the Azure datacenter (Detach the external disks from Server1 and ship the disks to an Azure data center)
Step 4: Update the job with tracking information (From the Azure portal, update the import job)

Reference:
https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-service#inside-an-import-job

https://learn.microsoft.com/en-us/azure/import-export/storage-import-export-data-to-files?tabs=azure-portal-preview
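The drive-preparation step (Step 1) uses the WAImportExport tool. A hedged sketch, run from the tool folder on Server1; the journal file name, session id, log directory, key placeholder, and CSV file names are illustrative, so check the linked pages for the exact switches in your tool version:

```shell
WAImportExport.exe PrepImport /j:Journal1.jrn /id:session#1 /logdir:C:\logs /sk:<storage-account-key> /InitialDriveSet:driveset.csv /DataSet:dataset.csv
```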

9
Q

You have an Azure subscription that includes the following Azure file shares:
Name In storage account Location
share1 storage1 West US
share2 storage1 West US

You have the following on-premises servers:
Name Folders
Server1 D:\Folder1, E:\Folder2
Server2 D:\Data

You create a Storage Sync Service named Sync1 and an Azure File Sync group named Group1. Group1 uses share1 as a cloud endpoint.
You register Server1 and Server2 in Sync1. You add D:\Folder1 on Server1 as a server endpoint of Group1.

Yes/No:
share2 can be added as a cloud endpoint for Group1
E:\Folder2 on Server1 can be added as a server endpoint for Group1
D:\Data on Server2 can be added as a server endpoint for Group1

A

share2 can be added as a cloud endpoint for Group1 - No
A sync group contains one cloud endpoint, or Azure file share, and at least one server endpoint.

E:\Folder2 on Server1 can be added as a server endpoint for Group1 - No
Azure File Sync does not support more than one server endpoint from the same server in the same Sync Group.

D:\Data on Server2 can be added as a server endpoint for Group1 - Yes
Multiple server endpoints can exist on the same volume if their namespaces are not overlapping (for example, F:\sync1 and F:\sync2) and each endpoint is syncing to a unique sync group.

Reference:

https://docs.microsoft.com/en-us/answers/questions/110822/azure-file-sync-multiple-sync-directories-for-same.html
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

10
Q

You have an Azure subscription named Subscription1.
You create an Azure Storage account named contosostorage, and then you create a file share named data.
Which UNC path should you include in a script that references files from the data file share?
- blob
- blob.core.windows.net
- contosostorage
- data
- file
- file.core.windows.net
- portal.azure.com
- subscription1
Answer Area
\\ [ … ] . [ … ] \ [ … ]

A

\\[storage account name].file.core.windows.net\[file share name]

\\contosostorage.file.core.windows.net\data

Reference:

https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
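The UNC path is built from the account name and share name. A small sketch; the `net use` line in the comment shows how the same path would be mounted on Windows per the referenced article (the storage account key is a placeholder):

```shell
# Build the UNC path for an Azure file share from its account and share names.
account="contosostorage"
share="data"
unc="\\\\${account}.file.core.windows.net\\${share}"
printf '%s\n' "$unc"   # → \\contosostorage.file.core.windows.net\data

# On Windows you would then mount it, for example:
#   net use Z: \\contosostorage.file.core.windows.net\data /u:AZURE\contosostorage <storage-account-key>
```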

11
Q

You have an Azure subscription that contains an Azure Storage account.
You plan to copy an on-premises virtual machine image to a container named vmimages.
You need to create the container for the planned image.
Which command should you run?

azcopy
- make
- sync
- copy
‘https://mystorageaccount.[ … ].core.windows.net/vmimages’
- blob
- dfs
- queue
- table
- images
- file

A

azcopy make
‘https://mystorageaccount.[blob].core.windows.net/vmimages’

Similar to OS Images, a VM Image is a collection of metadata and pointers to a set of VHDs (one VHD per disk) stored as page blobs in Azure Storage.

Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-make
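Assembled, the command is shown below, followed by a hedged upload sketch. The local VHD path and the SAS token are placeholders of my own; `azcopy make` itself also needs authorization (for example `azcopy login` or a SAS on the URL):

```shell
# Create the target container for the VM image.
azcopy make 'https://mystorageaccount.blob.core.windows.net/vmimages'

# Then copy the local VHD into the new container; VHDs are stored as page blobs.
azcopy copy 'C:\vhds\image1.vhd' 'https://mystorageaccount.blob.core.windows.net/vmimages?<SAS>' --blob-type PageBlob
```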

12
Q

You have an Azure File sync group that has the endpoints shown in the following table.

Name Type
Endpoint1 Cloud endpoint
Endpoint2 Server endpoint
Endpoint3 Server endpoint

Cloud tiering is enabled for Endpoint3.
You add a file named File1 to Endpoint1 and a file named File2 to Endpoint2.
On which endpoints will File1 and File2 be available within 24 hours of adding the files?

File 1:
Endpoint1 only
Endpoint3 only
Endpoint2 and Endpoint3 only
Endpoint1, Endpoint2, and Endpoint3
File2:
Endpoint2 only
Endpoint3 only
Endpoint2 and Endpoint3 only
Endpoint1, Endpoint2, and Endpoint3

A

File1: Endpoint1 only
Endpoint1 is the cloud endpoint, and cloud endpoints are only scanned by the change detection job, which runs once every 24 hours.

File2: Endpoint1, Endpoint2, and Endpoint3
On the on-premises servers, changes are detected and synced automatically shortly after the file is added.

Note: The exam changed this question from "within 24 hours" to "after 24 hours". In that case, the answer is:
File1: Endpoint1, Endpoint2, and Endpoint3
File2: Endpoint1, Endpoint2, and Endpoint3

Reference:

https://docs.microsoft.com/en-us/learn/modules/extend-share-capacity-with-azure-file-sync/2-what-azure-file-sync

13
Q

You have several Azure virtual machines on a virtual network named VNet1.
You configure an Azure Storage account as shown in the following exhibit.
Allow access from
[ ] All networks [X] Selected networks
Configure network security for your storage accounts.
Virtual networks
Secure your storage account with virtual networks.
VIRTUAL NET… SUBNET ADDRESS RA… ENDPOINT ST… RESOURCE G… SUBSCRIPTION
VNet1 1 10.2.0.0/16 - DemoRG Production subscrip…
Prod 10.2.0.0/24 Enabled DemoRG Production subscrip…
Firewall
Add IP ranges to allow access from the Internet or your on- premises networks.
ADDRESS RANGE : IP address or CIDR …
Exceptions
[ ] Allow trusted Microsoft services to access this storage account
[ ] Allow read access to storage logging from any network
[ ] Allow read access to storage metrics from any network
Drop-downs:
The virtual machines on the 10.2.9.0/24 subnet will have network connectivity to the file shares in the storage account [answer choice].
Azure Backup will be able to back up the unmanaged hard disks of the virtual machines in the storage account [answer choice].
- always
- during a backup
- never

A
Box 1: Never
VNet1's address space is 10.2.0.0/16, but only one subnet (Prod, 10.2.0.0/24) is allowed through the storage firewall. VMs on the 10.2.9.0/24 subnet (10.2.9.0 to 10.2.9.255) fall outside the allowed subnet range (10.2.0.0 to 10.2.0.255), so they will never have connectivity to the file shares. Access is granted per subnet, not per virtual network.

Box 2: Never
The "Allow trusted Microsoft services to access this storage account" exception is not selected. After you configure firewall and virtual network settings for your storage account, you must select that exception for the Azure Backup service to access the network-restricted storage account.

14
Q

You have a sync group named Sync1 that has a cloud endpoint. The cloud endpoint includes a file named File1.txt.
Your on-premises network contains servers that run Windows Server 2016. The servers are configured as shown in the following table.

Name Share Share contents
Server1 Share1 File1.txt, File2.txt
Server2 Share2 File2.txt, File3.txt

You add Share1 as an endpoint for Sync1. One hour later, you add Share2 as an endpoint for Sync1.
Yes/No

  • On the cloud endpoint, File1.txt is overwritten by File1.txt from Share1.
  • On Server1, File1.txt is overwritten by File1.txt from the cloud endpoint.
  • File1.txt from Share1 replicates to Share2.
A

On the cloud endpoint, File1.txt is overwritten by File1.txt from Share1 - No
Files are never overwritten during sync. If a file with the same name already exists, the incoming copy gets a new name on the endpoint (for example, File1(1).txt).
On Server1, File1.txt is overwritten by File1.txt from the cloud endpoint - No
File1.txt from Share1 replicates to Share2 - Yes

15
Q

You have an Azure subscription that contains the storage accounts shown in the following table.
Name, Kind, Performance, Replication, Access tier
- storage1, Storage (general purpose v1), Premium, Geo-redundant storage (GRS), None
- storage2, StorageV2 (general purpose v2), Standard, Locally-redundant storage (LRS), Cool
- storage3, StorageV2 (general purpose v2), Premium, Read-access geo-redundant storage (RA-GRS), Hot
- storage4, BlobStorage, Standard, Locally-redundant storage (LRS), Hot
You need to identify which storage account can be converted to zone-redundant storage (ZRS) replication by requesting a live migration from Azure support.
What should you identify?
A. storage1
B. storage2
C. storage3
D. storage4

A

B. storage2

The key to this question is "live migration":
- Live migration to ZRS is supported only from LRS or GRS.
- Live migration is supported only for general-purpose v2 storage accounts.

16
Q

You have an Azure subscription that contains a storage account named account1.
You plan to upload the disk files of a virtual machine to account1 from your on-premises network. The on-premises network uses a public IP address space of 131.107.1.0/24.
You plan to use the disk files to provision an Azure virtual machine named VM1. VM1 will be attached to a virtual network named VNet1. VNet1 uses an IP address space of 192.168.0.0/24.
You need to configure account1 to meet the following requirements:
✑ Ensure that you can upload the disk files to account1.
✑ Ensure that you can attach the disks to VM1.
✑ Prevent all other access to account1.
Which two actions should you perform?
A. From the Networking blade of account1, select Selected networks.
B. From the Networking blade of account1, select Allow trusted Microsoft services to access this storage account.
C. From the Networking blade of account1, add the 131.107.1.0/24 IP address range.
D. From the Networking blade of account1, add VNet1.
E. From the Service endpoints blade of VNet1, add a service endpoint.

A

A and C are the only combination that meets all three requirements.

For the other options:
- B: There is no need to involve trusted Microsoft services here.
- D: That only works if there is a site-to-site VPN, which is not stated in the problem.
- E: A service endpoint has nothing to do with the requirements.
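The two selected actions map to the Azure CLI roughly as follows. This is a sketch: the account name account1 and the IP range come from the question, while the resource group name RG1 is my own placeholder:

```shell
# A. Switch the account firewall to "Selected networks" (deny by default).
az storage account update -g RG1 -n account1 --default-action Deny

# C. Allow the on-premises public IP range through the firewall.
az storage account network-rule add -g RG1 --account-name account1 --ip-address 131.107.1.0/24
```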

17
Q

You have an on-premises file server named Server1 that runs Windows Server 2016.
You have an Azure subscription that contains an Azure file share.
You deploy an Azure File Sync Storage Sync Service, and you create a sync group.
You need to synchronize files from Server1 to Azure.
Which three actions should you perform in sequence?
- Install the Azure File Sync agent on Server1
- Create an Azure on-premises data gateway
- Create a Recovery Services vault
- Register Server1
- Add a server endpoint
- Install the DFS Replication server role
on Server1

A

Install the Azure File Sync agent on Server1
Register Server1
Add a server endpoint

Step 1: Install the Azure File Sync agent on Server1
The Azure File Sync agent is a downloadable package that enables Windows Server to be synced with an Azure file share

Step 2: Register Server1
Register Windows Server with Storage Sync Service
Registering your Windows Server with a Storage Sync Service establishes a trust relationship between your server (or cluster) and the Storage Sync Service.

Step 3: Add a server endpoint
Create a sync group and a cloud endpoint.
A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints. A server endpoint represents a path on a registered server.

Reference:
https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-deployment-guide

18
Q

You plan to create an Azure Storage account in the Azure region of East US 2.
You need to create a storage account that meets the following requirements:
✑ Replicates synchronously.
✑ Remains available if a single data center in the region fails.
How should you configure the storage account?

Replication:
Geo-redundant storage (GRS)
Locally-redundant storage (LRS)
Read-access geo-redundant storage (RA GRS)
Zone-redundant storage (ZRS)
Account type:
Blob storage
Storage (general purpose v1)
StorageV2 (general purpose v2)

A

Replication:
Zone-redundant storage (ZRS)
Account type:
StorageV2 (general purpose v2)

Box 1: Zone-redundant storage (ZRS)
Zone-redundant storage (ZRS) replicates your data synchronously across three storage clusters in a single region.
ZRS protects against the failure of a single data center (availability zone) within the region; GRS protects against the failure of an entire region.
LRS would not remain available if a data center in the region fails.
GRS and RA-GRS use asynchronous replication, so they do not meet the "replicates synchronously" requirement.

Box 2: StorageV2 (general purpose v2)
ZRS is only supported for general-purpose v2 accounts.

Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-zrs
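Created with the Azure CLI, the selected combination would look like the sketch below (the resource group and account names are illustrative placeholders; East US 2 comes from the question):

```shell
# ZRS replicates synchronously across zones and survives a single data-center failure.
az storage account create -g RG1 -n mystorageacct --location eastus2 \
  --kind StorageV2 \
  --sku Standard_ZRS
```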

19
Q

You plan to use the Azure Import/Export service to copy files to a storage account.
Which two files should you create before you prepare the drives for the import job?
A. an XML manifest file
B. a dataset CSV file
C. a JSON configuration file
D. a PowerShell PS1 file
E. a driveset CSV file

A

B. a dataset CSV file
E. a driveset CSV file

Modify the dataset.csv file in the root folder where the tool resides. Depending on whether you want to import a file or folder or both, add entries in the dataset.csv file.
Modify the driveset.csv file in the same root folder where the tool resides.

Reference:
https://docs.microsoft.com/en-us/azure/import-export/storage-import-export-service
https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-data-to-files
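A minimal sketch of the two files, based on the schemas in the linked documentation. The source path, destination prefix, and drive letter are illustrative, and the exact column names can vary by tool version:

```shell
# dataset.csv: what to import and where it lands in the storage account.
cat > dataset.csv <<'EOF'
BasePath,DstBlobPathOrPrefix,BlobType,Disposition,MetadataFile,PropertiesFile
"D:\Data\","data/",BlockBlob,rename,"None",None
EOF

# driveset.csv: which local drives to prepare for the import job.
cat > driveset.csv <<'EOF'
DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
X,Format,SilentMode,Encrypt,
EOF
```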

20
Q

You have a Recovery Service vault that you use to test backups. The test backups contain two protected virtual machines.
You need to delete the Recovery Services vault.
What should you do first?

A. From the Recovery Service vault, delete the backup data.
B. Modify the disaster recovery properties of each virtual machine.
C. Modify the locks of each virtual machine.
D. From the Recovery Service vault, stop the backup of each backup item.

A

D. From the Recovery Service vault, stop the backup of each backup item.
You can’t delete a Recovery Services vault if it is registered to a server and holds backup data. If you try to delete a vault, but can’t, the vault is still configured to receive backup data.
Remove vault dependencies and delete vault
In the vault dashboard menu, scroll down to the Protected Items section, and click Backup Items. In this menu, you can stop and delete Azure File Servers, SQL Servers in Azure VM, and Azure virtual machines.

Reference:
https://docs.microsoft.com/en-us/azure/backup/backup-azure-delete-vault#delete-protected-items-in-the-cloud

21
Q

You have an Azure subscription named Subscription1 that contains the resources shown in the following table.

Name Type Location Resource group
RG1, Resource group, West US, Not applicable
RG2, Resource group, West US, Not applicable
Vault1, Recovery Services vault, Central US, RG1
Vault2, Recovery Services vault, West US, RG2
VM1, Virtual machine, Central US, RG2
storage1, Storage account, West US, RG1
SQL1, Azure SQL database, East US, RG2

In storage1, you create a blob container named blob1 and a file share named share1.
Which resources can be backed up to Vault1 and Vault2?

Can use Vault1 for backups:
- VM1 only
- VM1 and share1 only
- VM1 and SQL1 only
- VM1, storage1, and SQL1 only
- VM1, blob1, share1, and SQL1
Can use Vault2 for backups:
- storage1 only
- share1 only
- VM1 and share1 only
- blob1 and share1 only
- storage1 and SQL1 only

A

Box 1: VM1 only
VM1 is in the same region as Vault1 (Central US). share1 is not in the same region as Vault1, and SQL1 is not in the same region as Vault1. Blobs cannot be backed up to a Recovery Services vault.
Note: To protect VMs, the vault must be in the same region as the VMs.

Box 2: share1 only
storage1 is in the same region as Vault2 (West US), and share1 is in storage1. File shares are a supported backup type.
Note: Only VMs and file shares can be backed up to a Recovery Services vault in this scenario.

Note: A Backup vault supports blob backup, while a Recovery Services vault supports file share backup. Both vault types can be created from Backup center, which is a common source of confusion.

Reference:
https://docs.microsoft.com/bs-cyrl-ba/azure/backup/backup-create-rs-vault
https://docs.microsoft.com/en-us/azure/backup/backup-afs
https://feedback.azure.com/forums/217298-storage/suggestions/37096837-possibility-to-backup-blob-data-in-the-recovery-se

22
Q

You have an Azure subscription named Subscription1.
You have 5 TB of data that you need to transfer to Subscription1.
You plan to use an Azure Import/Export job.
What can you use as the destination of the imported data?

A. a virtual machine
B. an Azure Cosmos DB database
C. Azure File Storage
D. the Azure File Sync Storage Sync Service

A

C. Azure File Storage
(or Azure Blob Storage)
The Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. The service can also be used to transfer data from Azure Blob storage to disk drives and ship them to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files. The maximum size of an Azure file share is 5 TB.

Note: There are several versions of this question in the exam. The question has two correct answers:
1. Azure File Storage or
2. Azure Blob Storage

The question can have other incorrect answer options, including the following:
✑ Azure Data Lake Store
✑ Azure SQL Database
✑ Azure Data Factory

Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

23
Q

You have an Azure subscription.
You create the Azure Storage account shown in the following exhibit.
Replication: Locally-redundant storage (LRS)
Access tier (default): Hot
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

The minimum number of copies of the storage account will be: 1 / 2 / 3 / 4

To reduce the cost of infrequently accessed data in the storage account, you must modify the [ … ] setting:
- Access tier (default)
- Performance
- Account kind
- Replication

A
  • 3
    Locally Redundant Storage (LRS) provides highly durable and available storage within a single location (sub region). We maintain an equivalent of 3 copies
    (replicas) of your data within the primary location as described in our SOSP paper; this ensures that we can recover from common failures (disk, node, rack) without impacting your storage account’s availability and durability.
  • Access tier (default)
    Change the access tier from Hot to Cool.
    Note: Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:
    Hot - Optimized for storing data that is accessed frequently.
    Cool - Optimized for storing data that is infrequently accessed and stored for at least 30 days.
    Archive - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours).
    Reference:
    https://azure.microsoft.com/en-us/blog/data-series-introducing-locally-redundant-storage-for-windows-azure-storage/
    https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
24
Q

You have an Azure Storage account named storage1.
You plan to use AzCopy to copy data to storage1.
You need to identify the storage services in storage1 to which you can copy the data.
Which storage services should you identify?

A. blob, file, table, and queue
B. blob and file only
C. file and table only
D. file only
E. blob, table, and queue only

A

B. blob and file only

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
Incorrect Answers:
A, C, E: AzCopy does not support table and queue storage services.
D: AzCopy supports file storage services, as well as blob storage services.
Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

25
Q

You have an Azure Storage account named storage1 that uses Azure Blob storage and Azure File storage.
You need to use AzCopy to copy data to the blob storage and file storage in storage1.
Which authentication method should you use for each type of storage?
Blob storage:
File storage:
- Azure Active Directory (Azure AD) only
- Shared access signatures (SAS) only
- Access keys and shared access signatures (SAS) only
- Azure Active Directory (Azure AD) and shared access signatures (SAS) only
- Azure Active Directory (Azure AD), access keys, and shared access signatures (SAS)

A

Blob storage: Azure Active Directory (Azure AD) and shared access signatures (SAS) only
File storage: Shared access signatures (SAS) only

You can provide authorization credentials by using Azure Active Directory (Azure AD), or by using a shared access signature (SAS) token.
Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
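In practice the two methods look like the sketch below (the local file name, container, share, and SAS token are placeholders of my own):

```shell
# Blob storage: sign in with Azure AD, then copy without embedding secrets in the URL.
azcopy login
azcopy copy './localfile.txt' 'https://storage1.blob.core.windows.net/container1/localfile.txt'

# File storage: Azure AD is not an option for AzCopy here, so append a SAS token to the share URL.
azcopy copy './localfile.txt' 'https://storage1.file.core.windows.net/share1/localfile.txt?<SAS>'
```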
26
Q

You have an Azure subscription that contains an Azure Storage account.
You plan to create an Azure container instance named container1 that will use a Docker image named Image1. Image1 contains a Microsoft SQL Server instance that requires persistent storage.
You need to configure a storage service for Container1.
What should you use?
A. Azure Files
B. Azure Blob storage
C. Azure Queue storage
D. Azure Table storage

A

A. Azure Files

Azure Files provides persistent volumes for container instances, regardless of the image type or its functionality.
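Mounting an Azure file share into a container instance can be sketched with the Azure CLI as follows. The resource group, share name, mount path, and key are illustrative placeholders; only container1 and Image1 come from the question:

```shell
# Create container1 from Image1 with an Azure file share mounted for persistent storage.
az container create -g RG1 --name container1 --image image1 \
  --azure-file-volume-account-name storage1 \
  --azure-file-volume-account-key '<storage-account-key>' \
  --azure-file-volume-share-name share1 \
  --azure-file-volume-mount-path /var/opt/mssql
```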
27
Q

You have an app named App1 that runs on two Azure virtual machines named VM1 and VM2.
You plan to implement an Azure Availability Set for App1. The solution must ensure that App1 is available during planned maintenance of the hardware hosting VM1 and VM2.
What should you include in the Availability Set?
A. one update domain
B. two fault domains
C. one fault domain
D. two update domains

A

D. two update domains

An update domain is a group of VMs and underlying physical hardware that can be rebooted at the same time. VMs in the same fault domain share common storage as well as a common power source and network switch.
When you create an Availability Set, the hardware in a location is divided into multiple update domains and fault domains. During scheduled maintenance, only one update domain is updated at any given time. Update domains aren't necessarily updated sequentially. So, we need two update domains.
Reference:
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-availability-sets
https://docs.microsoft.com/en-us/azure/virtual-machines/manage-availability
https://docs.microsoft.com/en-us/azure/virtual-machines/maintenance-and-updates
28
You have an Azure subscription that contains an Azure file share. You have an on-premises server named Server1 that runs Windows Server 2016. You plan to set up Azure File Sync between Server1 and the Azure file share. You need to prepare the subscription for the planned Azure File Sync. Which two actions should you perform in the Azure subscription? - Create a Storage Sync Service - Install the Azure File Sync agent - Create a sync group - Run Server Registration First action: ... Second action: ...
First action: Create a Storage Sync Service Second action: Create a sync group The question asks which two actions to perform in the Azure subscription, so only portal-side actions count. Installing the Azure File Sync agent and running Server Registration are performed on Server1, not in the subscription.
29
You have an Azure subscription that contains the file shares shown in the following table. Name Location share1 West US share2 West US share3 East US You have the on-premises file shares shown in the following table. Name Server Path data1 Server1 D:\Folder1 data2 Server2 E:\Folder2 data3 Server3 E:\Folder2 You create an Azure file sync group named Sync1 and perform the following actions: ✑ Add share1 as the cloud endpoint for Sync1. ✑ Add data1 as a server endpoint for Sync1. ✑ Register Server1 and Server2 to Sync1. Yes/No You can add share3 as an additional cloud endpoint for Sync 1. You can add data2 as an additional server endpoint for Sync1. You can add data3 as an additional server endpoint for Sync1.
Box 1: No A sync group must contain one cloud endpoint, which represents an Azure file share and one or more server endpoints. Box 2: Yes Data2 is located on Server2 which is registered to Sync1. But data2 is not added to server endpoint, so we can add data2 as additional server endpoint for Sync1. Box 3: No Data3 is located on Server3 which is not registered to Sync1. We have to register Server3 first. Reference: https://docs.microsoft.com/en-us/azure/storage/file-sync/file-sync-deployment-guide?tabs=azure-portal%2Cproactive-portal#create-a-sync-group-and-a-%20cloud-endpoint
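As a mental model, the two endpoint rules above (exactly one cloud endpoint per sync group, and a server must be registered before its paths can become server endpoints) can be sketched as a toy Python class. This is an illustrative simulation only, not the Azure File Sync SDK:

```python
class SyncGroup:
    """Toy model of Azure File Sync topology rules (illustrative only,
    not the real Azure File Sync SDK)."""

    def __init__(self, cloud_endpoint):
        # A sync group contains exactly one cloud endpoint (an Azure file share).
        self.cloud_endpoint = cloud_endpoint
        self.registered_servers = set()
        self.server_endpoints = []

    def add_cloud_endpoint(self, share):
        # The single cloud endpoint was set at creation; a second is invalid.
        raise ValueError("sync group already has a cloud endpoint")

    def register_server(self, server):
        self.registered_servers.add(server)

    def add_server_endpoint(self, server, path):
        # A server must be registered before its paths can be server endpoints.
        if server not in self.registered_servers:
            raise ValueError(f"{server} is not registered")
        self.server_endpoints.append((server, path))


sync1 = SyncGroup("share1")                           # cloud endpoint: share1
sync1.register_server("Server1")
sync1.register_server("Server2")
sync1.add_server_endpoint("Server1", r"D:\Folder1")   # data1: allowed
sync1.add_server_endpoint("Server2", r"E:\Folder2")   # data2: allowed (Box 2: Yes)
# sync1.add_cloud_endpoint("share3")                  -> ValueError (Box 1: No)
# sync1.add_server_endpoint("Server3", r"E:\Folder2") -> ValueError (Box 3: No)
```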
30
You have an Azure subscription named Subscription1 that contains the resources shown in the following table: Name Type Location Resource group RG1, Resource group, East US, Not applicable RG2, Resource group, West US, Not applicable Vault1, Recovery Services vault, West Europe, RG1 storage1, Storage account, East US, RG2 storage2, Storage account, West US, RG1 storage3, Storage account, West Europe, RG2 Analytics1, Log Analytics workspace, East US, RG1 Analytics2, Log Analytics workspace, West US, RG2 Analytics3, Log Analytics workspace, West Europe, RG1 You plan to configure Azure Backup reports for Vault1. You are configuring the Diagnostics settings for the AzureBackupReports log. Which storage accounts and which Log Analytics workspaces can you use for the Azure Backup reports of Vault1? Storage accounts: storage1 only storage2 only storage3 only storage1, storage2, and storage3 Log Analytics workspaces: Analytics1 only Analytics2 only Analytics3 only Analytics1, Analytics2, and Analytics3
Storage accounts: storage3 only The storage account must be in the same region as the Recovery Services vault (West Europe). Log Analytics workspaces: Analytics1, Analytics2, and Analytics3 Set up one or more Log Analytics workspaces to store your Backup reporting data. The location and subscription where this Log Analytics workspace can be created are independent of the location and subscription where your vaults exist. Reference: https://docs.microsoft.com/en-us/azure/backup/configure-reports#1-create-a-log-analytics-workspace-or-use-an-existing-one
31
You have an Azure subscription that contains the storage accounts shown in the following exhibit. Name Type Kind Resource group Location contoso101, Storage account, Storage V2, RG1, East US contoso102, Storage account, Storage, RG1, East US contoso103, Storage account, BlobStorage, RG1, East US contoso104, Storage account, FileStorage, RG1, East US Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. You can create a premium file share in: contoso101 only contoso104 only contoso101 or contoso104 only contoso101, contoso102, or contoso104 only contoso101, contoso102, contoso103, or contoso104 You can use the Archive access tier in: contoso101 only contoso101 or contoso103 only contoso101, contoso102, and contoso103 only contoso101, contoso102, and contoso104 only contoso101, contoso102, contoso103, and contoso104
Box 1: contoso104 only Premium file shares are hosted in a special-purpose storage account kind, called a FileStorage account. Box 2: contoso101 or contoso103 only Object storage data tiering between hot, cool, and archive is supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts don't support tiering. The archive tier supports only LRS, GRS, and RA-GRS. Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-premium-fileshare?tabs=azure-portal https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
32
You have an Azure subscription named Subscription1. In Subscription1, you create an Azure file share named share1. You create a shared access signature (SAS) named SAS1 as shown in the following exhibit: Allowed services Blob - File + Queue - Table - Allowed resource types Service + Container + Object + Allowed permissions Read + Write + Delete - List + Add - Create - Update - Process - Start and expiry date/time 2018-09-01 2:00:00 PM 2018-09-14 2:00:00 PM Allowed IP addresses 193.77.134.10-193.77.134.50 Allowed protocols + HTTPS only - HTTPS and HTTP Signing key: key1 If on September 2, 2018, you run Microsoft Azure Storage Explorer on a computer that has an IP address of 193.77.134.1, and you use SAS1 to connect to the storage account, you [ ... ]. If on September 10, 2018, you run the net use command on a computer that has an IP address of 193.77.134.50, and you use SAS1 as the password to connect to share1, you [ ... ]. - will be prompted for credentials - will have no access - will have read, write, and list access - will have read-only access
Box 1: will have no access The client IP 193.77.134.1 is outside the allowed range 193.77.134.10-193.77.134.50, so the SAS rejects the request. Box 2: will have no access Mounting an Azure file share with net use uses SMB (Server Message Block), which does not accept a SAS token; only the storage account key is supported as the password. The connection will fail with a wrong username/password error and grant no access. Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1 https://docs.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows https://docs.microsoft.com/en-us/answers/questions/40741/sas-key-for-unc-path.html
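The first box comes down to a simple inclusive range check on the client IP. A minimal sketch using the Python standard library (the range-string format mirrors the SAS "Allowed IP addresses" field; this is an illustrative helper, not an Azure API):

```python
from ipaddress import ip_address

def ip_allowed(client_ip: str, allowed_range: str) -> bool:
    """Inclusive range check mirroring the SAS 'Allowed IP addresses' field,
    e.g. '193.77.134.10-193.77.134.50'. Illustrative helper, not an Azure API."""
    low, high = (ip_address(part) for part in allowed_range.split("-"))
    return low <= ip_address(client_ip) <= high

SAS_RANGE = "193.77.134.10-193.77.134.50"
print(ip_allowed("193.77.134.1", SAS_RANGE))   # False - Storage Explorer client is blocked
print(ip_allowed("193.77.134.50", SAS_RANGE))  # True - in range, but SMB still rejects the SAS
```

Note that passing the IP check is necessary but not sufficient: for Box 2 the client IP is allowed, yet access still fails because SMB does not support SAS tokens.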
33
You have two Azure virtual machines named VM1 and VM2. You have two Recovery Services vaults named RSV1 and RSV2. VM2 is backed up to RSV1. You need to back up VM2 to RSV2. What should you do first? A. From the RSV1 blade, click Backup items and stop the VM2 backup B. From the RSV2 blade, click Backup. From the Backup blade, select the backup for the virtual machine, and then click Backup C. From the VM2 blade, click Disaster recovery, click Replication settings, and then select RSV2 as the Recovery Services vault D. From the RSV1 blade, click Backup Jobs and export the VM2 job
A. From the RSV1 blade, click Backup items and stop the VM2 backup To move a VM to a different Recovery Services vault, you must first disassociate it from the previous vault and delete its backup data; deleting backup data requires stopping the backup first. So: 1. Stop the backup in RSV1 (answer A). 2. Remove the backup data. 3. Disassociate the VM from RSV1. 4. Associate the VM with RSV2.
34
You have a general-purpose v1 Azure Storage account named storage1 that uses locally-redundant storage (LRS). You need to ensure that the data in the storage account is protected if a zone fails. The solution must minimize costs and administrative effort. What should you do first? A. Create a new storage account. B. Configure object replication rules. C. Upgrade the account to general-purpose v2. D. Modify the Replication setting of storage1.
C. Upgrade the account to general-purpose v2. Protecting against a zone failure requires at least zone-redundant storage (ZRS). v1 supports GRS/RA-GRS, which would also survive a zone failure, but the question asks to minimize cost, and the cheapest option that protects against a zone failure is ZRS. ZRS is supported only on general-purpose v2 (and premium file share/block blob) accounts, so the first step is to upgrade the account. Source: https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy#supported-storage-account-types
35
You have an Azure subscription that contains the storage accounts shown in the following table. Name Type Performance storage1, StorageV2, Standard storage2, BlobStorage, Standard storage3, BlockBlobStorage, Premium storage4, FileStorage, Premium You plan to manage the data stored in the accounts by using lifecycle management rules. To which storage accounts can you apply lifecycle management rules? A. storage1 only B. storage1 and storage2 only C. storage3 and storage4 only D. storage1, storage2, and storage3 only E. storage1, storage2, storage3, and storage4
D. storage1, storage2, and storage3 only The lifecycle management feature is available in all Azure regions for general-purpose v2 (GPv2) accounts, Blob storage accounts, premium block blob storage accounts, and Azure Data Lake Storage Gen2 accounts. Lifecycle management operates on blob data; a FileStorage account (storage4) contains only file shares, so lifecycle rules cannot be applied to it.
36
You create an Azure Storage account named contosostorage. You plan to create a file share named data. Users need to map a drive to the data file share from home computers that run Windows 10. Which outbound port should you open between the home computers and the data file share? A. 80 B. 443 C. 445 D. 3389
C. 445, as this is the port for the SMB file-sharing protocol. Server Message Block (SMB) is used to connect to an Azure file share over the internet, and the SMB protocol requires TCP port 445 to be open. Incorrect answers: A: Port 80 is used for HTTP to a web server. B: Port 443 is used for HTTPS to a web server. D: Port 3389 is used for Remote Desktop Protocol (RDP) connections. Reference: https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
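Many consumer ISPs block outbound TCP 445, which is a common reason a `net use` mount from a home computer fails. A small, hedged sketch of an outbound-port reachability check (illustrative helper, not part of any Azure SDK; the hostname in the comment is hypothetical):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds.
    Useful to verify that outbound TCP 445 (SMB) is not blocked before
    trying to map a drive to an Azure file share."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against a file share endpoint:
# tcp_port_open("contosostorage.file.core.windows.net", 445)
```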
37
You have an Azure subscription that contains an Azure Storage account named storageaccount1. You export storageaccount1 as an Azure Resource Manager template. The template contains the following sections. { "type": "Microsoft.Storage/storageAccount", "apiVersion": "2019-06-01", "name": "storageaccount1", "location": "eastus", "sku": { "name": "Standard_LRS", "tier": "Standard" }, "kind": "StorageV2", "properties": { "networkAcls": { "bypass": "AzureServices", "virtualNetworkRules": [], "ipRules": [], "defaultAction": "Allow", }, "supportsHttpsTrafficOnly": true, "encryption": { "services": { "file": { "keyType": "Account", "enabled": true }, "blob": { "keyType": "Account", "enabled": true }, }, "keySource": "Microsoft.Storage" }, "accessTier": "Hot" } } Yes/No A server that has a public IP address of 131.107.103.10 can access storageaccount1. Individual blobs in storageaccount1 can be set to use the archive tier. Global administrations in Azure Active Directory (Azure AD) can access a file share hosted in storageaccount1 by using their Azure AD credentials.
Box 1: Yes. virtualNetworkRules and ipRules are empty and defaultAction is Allow, so any public IP, including 131.107.103.10, can reach the account. Box 2: Yes. The account kind is StorageV2, which supports tiering, so individual blobs can be set to the archive tier. Box 3: No. Azure AD authentication in the portal applies to blob data, not to file shares: to access blob data with Azure AD credentials, a user needs a data access role (such as Storage Blob Data Contributor) plus the Azure Resource Manager Reader role, while file shares are accessed with the account key. Ref https://docs.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview https://docs.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
38
You have an Azure subscription that contains a storage account named storage1. You have the devices shown in the following table. Name Platform Device1 Windows 10 Device2 Linux Device3 macOS From which devices can you use AzCopy to copy data to storage1? A. Device 1 only B. Device1, Device2 and Device3 C. Device1 and Device2 only D. Device1 and Device3 only
B. Device1, Device2 and Device3 AzCopy is supported in all these three operating systems: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10#download-azcopy
39
You have an Azure Storage account named storage1 that contains a blob container named container1. You need to prevent new content added to container1 from being modified for one year. What should you configure? A. the access tier B. an access policy C. the Access control (IAM) settings D. the access level
B. an access policy With a time-based retention policy, users can set policies to store data for a specified interval. When a time-based retention policy is set, objects can be created and read, but not modified or deleted. After the retention period has expired, objects can be deleted but not overwritten. https://docs.microsoft.com/en-us/azure/storage/blobs/immutable-storage-overview?tabs=azure-portal https://docs.microsoft.com/en-us/azure/storage/blobs/immutable-time-based-retention-policy-overview
40
You have an Azure Storage account named storage1 that contains a blob container. The blob container has a default access tier of Hot. Storage1 contains a container named container1. You create lifecycle management rules in storage1 as shown in the following table. Name, Rule scope, Blob type, Blob subtype, Rule block, Prefix match Rule 1, Limit blobs by using filters., Block blobs, Base blobs, If base blobs were not modified for two days, move to archive storage. If base blobs were not modified for nine days, delete the blob., container1/Dep1 Rule2, Apply to all blobs in storage1., Block blobs, Base blobs, If base blobs were not modified for three days, move to cool storage. If base blobs were not modified for nine days, move to archive storage., Not applicable You perform the actions shown in the following table. Date Action October 1, Upload three files named Dep1File1.docx, File2.docx, and File3.docx to container1. October 2, Edit Dep1File1.docx and File3.docx. October 5, Edit File2.docx. Yes/No On October 10, you can read Dep1File1.docx. On October 10, you can read File2.docx. On October 10, you can read File3.docx.
The question asks if you can read the files on the 10th, not if they still exist. Files in the archive tier CANNOT be read as documented by Microsoft: "While a blob is in archive storage, the blob data is offline and can't be read or modified. To read or download a blob in archive, you must first rehydrate it to an online tier." https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers Dep1File1.docx was last updated 8 days ago, and would be in archive tier File2.docx was last updated 5 days ago, and would be in cool tier File3.docx was last updated 8 days ago and would be in cool tier Dep1File1 > No cannot be read File2 > Yes can be read File3 > Yes can be read
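The tier of each file on October 10 follows from simple date arithmetic on the last-modified dates. A hypothetical simulation of the two rules (thresholds use "not modified for more than N days", i.e., strictly greater; illustrative only, not how Azure evaluates policies internally):

```python
from datetime import date

def blob_state(last_modified: date, today: date, matches_dep1_prefix: bool) -> str:
    """Simulate the two lifecycle rules from the table (illustrative only).
    Rule1 (prefix container1/Dep1): archive after 2 days, delete after 9 days.
    Rule2 (all blobs): cool after 3 days, archive after 9 days."""
    age = (today - last_modified).days
    if matches_dep1_prefix:          # Rule1
        if age > 9:
            return "deleted"
        if age > 2:
            return "archive"         # offline: cannot be read without rehydration
    if age > 9:                      # Rule2
        return "archive"
    if age > 3:
        return "cool"                # online: still readable
    return "hot"

today = date(2023, 10, 10)
print(blob_state(date(2023, 10, 2), today, True))    # Dep1File1 -> archive (No)
print(blob_state(date(2023, 10, 5), today, False))   # File2     -> cool (Yes)
print(blob_state(date(2023, 10, 2), today, False))   # File3     -> cool (Yes)
```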
41
You are configuring Azure Active Directory (Azure AD) authentication for an Azure Storage account named storage1. You need to ensure that the members of a group named Group1 can upload files by using the Azure portal. The solution must use the principle of least privilege. Which two roles should you configure for storage1? A. Storage Account Contributor B. Storage Blob Data Contributor C. Reader D. Contributor E. Storage Blob Data Reader
B. Storage Blob Data Contributor C. Reader To upload blobs in the Azure portal with Azure AD credentials, a user needs both: a data access role, here Storage Blob Data Contributor (Storage Blob Data Reader would not allow uploads), and the Azure Resource Manager Reader role at a minimum. The Reader role is an Azure Resource Manager role that permits users to view storage account resources, but not modify them. It does not provide read permissions to data in Azure Storage, only to account management resources, but it is necessary so that users can navigate to blob containers in the Azure portal. For example, if you assign the Storage Blob Data Contributor role to user Mary at the level of a container named sample-container, Mary is granted read, write, and delete access to all of the blobs in that container. However, the Storage Blob Data Contributor role by itself does not provide sufficient permissions to navigate through the portal to the blob in order to view it. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal
42
You have an Azure Storage account named storage1 that stores images. You need to create a new storage account and replicate the images in storage1 to the new account by using object replication. How should you configure the new account? Account type: ... - StorageV2 only - StorageV2 or FileStorage only - StorageV2 or BlobStorage only - StorageV2, BlobStorage, or FileStorage Object type to create in the new account: ... - Container - File share - Table - Queue
Account type: - StorageV2 or BlobStorage only Object type to create in the new account: - Container Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/object-replication-overview Object Replication supports General Purpose V2 and Premium Blob accounts. Blob versioning should be enabled on both the source and destination storage account. Change feed is enabled on the source storage account.
43
You have an on-premises server that contains a folder named D:\Folder1. You need to copy the contents of D:\Folder1 to the public container in an Azure Storage account named contosodata. Which command should you run? A. https://contosodata.blob.core.windows.net/public B. azcopy sync D:\folder1 https://contosodata.blob.core.windows.net/public --snapshot C. azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive D. az storage blob copy start-batch D:\Folder1 https://contosodata.blob.core.windows.net/public
C. azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive The azcopy copy command copies a directory (and all of the files in that directory) to a blob container. The result is a directory in the container by the same name. Incorrect answers: A: This is just the URL of the storage account, not a command. B: The azcopy sync command replicates the source location to the destination location; a file is skipped if the last modified time in the destination is more recent. D: The az storage blob copy start-batch command copies multiple blobs to a blob container, not local files. Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-blobs https://docs.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-copy
44
You have an Azure subscription. In the Azure portal, you plan to create a storage account named storage1 that will have the following settings: ✑ Performance: Standard ✑ Replication: Zone-redundant storage (ZRS) ✑ Access tier (default): Cool ✑ Hierarchical namespace: Disabled You need to ensure that you can set Account kind for storage1 to BlockBlobStorage. Which setting should you modify first? A. Performance B. Replication C. Access tier (default) D. Hierarchical namespace
A. Performance https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal Select Standard performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see Types of storage accounts. Select Premium for scenarios requiring low latency. After selecting Premium, select the type of premium storage account to create. The following types of premium storage accounts are available: Block blobs File shares Page blobs
45
You have an Azure subscription that contains the storage accounts shown in the following table. Name Azure Active Directory (Azure AD) authentication Contents storage1 Enabled A blob container named container1 that has a public access level of No public access storage2 Enabled A file share named share1 You plan to use AzCopy to copy a blob from container1 directly to share1. You need to identify which authentication method to use when you use AzCopy. What should you identify for each account? Select and Place: storage1: ... storage2: ... Methods: - OAuth - Anonymous - A storage account access key - A shared access signature (SAS) token
storage1: - A shared access signature (SAS) token You can provide authorization credentials by using Azure Active Directory (AD), or by using a Shared Access Signature (SAS) token. For Blob storage you can use Azure AD & SAS. Note: In the current release, if you plan to copy blobs between storage accounts, you'll have to append a SAS token to each source URL. You can omit the SAS token only from the destination URL. storage2: - A shared access signature (SAS) token For File storage you can only use SAS. Blob Storage: Support both Azure Active Directory (AD) and Shared Access Signature (SAS) token. File Storage: Only Shared Access Signature (SAS) token is supported. Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json#authorize-azcopy
46
You create an Azure Storage account. You plan to add 10 blob containers to the storage account. For one of the containers, you need to use a different key to encrypt data at rest. What should you do before you create the container? A. Generate a shared access signature (SAS). B. Modify the minimum TLS version. C. Rotate the access keys. D. Create an encryption scope.
D. Create an encryption scope In Azure Storage, encryption of data at rest is done using Azure Storage Service Encryption (SSE). Azure Storage SSE uses Microsoft-managed encryption keys to encrypt the data in the storage account. In the scenario described, you need to use a different key to encrypt data at rest for one of the containers. To do this, you need to create an encryption scope, which is a named configuration that defines the default encryption settings for a container. By creating an encryption scope, you can use a customer-managed key, stored in Azure Key Vault, to encrypt the data in that specific container. Encryption scopes enable you to manage encryption with a key that is scoped to a container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/encryption-scope-overview
47
You have an Azure subscription. The subscription contains a storage account named storage1 that has the lifecycle management rules shown in the following table. Name Blob prefix If base're last modif. more than (days ago) Then Rule1 container1/ 3 days Move to archive storage Rule2 Not applicable 5 days Move to cool storage Rule3 container2/ 10 days Delete the blob Rule4 container2/ 15 days Move to archive storage On June 1, you store two blobs in storage1 as shown in the following table. Name Location Access tier File1 container1 Hot File2 container2 Hot Yes/No On June 6, File1 will be stored in the Cool access tier. On June 1, File2 will be stored in the Cool access tier. On June 16, File2 will be stored in the Archive access tier.
On June 6, File1 will be stored in the Cool access tier. - No On June 1, File2 will be stored in the Cool access tier. - No On June 16, File2 will be stored in the Archive access tier. - No When multiple rules define actions on the same blob, lifecycle management applies the least expensive action (delete is cheaper than archive, which is cheaper than cool). On June 6, File1 is 5 days old: Rule1 (archive after 3 days) has already moved it to Archive, and Rule2 (cool after more than 5 days) has not yet triggered, so it is not Cool. On June 1, File2 has just been uploaded to the Hot tier. On June 16, File2 is 15 days old: Rule2 (cool) and Rule3 (delete) both match, delete is the least expensive action, so the blob is deleted rather than archived.
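The interaction of the four rules can be simulated with a small sketch that encodes the least-expensive-action precedence (delete < archive < cool) that lifecycle management applies when several rules match the same blob. Illustrative only, not Azure's actual policy engine:

```python
from datetime import date

# (prefix or None for all blobs, 'more than N days' threshold, action)
RULES = [
    ("container1/", 3, "archive"),   # Rule1
    (None,          5, "cool"),      # Rule2
    ("container2/", 10, "delete"),   # Rule3
    ("container2/", 15, "archive"),  # Rule4
]

# When several rules act on the same blob, lifecycle management applies
# the least expensive action: delete < archive < cool.
COST = {"delete": 0, "archive": 1, "cool": 2}

def blob_state(path: str, last_modified: date, today: date) -> str:
    age = (today - last_modified).days
    matched = [action for prefix, days, action in RULES
               if (prefix is None or path.startswith(prefix)) and age > days]
    if not matched:
        return "hot"
    winner = min(matched, key=COST.__getitem__)
    return "deleted" if winner == "delete" else winner

stored = date(2023, 6, 1)
print(blob_state("container1/File1", stored, date(2023, 6, 6)))   # archive, not cool
print(blob_state("container2/File2", stored, date(2023, 6, 1)))   # hot
print(blob_state("container2/File2", stored, date(2023, 6, 16)))  # deleted, not archive
```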
48
You have an Azure subscription. You plan to deploy a storage account named storage1 by using the following Azure Resource Manager (ARM) template. "properties" : { "allowBlobPublicAccess" : true, }, "sku": { "name" : "Standard_LRS" }, "kind" : "Storage_V2" "properties" : { "restorePolicy" : { "enabled" : true, "days" : 6 }, "deleteRetentionPolicy" : { "enabled" : true, "days" : 7 }, "containerDeleteRetentionPolicy" : { "enabled" : true, "days" : 7 } } Yes/No Changes made to the data in storage1 can be rolled back after seven days. Only users located in the East US Azure region can connect to storage1. Three copies of storage1 will be maintained in the East US Azure region.
Changes made to the data in storage1 can be rolled back after seven days. - No Only users located in the East US Azure region can connect to storage1. - No Three copies of storage1 will be maintained in the East US Azure region. - Yes restorePolicy is enabled for 6 days, so point-in-time restore can roll back changes only within the last 6 days, not after seven. allowBlobPublicAccess is true and no network restrictions are defined, so connections are not limited to any region. The sku is Standard_LRS, so three copies of the data are kept within the primary region.
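The first answer hinges on the numbers in the template: restorePolicy.days is 6, so point-in-time restore can only roll back changes made within that window. A tiny illustrative check (not an Azure API):

```python
def can_roll_back(days_since_change: int, restore_policy_days: int = 6) -> bool:
    """restorePolicy in the template is enabled for 6 days, so point-in-time
    restore can only roll back changes made within that window.
    Illustrative check, not an Azure API."""
    return days_since_change <= restore_policy_days

print(can_roll_back(6))  # True  - inside the 6-day restore window
print(can_roll_back(7))  # False - hence the first answer is No
```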
49
You have an on-premises server that contains a folder named D:\Folder1. You need to copy the contents of D:\Folder1 to the public container in an Azure Storage account named contosodata. Which command should you run? A. az storage blob copy start D:\Folder1 https://contosodata.blob.core.windows.net/public B. azcopy sync D:\folder1 https://contosodata.blob.core.windows.net/public --snapshot C. azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive D. az storage blob copy start-batch D:\Folder1 https://contosodata.blob.core.windows.net/public
C. azcopy copy D:\folder1 https://contosodata.blob.core.windows.net/public --recursive
50
You have an Azure subscription that contains a storage account named storage1. The storage1 account contains a container named container1. You need to create a lifecycle management rule for storage1 that will automatically move the blobs in container1 to the lowest-cost tier after 90 days. How should you complete the rule? "baseBlob" : { [...] - "enableAutoTierToHotFromCool": { - "tierToArchive": { - "tierToCool": { "daysAfterModificationGreaterThan": 90 "filter": { [...] - "blobIndexMatch": [ - "blobTypes": [ - "prefixMatch": [ "container1/"
"baseBlob" : - "tierToArchive": { "filter": - "prefixMatch": [ - tierToArchive because it's the lowest cost tier, and doesnt say anyhting about needing to read data after 90 days. However, rehydration costs will occur if they did need to read it. - prefixMatch because we only want the blob in the container1.
51
You have an Azure subscription that contains a virtual machine named VM1. You need to back up VM1. The solution must ensure that backups are stored across three availability zones in the primary region. Which three actions should you perform in sequence? Configure a replication policy. Set Replication to Zone-redundant storage (ZRS). For VM1, create a backup policy and configure the backup. Set Replication to Locally-redundant storage (LRS). Create a Recovery Services vault.
Create a Recovery Services vault. Set Replication to Zone-redundant storage (ZRS). For VM1, create a backup policy and configure the backup. 1. Create a Recovery Services vault: This is the first step to set up a backup solution in Azure. The Recovery Services vault will store the backup data. 2. Set Replication to Zone redundant storage (ZRS): This ensures that the backup data is replicated across three availability zones in the primary region, providing high availability and durability. 3. For VM1, create a backup policy and configure the backup: This step involves creating a backup policy that defines the schedule and retention of the backups, and then applying this policy to VM1 to start the backup process.
52
You have an Azure subscription named Subscription1. You have 5 TB of data that you need to transfer to Subscription1. You plan to use an Azure Import/Export job. What can you use as the destination of the imported data? A. an Azure Cosmos DB database B. Azure File Storage C. Azure SQL Database D. a virtual machine
B. Azure File Storage Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files. The maximum size of an Azure Files Resource of a file share is 5 TB. Note: There are several versions of this question in the exam. The question has two correct answers: 1. Azure File Storage or 2. Azure Blob Storage The question can have other incorrect answer options, including the following: ✑ Azure Data Lake Store ✑ Azure SQL Database ✑ Azure Data Factory Reference: https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service
53
You have an Azure subscription that contains the resources shown in the following table. Name Type storage1 Storage account container1 Blob container table1 Storage table You need to perform the tasks shown in the following table. Name Type Task1 Create a new storage account. Task2 Upload an append blob to container1. Task3 Create a file share in storage1. Task4 Add data to table1. Which tasks can you perform by using Azure Storage Explorer? A. Task1 and Task3 only B. Task1, Task2, and Task3 only C. Task1, Task3, and Task4 only D. Task2, Task3, and Task4 only E. Task1, Task2, Task3, and Task4
D. Task2, Task3, and Task4 only Azure Storage Explorer manages data inside existing storage accounts (blobs, file shares, queues, and tables), but it cannot create a new storage account, so Task1 is not possible.
54
You have an Azure AD user named User1 and a read-access geo-redundant storage (RA-GRS) account named contoso2023. You need to meet the following requirements: * User1 must be able to write blob data to contoso2023. * The contoso2023 account must fail over to its secondary endpoint. Which two settings should you configure? Storage account: contoso2023 - Diagnose and solve problems - Access Control (IAM) - Data migration - Events - Storage browser Data storage - Containers - File shares - Queues - Tables Security+ networking - Networking - Azure CDN - Access keys - Shared access signature - Encryption - Microsoft Defender for Cloud Data management - Geo-replication - Data protection - Object replication - Blob inventory - Static website - Lifecycle management
- Access Control (IAM) - Geo-replication "Geo-replication" is now changed to "Redundancy" by name. They are the same settings, just a new name.
55
You have an Azure subscription that contains a storage account named storage1. You plan to create a blob container named container1. You need to use customer-managed key encryption for container1. Which key should you use? A. an EC key that uses the P-384 curve only B. an EC key that uses the P-521 curve only C. an EC key that uses the P-384 curve or P-521 curve only D. an RSA key with a key size of 4096 only E. an RSA key type with a key size of 2048, 3072, or 4096 only
E. an RSA key type with a key size of 2048, 3072, or 4096 only "Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096" - customer-managed keys for storage must be RSA; elliptic-curve keys are not supported. https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview#enable-customer-managed-keys-for-a-storage-account
56
You have an Azure subscription that contains a user named User1 and a storage account named storage1. The storage1 account contains the resources shown in the following table. Name Type container1 Container folder1 File share Table1 Table User1 is assigned the following roles for storage1: * Storage Blob Data Reader * Storage Table Data Contributor * Storage File Data SMB Share Contributor For storage1, you create a shared access signature (SAS) named SAS1 that has the settings shown in the following exhibit. (Click the Exhibit tab.) [] Blob [] File [] Queue [+] Table All permissions selected To which resources can User1 write by using SAS1 and key1? key1: ... SAS1: ... - Table 1 only - Table1 and container1 only - folder1 and Table1 only - folder1 and container1 only - Table 1, folder1, and container1
key1: - Table1, folder1, and container1
SAS1: - Table1 only
key1 is a storage account access key, created automatically with the storage account. With an access key you effectively own the storage account, so it grants write access to all three resources. SAS1 allows only the Table service (as shown in the exhibit), so it can write to Table1 only. ref: https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal#regenerate-access-keys
57
You have an Azure subscription that contains the storage account shown in the following exhibit. Stored access policies: {list of two} Immutable blob storage policies {list of one} Dropdowns: The maximum number of additional stored access policies that you can create for container1 is [answer choice]: 0, 1, 3, 5, 6 The maximum number of additional immutable blob storage policies that you can create for container1 is [answer choice]: 0, 1, 2, 4, 5
The maximum number of additional stored access policies that you can create for container1 is [answer choice]: 3
The maximum number of additional immutable blob storage policies that you can create for container1 is [answer choice]: 1
Stored access policies: a container supports at most 5, and 2 already exist, so 3 more can be created. Immutable blob storage policies: a container supports at most 2 (one legal hold policy and one time-based retention policy), and 1 already exists, so 1 more can be created.
58
You have an Azure subscription. The subscription contains a storage account named storage1 that has the lifecycle management rules shown in the following table.
Name If base blobs were last modified more than (days ago) Then
Rule1 5 days Move to cool storage
Rule2 5 days Delete the blob
Rule3 5 days Move to archive storage
On June 1, you store a blob named File1 in the Hot access tier of storage1. What is the state of File1 on June 7? A. stored in the Cool access tier B. stored in the Archive access tier C. stored in the Hot access tier D. deleted
D. deleted If you define more than one action on the same blob, lifecycle management applies the least expensive action to the blob. For example, action delete is cheaper than action tierToArchive. Action tierToArchive is cheaper than action tierToCool. https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview
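The behavior can be seen by putting all three actions into a single lifecycle management policy, as in this hedged sketch (the rule name is illustrative); when multiple actions apply to the same blob, the least expensive one runs:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "rule-combined",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 5 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 5 },
            "delete": { "daysAfterModificationGreaterThan": 5 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
```

After 5 days all three actions match, the delete action is the least expensive, and the blob is removed — matching answer D.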
59
You have an Azure subscription that contains the storage accounts shown in the following table. Name Kind Redundancy storage1 StorageV2 Geo-zone-redundant storage (GZRS) storage2 BlobStorage Read-access geo-redundant storage (RA-GRS) storage3 BlockBlobStorage Zone-redundant storage (ZRS) You need to identify which storage accounts support lifecycle management, and which storage accounts support moving data to the Archive access tier. Which storage accounts should you use? Answer Area Lifecycle management: The Archive access tier: storage1 only storage2 only storage1 and storage3 only storage2 and storage3 only storage1, storage2, and storage3
Lifecycle management: storage1, storage2, and storage3 Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. The Archive access tier: storage2 only Only storage accounts that are configured for LRS, GRS, or RA-GRS support moving blobs to the archive tier. The archive tier isn't supported for ZRS, GZRS, or RA-GZRS accounts. https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview
60
You have an Azure subscription that contains a storage account named storage1. The storage1 account contains a container named container1. You create a blob lifecycle rule named rule1. You need to configure rule1 to automatically move blobs that were NOT updated for 45 days from container1 to the Cool access tier. How should you complete the rule? "tierToCool" : { [...] : 45 - "daysAfterCreationGreaterThan" - "daysAfterLastAccessTimeGreaterThan" - "daysAfterModificationGreaterThan" "blobTypes": { [...] - "AppendBlob" - "BlockBlob" - "PageBlob"
"tierToCool" : - "daysAfterModificationGreaterThan" "blobTypes": - "BlockBlob" https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview#rule-actions daysAfterModificationGreaterThan - The condition for actions on a current version of a blob Tiering is not yet supported in a premium block blob storage account. For all other accounts, tiering is allowed only on block blobs and not for append and page blobs. tierToCool - Supported for blockBlob
61
You plan to create an Azure Storage account named storage1 that will contain a file share named share1. You need to ensure that share1 can support SMB Multichannel. The solution must minimize costs. How should you configure storage? A. Premium performance with locally-redundant storage (LRS) B. Standard performance with zone-redundant storage (ZRS) C. Premium performance with geo-redundant storage (GRS) D. Standard performance with locally-redundant storage (LRS)
A. Premium performance with locally-redundant storage (LRS)
According to the documentation, SMB Multichannel is available only on premium file shares (the FileStorage account kind), which support LRS and ZRS. Of the premium options, LRS minimizes cost. https://learn.microsoft.com/en-us/azure/storage/files/storage-files-smb-multichannel-performance
62
You have an Azure subscription that contains a storage account named storage1. You plan to use conditions when assigning role-based access control (RBAC) roles to storage1. Which storage1 services support conditions when assigning roles? A. containers only B. file shares only C. tables only D. queues only E. containers and queues only F. files shares and tables only
E. containers and queues only Conditions in Role-Based Access Control (RBAC) are supported for specific storage account services, and as of recent updates in Azure, containers (blob storage) and queues support conditions for role assignments. This feature allows for fine-grained access control based on specific conditions, such as IP addresses, resource tags, or other parameters. Other services, like file shares and tables, do not yet support conditions when assigning RBAC roles.
63
You have an Azure subscription that contains the resource groups shown in the following table. Name Region RG1 West US RG2 West US RG3 East US The subscription contains the virtual networks shown in the following table. Name Resource group Region Subnet Subnet IP address space VNet1 RG1 West US Subnet1 10.1.0.0/16 VNet2 RG2 Central US Subnet2 10.2.0.0/24 VNet3 RG3 East US Subnet3 10.3.0.0/24 You plan to deploy the Azure Kubernetes Service (AKS) clusters shown in the following table. Name Resource group Region Number of nodes Network configuration AKS1 RG1 West US 30 Azure Container Network Interface (CNI) AKS2 RG2 West US 100 Azure Container Network Interface (CNI) AKS3 RG3 East US 50 Kubenet Yes/No You can deploy AKS1 to VNet2. You can deploy AKS2 to VNet1. You can deploy AKS3 to VNet3.
You can deploy AKS1 to VNet2. - No You can deploy AKS2 to VNet1. - Yes You can deploy AKS3 to VNet3. - Yes
1. No - the subnet is not in the same location as the cluster. "If you want to select an existing virtual network, make sure it's in the same location and Azure subscription as your Kubernetes cluster." https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni
2. Yes - the Azure CNI network is in the same location as the cluster, and the node count stays within the subnet's total pod and IP address limits.
3. Yes - "Bring your own subnet and route table with kubenet. With kubenet, a route table must exist on your cluster subnet(s). AKS supports bringing your own existing subnet and route table." https://learn.microsoft.com/en-us/azure/aks/configure-kubenet#prerequisites
64
You plan to deploy several Azure virtual machines that will run Windows Server 2019 in a virtual machine scale set by using an Azure Resource Manager template. You need to ensure that NGINX is available on all the virtual machines after they are deployed. What should you use? A. the Publish-AzVMDscConfiguration cmdlet B. Azure Application Insights C. a Desired State Configuration (DSC) extension D. Azure AD Application Proxy
C. a Desired State Configuration (DSC) extension
Note: There are several versions of this question in the exam. The question has two correct answers: 1. a Desired State Configuration (DSC) extension 2. Azure Custom Script Extension The question can have other incorrect answer options, including the following: ✑ the Publish-AzVMDscConfiguration cmdlet ✑ Azure Application Insights Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/tutorial-install-apps-template https://docs.microsoft.com/en-us/samples/mspnp/samples/azure-well-architected-framework-sample-state-configuration https://docs.microsoft.com/en-us/azure/architecture/framework/devops/automation-configuration
65
You have an Azure subscription that has offices in the East US and West US Azure regions. You plan to create the storage account shown in the following exhibit. Dropdowns: To minimize the network costs of accessing adatum22, modify the [answer choice] setting: - Default routing tier - Endpoint type - Location - Network connectivity - Performance After adatum22 is created, you can modify the [answer choice] setting: - Enable infrastructure encryption - Enable support for customer-managed keys - Encryption type - Premium account type
To minimize the network costs of accessing adatum22, modify the [answer choice] setting: - Default routing tier Choosing the Internet routing preference instead of the Microsoft network routing preference lowers network traffic costs.
After adatum22 is created, you can modify the [answer choice] setting: - Encryption type Infrastructure encryption and the premium account type can only be set when the account is created.
https://learn.microsoft.com/en-us/azure/storage/common/network-routing-preference https://learn.microsoft.com/en-gb/azure/storage/common/infrastructure-encryption-enable?tabs=portal
66
You have an Azure subscription. You plan to deploy a new storage account. You need to configure encryption for the account. The solution must meet the following requirements: * Use a customer-managed key stored in a key vault. * Use the maximum supported bit length. Which type of key and which bit length should you use? Dropdowns: Key: - AES - 3DES - RSA Bit length: - 2048 - 3072 - 4096 - 8192
Key: - RSA Bit length: - 4096 https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview#key-vault-requirements
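In an ARM template, the corresponding storage account encryption settings might look like the fragment below (the vault URI and key name are placeholders, not values from the question):

```json
{
  "properties": {
    "encryption": {
      "keySource": "Microsoft.Keyvault",
      "keyvaultproperties": {
        "keyvaulturi": "https://myvault.vault.azure.net",
        "keyname": "storage-cmk-rsa-4096"
      }
    }
  }
}
```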
67
You have an Azure Storage account that contains 5,000 blobs accessed by multiple users. You need to ensure that the users can view only specific blobs based on blob index tags. What should you include in the solution? A. a role assignment condition B. a stored access policy C. just-in-time (JIT) VM access D. a shared access signature (SAS)
A. a role assignment condition
An Azure role assignment condition is an optional check that you can add to a role assignment for more fine-grained access control. For example, a condition can require that a blob carry a specific index tag before it can be read. https://learn.microsoft.com/en-us/azure/role-based-access-control/conditions-role-assignments-portal (Some discussions suggest D, a SAS, but a SAS cannot filter access by blob index tags.)
68
You have an Azure Storage account named storage1. For storage1, you create an encryption scope named Scope1. Which storage types can you encrypt by using Scope1? A. file shares only B. containers only C. file shares and containers only D. containers and tables only E. file shares, containers, and tables only F. file shares, containers, tables, and queues
B. containers only
Encryption scopes let you manage encryption with a key scoped to a container or to an individual blob; since "blob" is not among the answer choices, containers only is correct. https://learn.microsoft.com/en-us/azure/storage/blobs/encryption-scope-overview#how-encryption-scopes-work Table storage and queue storage do not support encryption scopes; they use Microsoft-managed keys by default.
69
You have an Azure subscription. You plan to create a role definition to meet the following requirements: * Users must be able to view the configuration data of a storage account. * Users must be able to perform all actions on a virtual network. * The solution must use the principle of least privilege. What should you include in the role definition for each requirement? Perform all actions on a virtual network: - "Microsoft.Network/virtualNetworks/*" - "Microsoft.Network/virtualNetworks/delete" - "Microsoft.Network/virtualNetworks/write" View the configuration data of a storage account: - "Microsoft.Storage/storageAccounts/*" - "Microsoft.Storage/storageAccounts/read" - "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
Perform all actions on a virtual network: - "Microsoft.Network/virtualNetworks/*"
View the configuration data of a storage account: - "Microsoft.Storage/storageAccounts/read"
The wildcard grants every virtualNetworks action, while the read action alone returns a storage account's configuration without granting broader rights, satisfying least privilege.
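A least-privilege custom role combining the two actions could be sketched as follows (the role name and subscription ID are placeholders, not values from the question):

```json
{
  "Name": "Network Operator and Storage Config Viewer",
  "Description": "All virtual network actions; view storage account configuration.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/*",
    "Microsoft.Storage/storageAccounts/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}
```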
70
You have an Azure subscription that contains a virtual machine named VM1. To VM1, you plan to add a 1-TB data disk that meets the following requirements: * Provides data resiliency in the event of a datacenter outage. * Provides the lowest latency and the highest performance. * Ensures that no data loss occurs if a host fails. You need to recommend which type of storage and host caching to configure for the new data disk. Storage type: - Premium SSD that uses locally-redundant storage (LRS) - Premium SSD that uses zone-redundant storage (ZRS) - Standard SSD that uses locally-redundant storage (LRS) - Standard SSD that uses zone-redundant storage (ZRS) Host caching: - None - Read-only - Read/Write
Storage type: - Premium SSD that uses zone-redundant storage (ZRS)
Host caching: - Read-only
ZRS provides resiliency against a datacenter (zone) outage, and Premium SSD gives the lowest latency of the listed options. Read-only host caching improves read performance while ensuring no data loss if the host fails; with Read/Write caching, writes still held in the host cache could be lost.
71
You have an Azure virtual machine named VM1 and an Azure key vault named Vault1. On VM1, you plan to configure Azure Disk Encryption to use a key encryption key (KEK). You need to prepare Vault1 for Azure Disk Encryption. Which two actions should you perform on Vault1? A. Select Azure Virtual machines for deployment. B. Create a new key. C. Create a new secret. D. Configure a key rotation policy. E. Select Azure Disk Encryption for volume encryption.
B. Create a new key.
E. Select Azure Disk Encryption for volume encryption.
1. You need a key in the key vault to act as the KEK. Azure Disk Encryption uses BitLocker for Windows VMs, and when a KEK is used, the BitLocker encryption key (BEK) is wrapped by the KEK.
2. The key vault itself must be enabled for Azure Disk Encryption for volume encryption. This ensures the vault is set up to work with Azure VMs and their disks.
72
You have an Azure subscription that contains a virtual machine named VM1 and an Azure key vault named KV1. You need to configure encryption for VM1. The solution must meet the following requirements: * Store and use the encryption key in KV1. * Maintain encryption if VM1 is downloaded from Azure. * Encrypt both the operating system disk and the data disks. Which encryption method should you use? A. customer-managed keys B. Confidential disk encryption C. Azure Disk Encryption D. encryption at host
C. Azure Disk Encryption "You can protect your managed disks by using Azure Disk Encryption for Linux VMs, which uses DM-Crypt, or Azure Disk Encryption for Windows VMs, which uses Windows BitLocker, to protect both operating system disks and data disks with full volume encryption. Encryption keys and secrets are safeguarded in your Azure Key Vault subscription. By using the Azure Backup service, you can back up and restore encrypted virtual machines (VMs) that use Key Encryption Key (KEK) configuration." https://learn.microsoft.com/en-us/azure/security/fundamentals/encryption-overview
73
You have an Azure subscription that contains a storage account named storage1. You need to configure a shared access signature (SAS) to ensure that users can only download blobs securely by name. Which two settings should you configure? Allowed services: [+] Blob Allowed resource types: []Service []Container []Object Allowed permissions: []Read []Write []Delete []List []Add []Create []Update etc.
Allowed resource types: [+] Object
Allowed permissions: [+] Read
Allowed services is already set to Blob. The Object resource type scopes the SAS to individual blobs, which is what downloading "by name" requires (Container would be needed to list all blobs in a container). The Read permission allows downloading. Set a start and expiry time for the token, and to ensure secure transfer, restrict the SAS to HTTPS only.
74
You have an Azure subscription that contains a storage account named storage1. The storage1 account contains a container named container1. You need to configure access to container1. The solution must meet the following requirements: * Only allow read access. * Allow both HTTP and HTTPS protocols. * Apply access permissions to all the content in the container. What should you use? A. an access policy B. a shared access signature (SAS) C. Azure Content Delivery Network (CDN) D. access keys
B. a shared access signature (SAS) To configure read access to a container in an Azure Storage account while allowing both HTTP and HTTPS protocols and applying access permissions to all the content in the container, you should use a Shared Access Signature (SAS). Shared Access Signatures (SAS) are used to grant limited access to specific resources in your storage account while maintaining fine-grained control over the allowed operations, including read access. You can create a SAS token with the necessary permissions and then provide this token to the users or applications that need access to the container.
75
You need to create an Azure Storage account named storage1. The solution must meet the following requirements: * Support Azure Data Lake Storage. * Minimize costs for infrequently accessed data. * Automatically replicate data to a secondary Azure region. Which three options should you configure for storage1? A. zone-redundant storage (ZRS) B. the Cool access tier C. geo-redundant storage (GRS) D. the Hot access tier E. hierarchical namespace
B. the Cool access tier C. geo-redundant storage (GRS) E. hierarchical namespace B. The Cool access tier: The Cool access tier is suitable for infrequently accessed data and offers lower storage costs compared to the Hot access tier. C. Geo-redundant storage (GRS): Geo-redundant storage replicates data to a secondary Azure region, providing data redundancy and disaster recovery capabilities. E. Hierarchical namespace: The hierarchical namespace is required for Azure Data Lake Storage, as it enables the storage account to support the data lake's file system structure.
76
You have an Azure Storage account named storage1 that contains two containers named container1 and container2. Blob versioning is enabled for both containers. You periodically take blob snapshots of critical blobs. You create the following lifecycle management policy. "version" : { "tierToCool" : { "daysAfterCreationGreaterThan" : 15 }, "tierToArchive" : { "daysAfterLastTierChangeGreaterThan" : 7, "daysAfterCreationGreaterThan" : 30 } } Yes/No A blob snapshot automatically moves to the Cool access tier after 15 days. A blob version in container2 automatically moves to the Archive access tier after 30 days. A rehydrated version automatically moves to the Archive access tier after 30 days.
A blob snapshot automatically moves to the Cool access tier after 15 days. - No
A blob version in container2 automatically moves to the Archive access tier after 30 days. - No
A rehydrated version automatically moves to the Archive access tier after 30 days. - No
The rule defines actions only for previous versions ("version") of block blobs with a prefix of container1/. Snapshots are a separate action target ("snapshot") and are not covered by this rule. Blobs in container2 do not match the container1/ prefix filter. A rehydrated version does not move back to Archive simply after 30 days, because the daysAfterLastTierChangeGreaterThan : 7 condition must also be met relative to the last tier change (the rehydration).
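For context, the version actions from the exhibit would sit inside a rule shaped like this sketch; the prefixMatch filter reflects the container1/ scope described in the explanation, and the rule name is illustrative:

```json
{
  "enabled": true,
  "name": "version-tiering",
  "type": "Lifecycle",
  "definition": {
    "actions": {
      "version": {
        "tierToCool": { "daysAfterCreationGreaterThan": 15 },
        "tierToArchive": {
          "daysAfterLastTierChangeGreaterThan": 7,
          "daysAfterCreationGreaterThan": 30
        }
      }
    },
    "filters": {
      "blobTypes": [ "blockBlob" ],
      "prefixMatch": [ "container1/" ]
    }
  }
}
```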
77
You have an Azure subscription that contains the storage accounts shown in the following table.
Name Kind Performance Replication Access tier
storage1 Storage (general purpose v1) Premium Locally-redundant storage (LRS) Not applicable
storage2 StorageV2 (general purpose v2) Standard Locally-redundant storage (LRS) Cool
storage3 StorageV2 (general purpose v2) Standard Read-access geo-redundant storage (RA-GRS) Hot
storage4 BlobStorage Premium Locally-redundant storage (LRS) Hot
Which storage account can be converted to zone-redundant storage (ZRS) replication? A. storage1 B. storage2 C. storage3 D. storage4
B. storage2
To convert to ZRS, the account kind must be Standard general-purpose v2 (StorageV2), premium block blobs (BlockBlobStorage), or premium file shares (FileStorage), and the conversion starts from LRS (to convert from GRS/RA-GRS, change to LRS first). Only storage2 is StorageV2 with LRS. https://learn.microsoft.com/en-us/azure/storage/common/storage-redundancy#supported-storage-account-types https://learn.microsoft.com/en-us/azure/storage/common/redundancy-migration?tabs=portal#replication-change-table
78
You have an Azure subscription that contains the devices shown in the following table. Name Platform Device1 Windows Device2 Ubuntu Linux Device3 macOS Device4 Android On which devices can you install Azure Storage Explorer? A. Device1 only B. Device1 and Device2 only C. Device1 and Device3 only D. Device1, Device2, and Device3 only E. Device1, Device3, and Device4 only
D. Device1, Device2, and Device3 only
Azure Storage Explorer is a desktop application supported on Windows, macOS, and Linux; there is no Android version. https://learn.microsoft.com/en-us/azure/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows#overview
79
You have an Azure Storage account named storage1 that contains a container named container1. The container1 container stores thousands of image files. You plan to use an Azure Resource Manager (ARM) template to create a blob inventory rule named rule1. You need to ensure that only blobs whose names start with the word finance are stored daily as a CSV file in container1. How should you complete rule1? "blobTypes" : - appendBlob - blockBlob - pageBlob "prefixMatch" : - container1/* - container1/finance - finance
"blobTypes" : - blockBlob "prefixMatch" : - container1/finance
blockBlob is the appropriate type for CSV files: https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs For prefixMatch, see https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-faq — a prefix match string of container1/sub1/ applies to all blobs in the container named container1 whose names begin with sub1/; for example, it matches blobs named container1/sub1/test.txt or container1/sub1/sub2/test.txt. Likewise, container1/finance matches only blobs in container1 whose names start with finance.
80
You have an Azure subscription that contains a storage account named storage1. The storage1 account contains blobs in a container named container1. You plan to share access to storage1. You need to generate a shared access signature (SAS). The solution must meet the following requirements: * Ensure that the SAS can only be used to enumerate and download blobs stored in container1. * Use the principle of least privilege. Which three settings should you enable? Allowed services: [+] Blob Allowed resource types: []Service []Container []Object Allowed permissions: []Read []Write []Delete []List []Add []Create []Update etc.
Allowed resource types: [+] Container
Allowed permissions: [+] Read [+] List
Container is the resource type to target: it "grants access to the content and metadata of any blob in the container, and to the list of blobs in the container." Read allows downloading; List allows enumerating the blobs. Object targets a single specific blob, so it is not needed here (unlike the earlier question about downloading a blob by name), and specifying it additionally would be redundant because Object access is a subset of what Container grants. List: "List blobs non-recursively." Read: "Read the content, blocklist, properties, and metadata of any blob in the container or directory. Use a blob as the source of a copy operation." Sources: https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas#specify-the-signed-resource-field and https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas#specify-permissions
81
You have an Azure subscription. The subscription contains a storage account named storage1 that has the lifecycle management rules shown in the following table.
Name Blob prefix If base blobs were last modified more than (days ago) Then
Rule1 container1/ 3 days Move to archive storage
Rule2 Not applicable 5 days Move to cool storage
Rule3 container2/ 10 days Delete the blob
Rule4 container2/ 15 days Move to archive storage
On June 1, you store two blobs in storage1 as shown in the following table.
Name Location Access tier
File1 container1 Hot
File2 container2 Hot
Yes/No On June 6, File1 will be stored in the Cool access tier. On June 7, File2 will be stored in the Cool access tier. On June 16, File2 will be stored in the Archive access tier.
On June 6, File1 will be stored in the Cool access tier. - No
Rule1 applies after 3 days, so by June 6 File1 is already in archive storage.
On June 7, File2 will be stored in the Cool access tier. - Yes
Rule2 has no prefix, so it applies to all blobs; File2 moves to cool storage after 5 days.
On June 16, File2 will be stored in the Archive access tier. - No
Rule3 deletes File2 after 10 days (June 11), before Rule4's 15-day archive action can apply.
82
You have an Azure Storage account named contoso2024 that contains the resources shown in the following table.
Name Type Contents
container1 Blob container File1
share1 Azure Files share File2
You have users that have permissions for contoso2024 as shown in the following table.
Name Permission
User1 Reader role
User2 Storage Account Contributor role
User3 Has an access key for contoso2024
The contoso2024 account is configured as shown in the following exhibit. Allow Blob public access: enabled Allow storage account keys access: disabled Yes/No User1 can read File1. User2 can read File2. User3 can read File1 and File2.
User1 can read File1. - Yes
User2 can read File2. - No
User3 can read File1 and File2. - No
Public access is enabled for blob storage, so User1 can read File1 even though the Reader role grants no data-plane access. The Storage Account Contributor role is a management-plane role and cannot access file share data. Access key authorization is disabled on the storage account, so User3's key is rejected. https://learn.microsoft.com/en-us/azure/storage/files/authorize-data-operations-portal#use-your-microsoft-entra-account https://learn.microsoft.com/en-us/azure/storage/blobs/assign-azure-role-data-access?tabs=portal