DP-300 MS Azure Database Administrator Exam Prep Flashcards

1
Q

What Azure storage is recommended for production data files that require maximum throughput?

A. Premium SSD
B. Ultra Disks
C. Standard SSD

(Prepare to maintain SQL Server based databases on Azure)

A

B. Ultra Disks

Azure Disk Type | Max Throughput | Max IOPS | Backing Storage | Use Case
Ultra Disk | 2,000 MB/s | 160,000 | SSD | Highest-performance workloads
Premium SSD | ~900 MB/s | Up to 20,000 | SSD | General-purpose production
Standard SSD | ~500 MB/s | Up to 6,000 | SSD | Low-cost workloads

2
Q

What option should you select to change the autopause setting of a serverless SQL Database?

A. The autopause setting is fixed at one hour and can’t be adjusted.
B. Ensure that there are no active transactions in the database, as the autopause change results in a disconnection.
C. The Azure portal allows you to change the autopause setting at any time.

(Prepare to maintain SQL Server based databases on Azure)

A

C. The Azure portal allows you to change the autopause setting at any time.

For Azure SQL Database (serverless tier), you can configure and change the autopause delay using the Azure portal, PowerShell, or Azure CLI.

The autopause delay determines how long the database remains inactive before pausing to save costs.

You can set the autopause delay anywhere from 60 minutes (1 hour) up to 10,080 minutes (7 days), or disable it entirely.

Changing the autopause setting does not require downtime, but if the DB is paused, it will auto-resume when accessed.

3
Q

Which SQL Server PAAS offering should you choose if you want to restore an on-premises database backup to Azure?

A. Azure SQL Managed Instance
B. Azure SQL Database
C. Both Azure SQL Managed Instance and Azure SQL Database

(Prepare to maintain SQL Server based databases on Azure)

A

A. Azure SQL Managed Instance

If you want to restore an on-premises SQL Server database backup (.bak file) to Azure, the only PaaS option that supports a native RESTORE DATABASE … FROM URL is:

Azure SQL Managed Instance
Supports native SQL Server backup/restore functionality (RESTORE DATABASE … FROM URL)

Allows full compatibility with on-prem SQL Server features.

Ideal for migrating legacy or complex SQL workloads to the cloud with minimal changes.

B. Azure SQL Database

Does not support direct restore from .bak files.

You must use BACPAC export/import or data migration tools, which can be limiting and don't support all SQL Server features.
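
A minimal T-SQL sketch for Managed Instance (the storage URL and SAS token are placeholders):

    CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token>';

    RESTORE DATABASE MyDb
    FROM URL = 'https://mystorage.blob.core.windows.net/backups/MyDb.bak';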

4
Q

What type of storage offers the lowest latency in Azure?

A. Ultra SSD
B. Premium SSD
C. Standard SSD

(Deploy IAAS solutions with Azure SQL)

A

A. Ultra SSD

Azure Ultra SSD offers the lowest latency of all Azure disk types.

Key Highlights:
Latency: Typically <1 millisecond
Backed by: High-performance NVMe-based SSDs
IOPS & Throughput: Extremely high and customizable per disk

Comparison:

Storage Type | Typical Latency | Backing Storage | Best For
Ultra SSD | <1 ms | NVMe SSD | Critical workloads needing ultra-low latency
Premium SSD | ~1-2 ms | SSD | Production workloads with high IOPS needs
Standard SSD | ~1-5 ms | SSD | General-purpose workloads

5
Q

To reduce the cost of an Azure virtual machine running SQL Server that you intend to run full-time for three years, which option would you choose?

A. Availability Set
B. Azure Reserved VM Instances
C. Pay as you go licensing

(Deploy IAAS solutions with Azure SQL)

A

B. Azure Reserved VM Instances

If you plan to run an Azure VM (especially with SQL Server) full-time for three years, the most cost-effective option is:

Azure Reserved Virtual Machine Instances (RIs)
Why? You commit to a 1- or 3-year term and in return, get significant cost savings — up to 72% compared to pay-as-you-go pricing.

Ideal for predictable, always-on workloads, like a production SQL Server VM.

Can be combined with Azure Hybrid Benefit (if you already own SQL Server licenses) for even more savings.

Why not the others?
A. Availability Set
This is for high availability, not cost savings.
Helps with SLA and fault tolerance, but doesn’t affect pricing.

C. Pay-as-you-go licensing
Flexible, but most expensive over time, especially for long-term use.

6
Q

Which option would you choose to spread workloads across data centers in a region?

A. Availability Sets
B. Availability Zones
C. Availability Units

(Deploy IAAS solutions with Azure SQL)

A

B. Availability Zones

To spread workloads across data centers within a region, you should use:
Availability Zones
Each Availability Zone is a physically separate data center within an Azure region.

They offer redundancy, high availability, and fault isolation.

Deploying your resources across multiple zones protects against data center-level failures.

Why not the others?
A. Availability Sets - Used to distribute VMs within a single data center (across fault and update domains), not across multiple data centers.

C. Availability Units - This is not an Azure feature — likely a distractor or incorrect term.

7
Q

Which service performs block level replication of your Azure virtual machine?

A. Azure Site Recovery
B. Azure Backup for SQL Server
C. Availability Groups

(Deploy IAAS solutions with Azure SQL)

A

A. Azure Site Recovery

Azure Site Recovery (ASR) is the service that performs block-level replication of entire Azure Virtual Machines, both within Azure and from on-premises to Azure.

Key Features:
Continuous block-level replication of VMs.

Enables disaster recovery by replicating workloads to a secondary region.

Supports failover and failback with minimal data loss.

Works for Windows and Linux VMs, across Hyper-V, VMware, and Azure VMs.

Why not the others?
B. Azure Backup for SQL Server

Focuses on backing up SQL Server databases, not full VM or block-level replication.

File-level backup, not suited for full VM disaster recovery.

C. Availability Groups

A feature of SQL Server for database-level high availability, not for VM replication.

Doesn’t handle full VM replication or protect other workloads.

8
Q

You need to migrate a set of databases that use distributed transactions from on-premises SQL Server. Which of the following options should you choose?

A. Azure SQL Database
B. Azure SQL Database Hyperscale
C. Azure SQL Managed Instance

(Deploy PAAS solutions with Azure SQL)

A

C. Azure SQL Managed Instance
Explanation:
If your databases use distributed transactions (e.g., via MSDTC), the only Azure PaaS offering that supports this feature is:

Azure SQL Managed Instance - Supports Microsoft Distributed Transaction Coordinator (MSDTC).

Compatible with on-premises SQL Server features, making it ideal for lift-and-shift migrations.

Maintains support for cross-database and distributed transactions — critical for complex legacy systems.

Why not the others?
A. Azure SQL Database - Does not support distributed transactions across databases or other data sources.

Designed for modern, single-database applications.

B. Azure SQL Database Hyperscale - Provides high scalability but still doesn’t support distributed transactions.

Same limitations as standard Azure SQL Database in this regard.

9
Q

You’re building a new cloud database that you expect to grow to 50TB. Which is the best option for your database?

A. Azure SQL Managed Instance
B. Serverless Azure SQL Database
C. Azure SQL Database Hyperscale

(Deploy PAAS solutions with Azure SQL)

A

C. Azure SQL Database Hyperscale

If you’re expecting your cloud database to grow to 50TB, you need an architecture that can scale well beyond the traditional 4–8TB limits of standard Azure SQL offerings.

Azure SQL Database Hyperscale is the best fit because:
Supports up to 100TB of database size.

Designed for highly scalable and read-intensive workloads.

Offers rapid scaling of compute and storage independently.

Uses a tiered architecture with page servers, a log service, and read replicas.

Fast backups and restores, even at large sizes.

Why not the others?
A. Azure SQL Managed Instance - Max database size is 16TB (as of now). Not suitable if you plan to exceed that.

B. Serverless Azure SQL Database - Designed for infrequent or unpredictable usage patterns, not large or high-volume databases.
Max size is 1–2TB, depending on tier — far below your 50TB need.

10
Q

You’re building a database for testing purposes that will be used less than 8 hours a day. It’s expected to be 20GB in size. What is your most cost-effective option?

A. Serverless
B. Azure SQL Database Elastic Pools
C. Azure SQL Managed Instance

(Deploy PAAS solutions with Azure SQL)

A

A. Serverless

If you’re building a 20GB database for testing that will be used less than 8 hours a day, your most cost-effective option is:

Azure SQL Database (Serverless tier) - Auto-pauses during inactivity to save on compute costs (you only pay for storage while paused).

Auto-scales compute based on workload demand.

Perfect for intermittent or low-usage workloads like dev/test environments.

You pay per vCore-second and per GB of storage, so you’re only charged when it’s active.

Why not the others?
B. Azure SQL Database Elastic Pools - Good for managing multiple active databases with variable usage. Not as cost-effective if you only have one low-usage database.

C. Azure SQL Managed Instance - Enterprise-grade, fully-featured, but overkill and more expensive for small, short-use test databases.

11
Q

How often do differential backups occur with Azure SQL Managed Instance?

A. Every 1 hour
B. Every 12 hours
C. Every 24 hours

(Deploy PAAS solutions with Azure SQL)

A

A. Every 1 hour

In Azure SQL Managed Instance, backups are automated and managed by Azure. Here’s how it works:

Backup Frequency:
Full backups: Every week (or more frequently for smaller databases)

Differential backups: Every 1 hour

Transaction log backups: Every 5–10 minutes

These backups support point-in-time restore (PITR) within the retention period, which can be up to 35 days (configurable).

12
Q

What is the most important aspect of assessing databases for potential migration blockers before migrating to Azure SQL offerings?

A. Determining the cost of migration
B. Identifying deprecated features and behavior changes
C. Estimating the time required for migration.

(Assess SQL Server databases for migration to Azure SQL)

A

B. Identifying deprecated features and behavior changes

The most critical aspect when assessing databases for migration to Azure SQL (like Azure SQL Database or Managed Instance) is to:

Identify deprecated features and behavior changes that may cause migration blockers.
Why?
Azure SQL offerings do not support all features available in on-premises SQL Server.

Deprecated or unsupported features (like certain SQL Server Agent jobs, cross-database transactions, or extended stored procedures) can break the application if not addressed.

Behavior differences (e.g., collation, security, server-level features) may require code or configuration changes.

Why not the others?
A. Determining the cost of migration - Important for budgeting, but not a technical blocker.

C. Estimating the time required for migration - Helps with planning, but won’t stop a migration from happening like technical incompatibilities can.

Tools to Help:
Azure Data Migration Assistant (DMA): Detects feature incompatibilities and provides recommendations.

Azure Database Migration Service (DMS): Supports large-scale migrations after blockers are addressed.

Summary:
Before migrating, the top priority is to ensure your database won’t break — so focus on identifying deprecated features and behavior changes first.

13
Q

What are the options for running the assessment and Azure recommendation steps at scale?

A. PowerShell and Azure CLI
B. Azure Data Studio
C. Azure portal and Azure Storage Explorer

(Assess SQL Server databases for migration to Azure SQL)

A

A. PowerShell and Azure CLI

When you need to run assessments and gather Azure recommendations at scale — for multiple databases or servers — the best tools are:

PowerShell and Azure CLI
Why?
Allow automation and scripting of assessment tasks across many environments.

Can integrate with tools like the Azure Migrate service or Data Migration Assistant (DMA) in batch mode.

Ideal for enterprise-scale migrations, where manual work through GUI tools becomes inefficient.

Why not the others?
B. Azure Data Studio - Great for interactive use and managing individual databases, but not designed for bulk assessments.

C. Azure portal and Azure Storage Explorer - These tools provide UI-based access to services and storage, not bulk assessment or recommendations.

14
Q

What factors are considered when calculating the monthly estimated cost for Azure SQL Database using the Azure SQL Assessment tool?

A. Database size, compute size and storage size
B. Database size, compute size and database activity
C. Compute size, storage size and database activity

(Assess SQL Server databases for migration to Azure SQL)

A

C. Compute size, storage size, and database activity

The Azure SQL Assessment tool (like the Data Migration Assistant or Azure Migrate) estimates monthly cost based on:

Key Cost Factors:
Compute Size - Number of vCores or DTUs needed, depending on the service tier. Driven by performance requirements (CPU, memory, IOPS).

Storage Size - Total storage provisioned for data and backups.
Includes both data and log files.

Database Activity - Read/write patterns, connection load, and workload characteristics. Affects the choice of performance tier (e.g., General Purpose vs Business Critical). Influences whether serverless, provisioned, or hyperscale is more cost-efficient.

Why not the others?
A. Database size, compute size and storage size - Database size overlaps with storage size, and doesn’t factor in workload, which affects compute needs.

B. Database size, compute size and database activity - Missing storage size, which directly affects cost — especially in Hyperscale or Managed Instances.

Summary:
To calculate cost accurately, Azure SQL Assessment tools consider: Compute size + Storage size + Database activity
This ensures a recommendation that aligns with both performance needs and budget.

15
Q

What are the five main stages of data platform modernization?

A. Investigate, analyze, build, deploy and monitor
B. Discover, assess, plan, transform and validate
C. Design, code, test, deploy and maintain

(Design a SQL Server Migration Strategy)

A

B. Discover, assess, plan, transform, and validate

When modernizing a data platform — especially moving from on-premises to cloud (like Azure SQL) — the process typically follows these five key stages:

  1. Discover - Inventory all data assets (databases, workloads, dependencies). Use tools like Azure Migrate, Data Migration Assistant, etc.
  2. Assess - Evaluate compatibility and identify migration blockers.
    Analyze costs, performance needs, and business impact.
  3. Plan - Define the migration strategy (lift-and-shift, re-platform, or re-architect). Choose the right Azure SQL offering (SQL DB, Managed Instance, etc.). Establish timelines and resource requirements.
  4. Transform - Migrate and/or refactor data and applications as needed. Implement necessary schema or code changes. Optimize for the cloud (e.g., scaling, security, HA setup).
  5. Validate - Test for performance, data integrity, and functionality.
    Ensure successful cutover, user acceptance, and ongoing monitoring.

Why not the others?
A. Investigate, analyze, build, deploy and monitor - Sounds like a software development cycle, not a structured modernization process.

C. Design, code, test, deploy and maintain - Applies more to application development, not data platform migration.

Summary:
For data platform modernization, follow the structured journey of: Discover → Assess → Plan → Transform → Validate
This ensures a smooth, secure, and efficient transition to the cloud.

16
Q

What is the purpose of the “Initiate and Discover” stage in a data platform modernization project?

A. To complete a current workload assessment
B. To determine the target platform for the migration
C. To evaluate the current environment and identify any dependencies between applications and databases

(Design a SQL Server Migration Strategy)

A

C. To evaluate the current environment and identify any dependencies between applications and databases

The “Initiate and Discover” stage is the very first step in a data platform modernization project. Its main purpose is to:

Understand what you have — your current data landscape — and uncover dependencies that could impact the migration.

This stage involves:
Inventorying existing databases, applications, and infrastructure

Identifying interconnected systems and data flows

Highlighting any potential blockers or risks

Preparing for the assessment and planning phases

Why not the others?
A. To complete a current workload assessment - This happens in the Assess stage, after discovering what you have.

B. To determine the target platform for the migration - This is part of the Plan stage, which comes after discovery and assessment.

Summary:
In the Initiate and Discover phase, the goal is to evaluate the current environment and identify dependencies — so that you can build a solid foundation for a smooth and well-informed migration.

17
Q

Which SQL Server PAAS offering should you choose if you want to restore an on-premises database backup to Azure?

A. Azure SQL Managed Instance
B. Azure SQL Database
C. Both Azure SQL Database and Azure SQL Managed Instance

(Design a SQL Server Migration Strategy)

A

A. Azure SQL Managed Instance

If you want to restore an on-premises SQL Server database backup (.bak file) directly to Azure, the only PaaS offering that supports this is:

Azure SQL Managed Instance
Why?
Supports native SQL Server backup and restore operations.

You can restore a .bak file from on-prem via Azure Blob Storage.

Offers near-full compatibility with on-prem SQL Server features — great for lift-and-shift migrations.

Why not the others?
B. Azure SQL Database - Does not support direct .bak file restores.
You’d need to use tools like Data Migration Assistant (DMA) or Azure Database Migration Service (DMS) to import schema and data, not just restore backups.

C. Both Azure SQL Database and Azure SQL Managed Instance -
Incorrect — only Managed Instance allows native backup file restoration.

Summary:
To directly restore an on-premises SQL Server database backup (.bak) to Azure, use Azure SQL Managed Instance.

18
Q

What are some scenarios where SQL Server on Azure Virtual Machine is a better option than Azure SQL Managed Instance?

A. When you need direct access to the operating system or the file system, such as installing non-Microsoft or custom agents on the same virtual machine alongside SQL Server.
B. When you heavily rely on features supported by Azure SQL Managed Instance such as FileStream/FileTable, Polybase and cross-instance transactions.
C. When your business requires staying at a particular version of Azure SQL Managed Instance

(Migrate SQL Server workloads to Azure Virtual Machine)

A

A. When you need direct access to the operating system or the file system, such as installing non-Microsoft or custom agents on the same virtual machine alongside SQL Server.

SQL Server on Azure Virtual Machine (VM) is an Infrastructure-as-a-Service (IaaS) offering, meaning you get full control over the OS and SQL Server instance. This makes it a better fit in scenarios like:

Ideal Scenarios for SQL Server on Azure VM:
You need to install custom software, third-party tools, or non-Microsoft agents on the same machine.

You require full OS-level access, including:

Custom scripts and job schedulers.

Special configurations for security or networking.

You want control over patching and SQL Server version upgrades.

Why not the others?
B. When you heavily rely on features supported by Azure SQL Managed Instance such as FileStream/FileTable, Polybase and cross-instance transactions.

This suggests Azure SQL Managed Instance is the right fit, not SQL Server on Azure VM.

C. When your business requires staying at a particular version of Azure SQL Managed Instance

Azure SQL Managed Instance is a PaaS offering and doesn’t support staying on an exact version long-term — it auto-updates over time.

If version control is critical, SQL Server on Azure VM is more appropriate, but the reasoning in this choice is incorrect.

Summary:
Choose SQL Server on Azure VM when you need full control, including the ability to install custom software or access the underlying OS and file system.

19
Q

What does the ‘how’ aspect of SQL Server Migration planning encompass?

A. Recognizing the benefits after migration is completed
B. The evaluation of the amount of downtime that the organization is willing to tolerate
C. To create a Windows Server Failover Cluster on the source and target.

(Migrate SQL Server workloads to Azure Virtual Machine)

A

B. The evaluation of the amount of downtime that the organization is willing to tolerate

The “how” aspect of SQL Server migration planning focuses on execution strategy — specifically, how the migration will be performed. A critical part of this is understanding:

Downtime tolerance — how much interruption the business can afford during migration.
This determines whether you’ll use:

Online migration (minimal to no downtime, more complex)
Offline migration (simple, but involves planned downtime)
Staged migration or replication techniques

It also influences which migration tools and services (e.g., Azure Database Migration Service, Log Shipping, or Replication) you might choose.

Why not the others?
A. Recognizing the benefits after migration is completed - That’s part of the “why” of migration, not how.

C. To create a Windows Server Failover Cluster on the source and target - That’s a specific high availability configuration, not a general part of migration planning.

Summary:
The “how” in SQL Server migration planning involves defining how the migration will occur, with a key focus on downtime tolerance, which drives the migration method and tooling decisions.

20
Q

What is the purpose of using SQL Server Always On Availability Group (AG) feature during migration?

A. To create an availability group listener on the source and target
B. To minimize downtime during migration by extending an on-premises database into the cloud.
C. To create a Windows Server Failover Cluster on the source and target

(Migrate SQL Server workloads to Azure Virtual Machine)

A

B. To minimize downtime during migration by extending an on-premises database into the cloud.

The SQL Server Always On Availability Group (AG) feature is often used during migration to:

Extend the on-premises environment to Azure and reduce downtime.
How it works:
You set up an Availability Group that includes both the on-premises SQL Server and the Azure-based SQL Server.

Real-time replication/synchronization of data occurs.

When ready, you perform a planned failover to Azure with minimal or no downtime.

Ideal for business-critical applications where even short outages aren’t acceptable.

Why not the others?
A. To create an availability group listener on the source and target -
The listener is a component of AGs used for routing connections, but it’s not the main purpose of using AGs during migration.

C. To create a Windows Server Failover Cluster on the source and target - While AGs use WSFC for high availability, this option focuses on the infrastructure, not the purpose during migration.

Summary:
The purpose of using Always On AG during migration is to extend your database into Azure while keeping it in sync, enabling a seamless cutover with minimal downtime.

21
Q

What are the advantages of backing up to block blob instead of page blob in Azure Blob Storage for SQL Server 2016 and later versions?

A. Block blob offers safer authorization, improved backup performance and support for larger database backups compared to page blob.
B. Page blob offers safer authorization compared to block blob
C. Block blobs offer better support for smaller database backups compared to page blob

(Migrate SQL Server workloads to Azure Virtual Machine)

A

A. Block blob offers safer authorization, improved backup performance, and support for larger database backups compared to page blob.

Starting with SQL Server 2016, Microsoft introduced native support for backing up directly to Azure block blobs, which offers several advantages over using page blobs, especially for cloud-first or hybrid backup strategies.

Advantages of Block Blobs:
Safer Authorization

Uses Shared Access Signatures (SAS), which are more secure and flexible than the traditional storage account keys used by page blobs.

Improved Backup Performance

Block blobs are optimized for sequential write operations, which align better with how backups are written, resulting in faster backup throughput.

Larger Backup Support

Each block blob can store up to 190.7 TiB (with chunked uploads), which is significantly more than the 1 TiB limit of a page blob.

Why not the others?
B. Page blob offers safer authorization compared to block blob -
Incorrect — Block blob with SAS offers more secure and manageable access.

C. Block blobs offer better support for smaller database backups compared to page blob - Not the key advantage; the focus is on performance, security, and scale, not just "small" backups.

Summary:
When backing up SQL Server 2016+ to Azure, block blobs are preferred due to better security, performance, and scalability over page blobs.
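
A minimal T-SQL sketch (the storage URL and SAS token are placeholders); creating a credential whose name matches the container URL and whose secret is a SAS token makes BACKUP TO URL write to a block blob:

    CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token>';

    BACKUP DATABASE MyDb
    TO URL = 'https://mystorage.blob.core.windows.net/backups/MyDb.bak'
    WITH COMPRESSION;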

22
Q

What is the purpose of entering credentials in the self-hosted integration runtime?

A. To connect to the source and migrate data to the target
B. To delete the source data
C. To create a backup of the source data

(Migrate SQL Server workloads to Azure SQL Database)

A

A. To connect to the source and migrate data to the target

The self-hosted integration runtime (SHIR) is used in services like Azure Data Factory or Azure Synapse when you need to access on-premises data sources (e.g., SQL Server, Oracle, file systems) from the cloud.

Why credentials are required:
To authenticate and connect to the source system (e.g., on-prem SQL Server).

So that the integration runtime can read data from the source and write it to the target (like Azure SQL Database or Blob Storage).

It’s part of the secure data movement process in hybrid data integration scenarios.

Why not the others?
B. To delete the source data

The integration runtime doesn’t automatically delete source data unless explicitly configured — that’s not its main purpose.

C. To create a backup of the source data

While it can copy data, it doesn’t perform native database backups — that’s handled by SQL Server or backup tools.

Summary:
You enter credentials in the self-hosted integration runtime so it can securely connect to your source environment and migrate or copy data to your target system.

23
Q

What is the role of the self-hosted integration runtime in data migration?

A. To connect to the source and target and facilitate data migration
B. To store the migrated data
C. To analyze the migrated data

(Migrate SQL Server workloads to Azure SQL Database)

A

A. To connect to the source and target and facilitate data migration

The Self-hosted Integration Runtime (SHIR) plays a crucial role in hybrid data migration scenarios, especially when dealing with on-premises data sources. It’s used with tools like Azure Data Factory or Azure Synapse Analytics.

Role of SHIR:
Connects to on-premises or non-Azure data sources (like SQL Server, Oracle, file shares, etc.)

Transfers data securely to cloud targets such as Azure SQL Database, Blob Storage, or Data Lake

Acts as a bridge between your on-prem environment and Azure

Why not the others?
B. To store the migrated data - SHIR doesn’t store data. It moves it from source to target — storage is handled by services like Azure SQL or Blob Storage.

C. To analyze the migrated data - Data analysis is done by tools like Power BI, Azure Synapse, or Azure Machine Learning, not the SHIR.

Summary:
The Self-hosted Integration Runtime is responsible for connecting to both the source and the target systems and facilitating the data migration process — especially in hybrid cloud environments.

24
Q

What is the significance of selecting tables during data migration?

A. It helps in migrating only the required data from the source to the target
B. It helps in creating a backup of the source data
C. It helps in analyzing the migrated data

(Migrate SQL Server workloads to Azure SQL Database)

A

A. It helps in migrating only the required data from the source to the target

When planning a data migration, selecting specific tables allows you to:

Migrate only what you need
Avoid unnecessary data bloat in the target system.

Focus on critical tables relevant to current applications or workloads.

Improve performance and efficiency of the migration process.

Reduce migration time and cost.

This is especially useful when:

Migrating from a legacy system with a large, cluttered schema.
Performing a phased migration.
Complying with data governance or compliance policies.

Why not the others?
B. It helps in creating a backup of the source data - Data migration is different from backup. Backups are full or incremental copies of the database for recovery purposes.

C. It helps in analyzing the migrated data - Analysis is typically done after migration using analytics tools, not during table selection.

Summary:
Selecting tables during migration ensures that only the necessary data is transferred, making the process faster, cheaper, and more efficient.

25
Can transactional replication be configured from the Azure Portal?

A. No, transactional replication can only be configured through SQL Server Management Studio
B. Yes, transactional replication can be configured from the Azure Portal
C. No, transactional replication can't be configured from the Azure Portal

(Migrate SQL Server workloads to Azure SQL Database)
C. No, transactional replication can't be configured from the Azure Portal. It requires tools like SQL Server Management Studio (SSMS) or Transact-SQL commands.

Transactional replication is a method used in SQL Server to replicate data from a publisher database to one or more subscriber databases.

How transactional replication works:
1. Snapshot creation: initially, a snapshot of the publication database objects and data is taken.
2. Incremental changes: after the snapshot, any changes made at the publisher (like inserts, updates, or deletes) are delivered to the subscriber in near real time.
3. Consistency: changes are applied to the subscriber in the same order and transaction boundaries as they occurred at the publisher, ensuring transactional consistency.

Configuration steps (SSMS or Transact-SQL, not the Azure Portal):
1. Prepare the publisher: install SQL Server and the AdventureWorks sample database, and ensure SQL Server Agent is running.
2. Create a publication: use SSMS to define the articles (tables or objects) you want to replicate.
3. Set up the distributor: configure the distribution database to manage replication jobs.
4. Add subscribers: connect the subscriber databases and validate the subscription.
5. Monitor replication: use SSMS to monitor latency and ensure data consistency.
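
As a first-step sketch in T-SQL (the database name is a placeholder), a publisher database is typically enabled for publishing like this:

    -- Hypothetical example: mark a database as published (run on the publisher)
    EXEC sp_replicationdboption
        @dbname = N'AdventureWorks',
        @optname = N'publish',
        @value = N'true';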
26
An Azure SQL Database Managed Instance can be categorized as a _______ cloud service.
Platform As A Service (PAAS)
27
The compatibility level settings can be changed from?
The individual database properties page
28
The storage type that offers the lowest latency in Azure is?
Ultra Disk
29
The migration tool that can be used to ASSESS and MIGRATE your databases from an on-premises SQL Server to an Azure VM is?
Data Migration Assistant
30
To reduce the cost of an Azure SQL Server VM that will run full-time for 3 years you should use?
Azure Reserved VM Instances
31
In order to migrate a set of databases that use distributed transactions from on-premises SQL Server, the target database platform that you will use in MS Azure is:
Azure SQL Managed Instance
32
In SQL Server, what are distributed transactions?
In SQL Server, distributed transactions are transactions that span multiple databases or multiple servers. They allow operations across different data sources (like another SQL Server instance or even a non-SQL Server system) to be treated as a single atomic unit: either everything commits, or everything rolls back, just like a local transaction.

How it works:
Distributed transactions are managed using the Microsoft Distributed Transaction Coordinator (MSDTC), which ensures the two-phase commit protocol is followed:
Prepare phase: all involved systems agree they're ready to commit.
Commit phase: if all say yes, then commit. If any say no, roll back.

Example scenario:
Say you have a local SQL Server instance (ServerA) and a remote SQL Server instance (ServerB). You want to insert a row into a table on both servers, and either both succeed or both fail:

    BEGIN DISTRIBUTED TRANSACTION;
    INSERT INTO ServerA.Database1.dbo.Table1 (Col1) VALUES ('Test1');
    INSERT INTO ServerB.Database2.dbo.Table2 (Col2) VALUES ('Test2');
    COMMIT TRANSACTION;

If one INSERT fails, the whole transaction rolls back.

Key points:
Uses linked servers to access remote data.
Requires the MSDTC service running and properly configured on all involved servers.
Can involve SQL Server, Oracle, or other systems if using compatible providers.
Slower than local transactions due to the overhead of coordination.

Security and configuration:
Ensure firewall ports for MSDTC are open.
Servers must trust each other (usually the same domain or trusted domains).
33
The best option to host your new database in the cloud for testing purposes that will be used less than 8 hours a day and is expected to be 20 GB in size is?
Azure SQL Database Serverless
34
The best option to host your new database in the cloud that you expect to grow to 50TB is?
Azure SQL Database Hyperscale
35
How could you upgrade a major version of Azure Database for MySQL?
Create a dump and restore it at the higher version.
36
What is the protocol that is used by Azure Active Directory for Authorization?
OAuth
37
The system database that stores information about logins in SQL Server is?
master
38
The role that allows users to create users within a database is?
db_accessadmin (its members can add or remove access for database users)
39
The permission that allows the user to perform any operation against a database object is:
Control
40
The feature that allows a user to execute a stored procedure even if they do not have permission to access the tables referenced in the stored procedure is called?
Ownership Chaining

Ownership chaining in SQL Server is a security feature that allows users to access multiple objects (like tables, views, stored procedures) without requiring explicit permissions on each object, as long as all the objects are owned by the same user.

Think of it like this: if you own a stored procedure, and that procedure accesses a table you also own, SQL Server won't re-check whether the caller has permission on that table. It trusts the chain of ownership.

How it works:
1. A user has permission to execute a stored procedure.
2. The stored procedure queries a table.
3. If the procedure and the table have the same owner, SQL Server does not check the user's permission on the table.
4. If the owners are different, SQL Server will check permissions.

Why it matters:
Helps simplify permissions by letting you secure at the top level (stored procedures).
Lets developers encapsulate logic and protect raw data.
Reduces the surface area for security mistakes.
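
A minimal T-SQL sketch (table, procedure, and user names are placeholders):

    -- dbo owns both the table and the procedure, so the chain is unbroken
    CREATE TABLE dbo.Payroll (EmployeeId int, Salary money);
    GO
    CREATE PROCEDURE dbo.GetPayroll
    AS
        SELECT EmployeeId, Salary FROM dbo.Payroll;
    GO
    -- AppUser can run the procedure without any SELECT permission on dbo.Payroll
    GRANT EXECUTE ON dbo.GetPayroll TO AppUser;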
41
The security object that is required in order to enable transparent data encryption is:
Master Key

In SQL Server, Transparent Data Encryption (TDE) is a security feature that encrypts the entire database at rest, meaning the data and log files are encrypted on disk, but without changing how you use the database.

What "transparent" means:
It's invisible to applications and users: no changes are needed in your queries, connection strings, or stored procedures. SQL Server handles the encryption and decryption automatically in the background.

Why use TDE?
Protects data files from unauthorized access if they're stolen (like backup files or physical disks).
Helps with regulatory compliance (e.g., GDPR, HIPAA, PCI-DSS).
Works at the storage level, unlike column-level encryption, which targets specific data.

How it works (high-level steps):
1. Create a Database Master Key (DMK) in the master database.
2. Create a certificate protected by the DMK.
3. Create a Database Encryption Key (DEK) in the target database.
4. Enable TDE for the database using the DEK and certificate.

What gets encrypted:
Data files (.mdf, .ndf)
Log files (.ldf)
Backups (automatically encrypted if taken from a TDE-enabled database)

Important notes:
TDE does not protect data in memory or in transit.
Back up the certificate and private key; you'll need them to restore the database on another server.
Minimal performance impact (~2-5%).
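
A minimal T-SQL sketch of those four steps (names and password are placeholders):

    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
    CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
    GO
    USE MyDb;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCert;
    ALTER DATABASE MyDb SET ENCRYPTION ON;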
42
The feature that prevents members of the sysadmin role from viewing the values of data in a table is called?
Always Encrypted

Always Encrypted in SQL Server is a data protection feature that encrypts sensitive data such as credit card numbers or SSNs inside the client application, so the data is never visible in plaintext to SQL Server, including DBAs or admins.

Key idea: data is encrypted on the client side, sent encrypted to the server, stored encrypted, and only decrypted by the client.

Use case: if you need strong privacy for certain columns (like SSNs, credit cards, PHI), and you don't fully trust the DB server or admins, use Always Encrypted.

How it works:
Encryption keys are generated and stored securely outside the database (in Azure Key Vault, the Windows certificate store, or a secure store).
A client driver (like .NET or JDBC) handles encryption and decryption transparently.
SQL Server stores the ciphertext only, with no access to plaintext.

Types of encryption:
Deterministic: always encrypts the same value to the same ciphertext. Allows equality comparison (WHERE, JOIN), but is vulnerable to pattern analysis.
Randomized: produces different ciphertexts for the same value. Stronger security, but no search, filter, or join support.

Key differences vs Transparent Data Encryption (TDE):

Feature | Always Encrypted | Transparent Data Encryption (TDE)
Encrypts at | Column level | Database level
Data in memory | Encrypted | Decrypted
Data in transit | Encrypted | Not encrypted
SQL Server can see? | No | Yes
Granularity | Fine-grained (columns) | Coarse (entire DB)

Limitations:
Not all SQL Server operations are supported (e.g., LIKE, range queries).
Requires compatible client drivers.
Data can't be decrypted on the server side (e.g., for indexing, triggers).
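
A minimal column-definition sketch, assuming a column encryption key named CEK1 already exists (deterministic encryption also requires a BIN2 collation on the column):

    CREATE TABLE dbo.Customers (
        CustomerId int PRIMARY KEY,
        Ssn char(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (
                COLUMN_ENCRYPTION_KEY = CEK1,
                ENCRYPTION_TYPE = DETERMINISTIC,
                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
    );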
43
The feature that provides a private IP address for an Azure SQL database is called?
Private Link
44
The technique that is used to create database firewall rules in Azure SQL database?
Executing a T-SQL statement
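
A minimal sketch, run inside the target database (the rule name and IP range are placeholders):

    EXECUTE sp_set_database_firewall_rule
        @name = N'AllowAppServers',
        @start_ip_address = '203.0.113.1',
        @end_ip_address = '203.0.113.10';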
45
An example of a threat that can be analyzed by Advanced Threat Protection and that is commonly associated with dynamic SQL is called?
SQL Injection
46
The Performance Monitor counter that reflects how long SQL Server expects to retain data in memory is called?
Page Life Expectancy
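
A quick T-SQL sketch for reading the counter:

    SELECT [object_name], counter_name, cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name LIKE 'Page life expectancy%';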
47
In order to see the sizes of your SQL Server databases running in an Azure VM you should use?
The SQL VM Resource Provider
48
The isolation level that should be used if you want to prevent users reading data from blocking users writing data is?
Read Committed Snapshot Isolation
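
A minimal sketch (the database name is a placeholder; in Azure SQL Database this option is already ON by default):

    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;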
49
The DMV that shows sessions holding locks is called?
sys.dm_tran_locks
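
A typical usage sketch, listing lock requests that are waiting:

    SELECT request_session_id, resource_type, resource_database_id,
           request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_status = 'WAIT';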
50
The Query Store catalog view that provides the query ID to allow for tracking is?
sys.query_store_queries

Query Store in SQL Server is like a "black box recorder" for your database's performance: it captures and tracks the history of query execution plans and runtime statistics, so you can analyze performance issues, identify regressions, and force good plans if needed.

Purpose: helps DBAs and developers answer:
"Why did this query suddenly slow down?"
"What execution plan was used last week?"
"Can I force a better plan for this query?"

What it tracks:
Query text
Execution plans (good or bad)
Runtime stats (CPU, duration, logical reads, etc.)
Plan changes over time

Use cases:
Troubleshooting regressions (e.g., after a deployment)
Forcing a stable execution plan:

    EXEC sp_query_store_force_plan @query_id = 123, @plan_id = 456;

Performance tuning based on long-term trends
Comparing plan performance over different time windows

Available in:
SQL Server 2016 and later
Azure SQL Database

Pro tips:
It's lightweight, but it does use space; watch your Query Store size settings.
Combine with Automatic Plan Correction (in Azure or SQL Server 2022) to auto-fix regressions.
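
For example, a small sketch for finding a query's ID by its text (the filter string is a placeholder):

    SELECT q.query_id, qt.query_sql_text
    FROM sys.query_store_queries AS q
    JOIN sys.query_store_query_text AS qt
        ON q.query_text_id = qt.query_text_id
    WHERE qt.query_sql_text LIKE '%Orders%';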
51
The storage type that should be used in conjunction with Azure VMs for SQL Server data files is?
Disk Storage
52
The intelligent query processing feature that allows for faster calculations of a large number of rows is called?
Batch Mode on Rowstore
53
The component of resource governor that allows you to configure limits on system resources is called?
Resource Pools
54
The Microsoft Azure data platform that supports automatic index management is called?
Azure SQL Database
55
The DMV that can be used to show the status of a plan updated by automatic tuning is called?
sys.dm_db_tuning_recommendations
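
A small sketch for querying the DMV (the state column is JSON):

    SELECT name, reason,
           JSON_VALUE(state, '$.currentValue') AS current_state,
           details
    FROM sys.dm_db_tuning_recommendations;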
56
The type of execution plan that is stored in the plan cache is called?
Estimated execution plan
57
The DMV that should be used to find the index utilization is called?
sys.dm_db_index_usage_stats
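
A typical usage sketch for the current database:

    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i
        ON s.object_id = i.object_id AND s.index_id = i.index_id
    WHERE s.database_id = DB_ID();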
58
The database design type that should be used for a data warehouse when you want to reduce the data volume of your dimensions is called?
Snowflake Schema
59
The compression type that offers the highest level of compression is:
Columnstore Archival
60
The type of index that is best used on a datawarehouse fact table is:
Clustered Columnstore
61
The DMV that can be used to capture the last Actual Execution Plan for a given cached query plan is:
sys.dm_exec_query_plan_stats
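
A sketch of how this DMV is typically applied to cached plans (SQL Server 2019+; assumes the LAST_QUERY_PLAN_STATS database-scoped configuration is enabled):

    ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;

    SELECT cp.plan_handle, ps.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan_stats(cp.plan_handle) AS ps;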
62
In order to pass the region for a resource group in the deployment template, we should include ________ in the ARM template:
Parameter
63
In order to configure SQL Server Agent to send e-mail, we should first configure:
A mail profile
64
The Extended Events target that writes only to memory and is not persisted is called:
Ring Buffer
65
The index operation that recalculates the statistics of an index is called?
Rebuild
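
For example (index and table names are placeholders):

    -- REBUILD recreates the index and recalculates its statistics (equivalent to a full scan);
    -- REORGANIZE does not update statistics
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;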
66
What are Extended Events in SQL Server?
Extended Events in SQL Server are a lightweight and highly configurable performance monitoring system. They allow you to collect detailed data about the internal operations of the database engine, making them useful for troubleshooting and performance analysis. Introduced in SQL Server 2008, Extended Events replaced the older SQL Profiler due to their lower impact on system performance.

With Extended Events, you can monitor various aspects of SQL Server, such as query performance, deadlocks, and long-running queries. You can create event sessions to specify which events to track and how to store the collected data. SQL Server Management Studio (SSMS) provides a graphical interface for managing Extended Events, making it easier to create, modify, and analyze event sessions.
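
A small T-SQL sketch of an event session (the session name and the one-second duration filter are placeholders):

    CREATE EVENT SESSION LongQueries ON SERVER
    ADD EVENT sqlserver.sql_statement_completed (
        WHERE duration > 1000000)        -- duration is reported in microseconds
    ADD TARGET package0.ring_buffer;     -- in-memory target, not persisted

    ALTER EVENT SESSION LongQueries ON SERVER STATE = START;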
67
The unit of execution for the Azure Automation Account is called:
Runbook
68
The SQL Elastic Job scope is configured as:
Target Group
69
RPO can be defined as:
The point to which data needs to be recovered after a failure.
70
The Microsoft Azure component that needs to be configured for the listener in an AG to work properly is:
A load balancer
71
The WSFC in Microsoft Azure for AGs and FCIs can be created using:
PowerShell
72
The Microsoft Azure feature that allows you to test disaster recovery without bringing down your production system is called?
Azure Site Recovery
73
Explain Azure Arc:
Azure Arc is a solution from Microsoft that extends Azure's management and governance capabilities to resources outside of Azure, including on-premises environments, other cloud platforms, and edge devices. It provides a unified platform to manage diverse infrastructure and applications, enabling consistent operations across hybrid and multi-cloud environments. Key Features of Azure Arc: - Centralized Management: You can manage servers, virtual machines, Kubernetes clusters, and databases as if they were running in Azure. - Hybrid Cloud Integration: Azure Arc enables you to bring Azure services like Azure Policy, Azure Monitor, and Microsoft Defender for Cloud to non-Azure resources. - Azure Data Services: Run Azure SQL Managed Instance and PostgreSQL on any infrastructure, with benefits like elastic scaling and automated updates. - Kubernetes Management: Attach and govern Kubernetes clusters anywhere, using tools like GitOps for configuration management. Azure Arc is particularly useful for organizations with complex environments spanning multiple clouds and on-premises setups. It simplifies governance, enhances security, and supports modern DevOps practices.
74
What two types of connection are available to connect Azure SQL Managed Instance to an on-premises application:
Azure ExpressRoute or a VPN gateway
75
What tool enables custom migrations of databases from on-premises SQL Servers to SQL Managed Instance in the cloud?
Log Replay Service (LRS) is a tool that enables custom migrations of databases from on-premises SQL Servers to SQL Managed Instance in the cloud. It uses log shipping technology and is useful in cases where more control is needed, when there's little tolerance for downtime, or when Azure Database Migration Service can't be used. LRS can be used directly with PowerShell, CLI cmdlets, or the API to manually build and orchestrate database migrations to SQL Managed Instance.

Some of the reasons to consider using LRS:
More control over the database migration project
Little tolerance for downtime on migration cutover
Inability to install the DMS executable in the environment
Lack of file access to database backups
Inability to open networking ports from the environment to Azure
76
What are the two migration modes available for Log Replay Service?
Mode | Description | Recommended for | Backup chain availability
1. Autocomplete | Automatically finishes the migration when the last backup file is restored | Passive workloads | Entire backup chain must be available in advance
2. Continuous | Continuously scans for new backup files and restores them, allowing for data catch-up | Active workloads | Backup chain can be added during migration

Regardless of the mode, plan to complete the migration within 30 days, as the LRS job will be automatically canceled after this time.
77
You have SQL Server on an Azure Virtual Machine that contains a database named DB1. DB1 contains a table named CustomerPII. You need to record whenever users query the CustomerPII table. Which two options should you enable? Each correct answer presents part of the solution.

A. server audit specification
B. SQL Server Audit
C. database audit specification
D. a server principal
B. SQL Server Audit and C. database audit specification

A. server audit specification - useful when there is a pool of PII databases
B. SQL Server Audit - Correct
C. database audit specification - Correct
D. a server principal - there is a Service Principal, not a Server Principal
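
A minimal T-SQL sketch of the two pieces (the audit name and file path are placeholders):

    USE master;
    CREATE SERVER AUDIT PiiAudit
        TO FILE (FILEPATH = 'D:\AuditLogs\');
    ALTER SERVER AUDIT PiiAudit WITH (STATE = ON);
    GO
    USE DB1;
    CREATE DATABASE AUDIT SPECIFICATION PiiAuditSpec
        FOR SERVER AUDIT PiiAudit
        ADD (SELECT ON OBJECT::dbo.CustomerPII BY public)
        WITH (STATE = ON);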
78
You have an Azure Virtual Machine based on a custom image named VM1. VM1 hosts an instance of MS SQL Server 2019 Standard. You need to automate the maintenance of VM1 to meet the following requirements:
1. Automate the patching of SQL Server and Windows Server.
2. Automate full database backups and transaction log backups of the databases on VM1.
3. Minimize administrative effort.
What should you do first?

A. Enable a system assigned managed identity for VM1
B. Register VM1 to the Microsoft.SQL resource provider
C. Install an Azure virtual machine Desired State Configuration (DSC) extension on VM1
D. Register VM1 to the Microsoft Virtual Machine Resource provider
D. Register VM1 to the Microsoft Virtual Machine Resource provider
79
What is the Microsoft Virtual Machine Resource Provider?
The Microsoft Virtual Machine Resource Provider is part of Azure's infrastructure that enables you to manage virtual machines and related resources. It operates within the Azure Resource Manager framework, which is responsible for deploying, managing, and organizing resources in Azure.

Key functions:
- Namespace: the resource provider for virtual machines is identified as Microsoft.Compute. It includes operations for creating, updating, and deleting virtual machines.
- Resource types: it supports various resource types, such as virtual machines, virtual machine scale sets, and images.
- Integration: it works seamlessly with other Azure services, allowing you to configure networking, storage, and security settings for your virtual machines.

This provider is essential for deploying and managing virtual machines in Azure, whether you're setting up a single VM or scaling out with a cluster.
80
You receive alerts from Azure Monitor for an Azure SQL Database. You need to reduce the number of alerts. You must only receive alerts if there is a significant change in usage patterns over an extended period. Which two actions should you perform?

A. Set threshold sensitivity to high
B. Set the Alert Logic Threshold to Dynamic
C. Set the Alert Logic Threshold to Static
D. Set the threshold sensitivity to low
E. Set Force Plan to on
B. Set the Alert Logic Threshold to Dynamic
D. Set the threshold sensitivity to low
81
You have an Azure SQL Database named sqldb1. You need to minimize the amount of space used by the data and log files of sqldb1. What should you do?

A. DBCC SHRINKDATABASE
B. sp_clean_db_free_space
C. sp_clean_db_file_free_space
D. DBCC SHRINKFILE
A. DBCC SHRINKDATABASE - Correct; it shrinks all of the database's data and log files
B. sp_clean_db_free_space - Wrong
C. sp_clean_db_file_free_space - Wrong
D. DBCC SHRINKFILE - Wrong; it shrinks only a single data or log file, and more than one needs shrinking
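
A minimal sketch (the target free-space percentage is a placeholder):

    -- Shrink sqldb1, leaving 10 percent free space in its files
    DBCC SHRINKDATABASE (sqldb1, 10);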
82
You have an Azure SQL Database server named sqlsrv1 that hosts 10 Azure SQL databases. The databases perform slower than expected. You need to identify whether the performance issue relates to the TempDB used by the Azure SQL databases on sqlsrv1. What should you do?

A. Run Query Store based queries
B. Review information provided by SQL Server Profiler based traces
C. Review information provided by Query Performance Insight
D. Run dynamic management view based queries
D. Run dynamic management view based queries. - DMV queries can produce information concerning TempDB contention.
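
For example, a hedged DMV sketch that looks for PAGELATCH waits on TempDB pages:

    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';   -- TempDB is database_id 2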
83
You have an Azure SQL Database named sqldb1. You need to minimize the possibility of Query Store transitioning to a read-only state. What should you do?

A. Double the value of the Data Flush Interval
B. Decrease by half the value of the Data Flush Interval
C. Double the value of the Statistics Collection Interval
D. Decrease by half the value of the Statistics Collection Interval
B. Decrease by half the value of the Data Flush Interval - Query Store has defaulted to ON since SQL Server 2016, allowing the memory allotted to Query Store to fill. Halving the Data Flush Interval persists captured data to disk more often, reducing the amount of data Query Store holds in memory.
84
You have SQL Server 2019 on an Azure Virtual Machine that runs Windows Server 2019. The virtual machine has 4 CPUs and 28 GB of memory. You scale up the virtual machine to 16 vCPUs and 64 GB of memory. You need to provide the lowest latency for TempDB. What is the total number of data files that TempDB should contain?

A. 2
B. 4
C. 8
D. 64
C. 8 - For the lowest latency, TempDB should be configured with one data file per logical processor, up to eight. Beyond eight vCPUs, start with eight files and increase in multiples of four only if contention persists, without exceeding the number of logical processors.
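
A minimal T-SQL sketch for adding a TempDB data file (file name, path, and sizes are placeholders):

    ALTER DATABASE tempdb
    ADD FILE (
        NAME = tempdev2,
        FILENAME = 'F:\TempDb\tempdev2.ndf',
        SIZE = 8GB,
        FILEGROWTH = 64MB
    );

Keep all TempDB data files the same size so the proportional-fill algorithm spreads allocations evenly across them.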
85
You have 50 Azure SQL databases. You need to notify the database owner when database settings such as database size and pricing tier are modified in Azure. What should you do?

A. Create a diagnostic setting for the activity log that has the Security log enabled
B. For the database, create a diagnostic setting that has the InstanceAndAppAdvanced metric enabled
C. Create an alert rule that uses a Metric signal type
D. Create an alert rule that uses an Activity Log signal type
D. Create an alert rule that uses an Activity Log signal type. - The Activity Log records changes to database settings such as size and pricing tier.
86
You have several Azure SQL databases on the same Azure SQL Database server in a resource group named ResourceGroup1. You must be alerted when CPU usage exceeds 80 percent for any database. The solution must apply to any additional databases that are created on the Azure SQL server. Which resource type should you use to create the alert?

A. Resource Groups
B. SQL Servers
C. SQL Databases
D. SQL Virtual Machines
C. SQL Databases
87
You have SQL Server 2019 on an Azure Virtual Machine that runs Windows Server 2019. The virtual machine has 4 CPUs and 28 GB of memory. You scale up the virtual machine to 8 vCPUs and 64 GB of memory. You need to provide the lowest latency for TempDB. What is the total number of data files that TempDB should contain?

A. 2
B. 4
C. 8
D. 64
C. 8 - The rule of thumb for TempDB data files is one file per CPU, up to eight. Since the machine is scaled up to 8 vCPUs, there is no need for more than 8 TempDB files.