DP-300 MS Azure Database Administrator Exam Prep Flashcards
What Azure storage is recommended for production data files that require maximum throughput?
A. Premium SSD
B. Ultra Disks
C. Standard SSD
(Prepare to maintain SQL Server based databases on Azure)
B. Ultra Disks
Azure Disk Type | Max Throughput | Max IOPS     | Backing Storage | Use Case
Ultra Disk      | 2,000 MB/s     | 160,000      | SSD             | Highest-performance workloads
Premium SSD     | ~900 MB/s      | Up to 20,000 | SSD             | General-purpose production
Standard SSD    | ~500 MB/s      | Up to 6,000  | SSD             | Low-cost workloads
What option should you select to change the autopause setting of a serverless SQL Database?
A. The autopause setting is fixed at one hour and can’t be adjusted.
B. Ensure that there are no active transactions in the database, as the autopause change results in a disconnection.
C. The Azure portal allows you to change the autopause setting at any time.
(Prepare to maintain SQL Server based databases on Azure)
C. The Azure portal allows you to change the autopause setting at any time.
For Azure SQL Database (serverless tier), you can configure and change the autopause delay using the Azure portal, PowerShell, or Azure CLI.
The autopause delay determines how long the database remains inactive before pausing to save costs.
You can set the auto-pause delay anywhere from 60 minutes (1 hour) up to 10,080 minutes (7 days).
Changing the autopause setting does not require downtime, but if the DB is paused, it will auto-resume when accessed.
Which SQL Server PaaS offering should you choose if you want to restore an on-premises database backup to Azure?
A. Azure SQL Managed Instance
B. Azure SQL Database
C. Both Azure SQL Managed Instance and Azure SQL Database
(Prepare to maintain SQL Server based databases on Azure)
A. Azure SQL Managed Instance
If you want to restore an on-premises SQL Server database backup (.bak file) to Azure, the only PaaS option that supports a native restore (sketched at the end of this card) is:
Azure SQL Managed Instance
Supports native SQL Server backup/restore functionality (RESTORE DATABASE … FROM URL)
Offers near-full compatibility with on-premises SQL Server features.
Ideal for migrating legacy or complex SQL workloads to the cloud with minimal changes.
B. Azure SQL Database
Does not support direct restore from .bak files.
You must use BACPAC export/import or data migration tools, which can be limiting and don’t support all SQL Server features.
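For reference, a minimal T-SQL sketch of the native restore on a Managed Instance; the storage account, container, SAS token, and database name below are hypothetical:

```sql
-- Hypothetical storage account, container, SAS token, and database name.
-- The credential name must exactly match the container URL.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2022-11-02&ss=b&srt=co&sp=rl&sig=...';

-- Native restore of the uploaded .bak, straight from Blob Storage.
RESTORE DATABASE SalesDb
FROM URL = 'https://mystorageacct.blob.core.windows.net/backups/SalesDb.bak';
```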
What type of storage offers the lowest latency in Azure?
A. Ultra SSD
B. Premium SSD
C. Standard SSD
(Deploy IAAS solutions with Azure SQL)
A. Ultra SSD
Azure Ultra SSD offers the lowest latency of all Azure disk types.
Key Highlights:
Latency: Typically <1 millisecond
Backed by: High-performance NVMe-based SSDs
IOPS & Throughput: Extremely high and customizable per disk
Comparison:
Storage Type | Typical Latency | Backing Storage | Best For
Ultra SSD    | <1 ms           | NVMe SSD        | Critical workloads needing ultra-low latency
Premium SSD  | ~1–2 ms         | SSD             | Production workloads with high IOPS needs
Standard SSD | ~1–5 ms         | SSD             | General-purpose workloads
To reduce the cost of an Azure virtual machine running SQL Server that you intend to run full-time for three years, which option would you choose?
A. Availability Set
B. Azure Reserved VM Instances
C. Pay as you go licensing
(Deploy IAAS solutions with Azure SQL)
B. Azure Reserved VM Instances
If you plan to run an Azure VM (especially with SQL Server) full-time for three years, the most cost-effective option is:
Azure Reserved Virtual Machine Instances (RIs)
Why? You commit to a 1- or 3-year term and in return, get significant cost savings — up to 72% compared to pay-as-you-go pricing.
Ideal for predictable, always-on workloads, like a production SQL Server VM.
Can be combined with Azure Hybrid Benefit (if you already own SQL Server licenses) for even more savings.
Why not the others?
A. Availability Set
This is for high availability, not cost savings.
Helps with SLA and fault tolerance, but doesn’t affect pricing.
C. Pay-as-you-go licensing
Flexible, but most expensive over time, especially for long-term use.
Which option would you choose to spread workloads across data centers in a region?
A. Availability Sets
B. Availability Zones
C. Availability Units
(Deploy IAAS solutions with Azure SQL)
B. Availability Zones
To spread workloads across data centers within a region, you should use:
Availability Zones
Each Availability Zone is a physically separate data center within an Azure region.
They offer redundancy, high availability, and fault isolation.
Deploying your resources across multiple zones protects against data center-level failures.
Why not the others?
A. Availability Sets - Used to distribute VMs within a single data center (across fault and update domains), not across multiple data centers.
C. Availability Units - This is not an Azure feature — likely a distractor or incorrect term.
Which service performs block level replication of your Azure virtual machine?
A. Azure Site Recovery
B. Azure Backup for SQL Server
C. Availability Groups
(Deploy IAAS solutions with Azure SQL)
A. Azure Site Recovery
Azure Site Recovery (ASR) is the service that performs block-level replication of entire Azure Virtual Machines, both within Azure and from on-premises to Azure.
Key Features:
Continuous block-level replication of VMs.
Enables disaster recovery by replicating workloads to a secondary region.
Supports failover and failback with minimal data loss.
Works for Windows and Linux VMs, across Hyper-V, VMware, and Azure VMs.
Why not the others?
B. Azure Backup for SQL Server
Focuses on backing up SQL Server databases, not full VM or block-level replication.
File-level backup, not suited for full VM disaster recovery.
C. Availability Groups
A feature of SQL Server for database-level high availability, not for VM replication.
Doesn’t handle full VM replication or protect other workloads.
You need to migrate a set of databases that use distributed transactions from on-premises SQL Server. Which of the following options should you choose?
A. Azure SQL Database
B. Azure SQL Database Hyperscale
C. Azure SQL Managed Instance
(Deploy PAAS solutions with Azure SQL)
C. Azure SQL Managed Instance
Explanation:
If your databases use distributed transactions (e.g., via MSDTC), the only Azure PaaS offering that supports this feature is:
Azure SQL Managed Instance - Supports Microsoft Distributed Transaction Coordinator (MSDTC).
Compatible with on-premises SQL Server features, making it ideal for lift-and-shift migrations.
Maintains support for cross-database and distributed transactions — critical for complex legacy systems.
Why not the others?
A. Azure SQL Database - Does not support distributed transactions across databases or other data sources.
Designed for modern, single-database applications.
B. Azure SQL Database Hyperscale - Provides high scalability but still doesn’t support distributed transactions.
Same limitations as standard Azure SQL Database in this regard.
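To make the scenario concrete, here is a minimal distributed-transaction sketch (assuming a configured linked server; the linked server and table names are hypothetical):

```sql
-- Hypothetical linked server (RemoteSrv) and tables; MSDTC enlists both
-- servers so the two updates commit or roll back as a single unit.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE OrderID = 42;

UPDATE RemoteSrv.Inventory.dbo.Stock  -- four-part name through the linked server
SET Quantity = Quantity - 1
WHERE ItemID = 7;

COMMIT TRANSACTION;
```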
You’re building a new cloud database that you expect to grow to 50TB. Which is the best option for your database?
A. Azure SQL Managed Instance
B. Serverless Azure SQL Database
C. Azure SQL Database Hyperscale
(Deploy PAAS solutions with Azure SQL)
C. Azure SQL Database Hyperscale
If you’re expecting your cloud database to grow to 50TB, you need an architecture that can scale well beyond the traditional 4–8TB limits of standard Azure SQL offerings.
Azure SQL Database Hyperscale is the best fit because:
Supports up to 100TB of database size.
Designed for highly scalable and read-intensive workloads.
Offers rapid scaling of compute and storage independently.
Uses a tiered architecture with page servers, a log service, and read replicas.
Fast backups and restores, even at large sizes.
Why not the others?
A. Azure SQL Managed Instance - Max database size is 16TB (as of now). Not suitable if you plan to exceed that.
B. Serverless Azure SQL Database - Designed for infrequent or unpredictable usage patterns, not large or high-volume databases.
Max size is 1–2TB, depending on tier — far below your 50TB need.
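For reference, selecting Hyperscale at creation time is just a service-objective choice. A hedged T-SQL sketch, assuming you create the database from the logical server's master database (name and sizing are hypothetical):

```sql
-- Run in the master database of the logical server.
-- 'HS_Gen5_4' = Hyperscale tier, Gen5 hardware, 4 vCores (hypothetical sizing).
CREATE DATABASE BigAnalyticsDb
    (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');
```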
You’re building a database for testing purposes that will be used less than 8 hours a day. It’s expected to be 20GB in size. What is your most cost-effective option?
A. Serverless
B. Azure SQL Database Elastic Pools
C. Azure SQL Managed Instance
(Deploy PAAS solutions with Azure SQL)
A. Serverless
If you’re building a 20GB database for testing that will be used less than 8 hours a day, your most cost-effective option is:
Azure SQL Database (Serverless tier) - Auto-pauses during inactivity to save on compute costs (you only pay for storage while paused).
Auto-scales compute based on workload demand.
Perfect for intermittent or low-usage workloads like dev/test environments.
You pay per vCore-second and per GB of storage, so you’re only charged when it’s active.
Why not the others?
B. Azure SQL Database Elastic Pools - Good for managing multiple active databases with variable usage. Not as cost-effective if you only have one low-usage database.
C. Azure SQL Managed Instance - Enterprise-grade, fully-featured, but overkill and more expensive for small, short-use test databases.
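For reference, a hedged sketch of creating such a serverless database in T-SQL from the logical server's master database; the auto-pause delay itself is set via the portal, PowerShell, or the CLI rather than T-SQL (name and sizing are hypothetical):

```sql
-- Run in the master database of the logical server.
-- 'GP_S_Gen5_1' = General Purpose, Serverless compute, Gen5, 1 max vCore (hypothetical sizing).
CREATE DATABASE TestDb
    (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1', MAXSIZE = 32 GB);
```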
How often do differential backups occur with Azure SQL Managed Instance?
A. Every 1 hour
B. Every 12 hours
C. Every 24 hours
(Deploy PAAS solutions with Azure SQL)
B. Every 12 hours
In Azure SQL Managed Instance, backups are automated and managed by Azure. Here’s how it works:
Backup Frequency:
Full backups: Every week
Differential backups: Every 12 hours
Transaction log backups: Every 5–10 minutes
These backups support point-in-time restore (PITR) within the retention period, which can be up to 35 days (configurable).
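On a Managed Instance you can observe this cadence yourself with a read-only query against the backup history in msdb:

```sql
-- Backup types in msdb.dbo.backupset: D = full, I = differential, L = transaction log.
SELECT database_name,
       type,
       backup_start_date,
       backup_finish_date
FROM msdb.dbo.backupset
WHERE backup_start_date > DATEADD(DAY, -1, GETUTCDATE())
ORDER BY backup_start_date DESC;
```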
What is the most important aspect of assessing databases for potential migration blockers before migrating to Azure SQL offerings?
A. Determining the cost of migration
B. Identifying deprecated features and behavior changes
C. Estimating the time required for migration.
(Assess SQL Server databases for migration to Azure SQL)
B. Identifying deprecated features and behavior changes
The most critical aspect when assessing databases for migration to Azure SQL (like Azure SQL Database or Managed Instance) is to:
Identify deprecated features and behavior changes that may cause migration blockers.
Why?
Azure SQL offerings do not support all features available in on-premises SQL Server.
Deprecated or unsupported features (like certain SQL Server Agent jobs, cross-database transactions, or extended stored procedures) can break the application if not addressed.
Behavior differences (e.g., collation, security, server-level features) may require code or configuration changes.
Why not the others?
A. Determining the cost of migration - Important for budgeting, but not a technical blocker.
C. Estimating the time required for migration - Helps with planning, but won’t stop a migration from happening like technical incompatibilities can.
Tools to Help:
Data Migration Assistant (DMA): Detects feature incompatibilities and provides recommendations.
Azure Database Migration Service (DMS): Supports large-scale migrations after blockers are addressed.
Summary:
Before migrating, the top priority is to ensure your database won’t break — so focus on identifying deprecated features and behavior changes first.
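Before running a full DMA assessment, one quick supplementary check on the source server is the Deprecated Features performance counter, which tallies deprecated-feature usage since the instance started:

```sql
-- Non-zero counters mean the feature has been used since the instance last started.
SELECT instance_name AS deprecated_feature,
       cntr_value    AS usage_count
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```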
What are the options for running the assessment and Azure recommendation steps at scale?
A. PowerShell and Azure CLI
B. Azure Data Studio
C. Azure portal and Azure Storage Explorer
(Assess SQL Server databases for migration to Azure SQL)
A. PowerShell and Azure CLI
When you need to run assessments and gather Azure recommendations at scale — for multiple databases or servers — the best tools are:
PowerShell and Azure CLI
Why?
Allow automation and scripting of assessment tasks across many environments.
Can integrate with tools like the Azure Migrate service or Data Migration Assistant (DMA) in batch mode.
Ideal for enterprise-scale migrations, where manual work through GUI tools becomes inefficient.
Why not the others?
B. Azure Data Studio - Great for interactive use and managing individual databases, but not designed for bulk assessments.
C. Azure portal and Azure Storage Explorer - These tools provide UI-based access to services and storage, not bulk assessment or recommendations.
What factors are considered when calculating the monthly estimated cost for Azure SQL Database using the Azure SQL Assessment tool?
A. Database size, compute size and storage size
B. Database size, compute size and database activity
C. Compute size, storage size and database activity
(Assess SQL Server databases for migration to Azure SQL)
C. Compute size, storage size, and database activity
The Azure SQL Assessment tool (like the Data Migration Assistant or Azure Migrate) estimates monthly cost based on:
Key Cost Factors:
Compute Size - Number of vCores or DTUs needed, depending on the service tier. Driven by performance requirements (CPU, memory, IOPS).
Storage Size - Total storage provisioned for data and backups.
Includes both data and log files.
Database Activity - Read/write patterns, connection load, and workload characteristics. Affects the choice of performance tier (e.g., General Purpose vs Business Critical). Influences whether serverless, provisioned, or hyperscale is more cost-efficient.
Why not the others?
A. Database size, compute size and storage size - Database size overlaps with storage size, and doesn’t factor in workload, which affects compute needs.
B. Database size, compute size and database activity - Missing storage size, which directly affects cost — especially in Hyperscale or Managed Instances.
Summary:
To calculate cost accurately, Azure SQL Assessment tools consider: Compute size + Storage size + Database activity
This ensures a recommendation that aligns with both performance needs and budget.
What are the five main stages of data platform modernization?
A. Investigate, analyze, build, deploy and monitor
B. Discover, assess, plan, transform and validate
C. Design, code, test, deploy and maintain
(Design a SQL Server Migration Strategy)
B. Discover, assess, plan, transform, and validate
When modernizing a data platform — especially moving from on-premises to cloud (like Azure SQL) — the process typically follows these five key stages:
- Discover: Inventory all data assets (databases, workloads, dependencies). Use tools like Azure Migrate, Data Migration Assistant, etc.
- Assess: Evaluate compatibility and identify migration blockers. Analyze costs, performance needs, and business impact.
- Plan: Define the migration strategy (lift-and-shift, re-platform, or re-architect). Choose the right Azure SQL offering (SQL DB, Managed Instance, etc.). Establish timelines and resource requirements.
- Transform: Migrate and/or refactor data and applications as needed. Implement necessary schema or code changes. Optimize for the cloud (e.g., scaling, security, HA setup).
- Validate: Test for performance, data integrity, and functionality. Ensure successful cutover, user acceptance, and ongoing monitoring.
Why not the others?
A. Investigate, analyze, build, deploy and monitor - Sounds like a software development cycle, not a structured modernization process.
C. Design, code, test, deploy and maintain - Applies more to application development, not data platform migration.
Summary:
For data platform modernization, follow the structured journey of: Discover → Assess → Plan → Transform → Validate
This ensures a smooth, secure, and efficient transition to the cloud.
What is the purpose of the “Initiate and Discover” stage in a data platform modernization project?
A. To complete a current workload assessment
B. To determine the target platform for the migration
C. To evaluate the current environment and identify any dependencies between applications and databases
(Design a SQL Server Migration Strategy)
C. To evaluate the current environment and identify any dependencies between applications and databases
The “Initiate and Discover” stage is the very first step in a data platform modernization project. Its main purpose is to:
Understand what you have — your current data landscape — and uncover dependencies that could impact the migration.
This stage involves:
Inventorying existing databases, applications, and infrastructure
Identifying interconnected systems and data flows
Highlighting any potential blockers or risks
Preparing for the assessment and planning phases
Why not the others?
A. To complete a current workload assessment - This happens in the Assess stage, after discovering what you have.
B. To determine the target platform for the migration - This is part of the Plan stage, which comes after discovery and assessment.
Summary:
In the Initiate and Discover phase, the goal is to evaluate the current environment and identify dependencies — so that you can build a solid foundation for a smooth and well-informed migration.
Which SQL Server PaaS offering should you choose if you want to restore an on-premises database backup to Azure?
A. Azure SQL Managed Instance
B. Azure SQL Database
C. Both Azure SQL Database and Azure SQL Managed Instance
(Design a SQL Server Migration Strategy)
A. Azure SQL Managed Instance
If you want to restore an on-premises SQL Server database backup (.bak file) directly to Azure, the only PaaS offering that supports this is:
Azure SQL Managed Instance
Why?
Supports native SQL Server backup and restore operations.
You can restore a .bak file from on-prem via Azure Blob Storage.
Offers near-full compatibility with on-prem SQL Server features — great for lift-and-shift migrations.
Why not the others?
B. Azure SQL Database - Does not support direct .bak file restores.
You’d need to use tools like Data Migration Assistant (DMA) or Azure Database Migration Service (DMS) to import schema and data, not just restore backups.
C. Both Azure SQL Database and Azure SQL Managed Instance -
Incorrect — only Managed Instance allows native backup file restoration.
Summary:
To directly restore an on-premises SQL Server database backup (.bak) to Azure, use Azure SQL Managed Instance.
What are some scenarios where SQL Server on Azure Virtual Machine is a better option than Azure SQL Managed Instance?
A. When you need direct access to the operating system or the file system, such as installing non-Microsoft or custom agents on the same virtual machine alongside SQL Server.
B. When you heavily rely on features supported by Azure SQL Managed Instance such as FileStream/FileTable, Polybase and cross-instance transactions.
C. When your business requires staying at a particular version of Azure SQL Managed Instance
(Migrate SQL Server workloads to Azure Virtual Machine)
A. When you need direct access to the operating system or the file system, such as installing non-Microsoft or custom agents on the same virtual machine alongside SQL Server.
SQL Server on Azure Virtual Machine (VM) is an Infrastructure-as-a-Service (IaaS) offering, meaning you get full control over the OS and SQL Server instance. This makes it a better fit in scenarios like:
Ideal Scenarios for SQL Server on Azure VM:
You need to install custom software, third-party tools, or non-Microsoft agents on the same machine.
You require full OS-level access, including:
Custom scripts and job schedulers.
Special configurations for security or networking.
You want control over patching and SQL Server version upgrades.
Why not the others?
B. When you heavily rely on features supported by Azure SQL Managed Instance such as FileStream/FileTable, Polybase and cross-instance transactions.
This suggests Azure SQL Managed Instance is the right fit, not SQL Server on Azure VM.
C. When your business requires staying at a particular version of Azure SQL Managed Instance
Azure SQL Managed Instance is a PaaS offering and doesn’t support staying on an exact version long-term — it auto-updates over time.
If version control is critical, SQL Server on Azure VM is more appropriate, but the reasoning in this choice is incorrect.
Summary:
Choose SQL Server on Azure VM when you need full control, including the ability to install custom software or access the underlying OS and file system.
What does the ‘how’ aspect of SQL Server Migration planning encompass?
A. Recognizing the benefits after migration is completed
B. The evaluation of the amount of downtime that the organization is willing to tolerate
C. To create a Windows Server Failover Cluster on the source and target.
(Migrate SQL Server workloads to Azure Virtual Machine)
B. The evaluation of the amount of downtime that the organization is willing to tolerate
The “how” aspect of SQL Server migration planning focuses on execution strategy — specifically, how the migration will be performed. A critical part of this is understanding:
Downtime tolerance — how much interruption the business can afford during migration.
This determines whether you’ll use:
Online migration (minimal to no downtime, more complex)
Offline migration (simple, but involves planned downtime)
Staged migration or replication techniques
It also influences which migration tools and services (e.g., Azure Database Migration Service, Log Shipping, or Replication) you might choose.
Why not the others?
A. Recognizing the benefits after migration is completed - That’s part of the “why” of migration, not how.
C. To create a Windows Server Failover Cluster on the source and target - That’s a specific high availability configuration, not a general part of migration planning.
Summary:
The “how” in SQL Server migration planning involves defining how the migration will occur, with a key focus on downtime tolerance, which drives the migration method and tooling decisions.
What is the purpose of using SQL Server Always On Availability Group (AG) feature during migration?
A. To create an availability group listener on the source and target
B. To minimize downtime during migration by extending an on-premises database into the cloud.
C. To create a Windows Server Failover Cluster on the source and target
(Migrate SQL Server workloads to Azure Virtual Machine)
B. To minimize downtime during migration by extending an on-premises database into the cloud.
The SQL Server Always On Availability Group (AG) feature is often used during migration to:
Extend the on-premises environment to Azure and reduce downtime.
How it works:
You set up an Availability Group that includes both the on-premises SQL Server and the Azure-based SQL Server.
Real-time replication/synchronization of data occurs.
When ready, you perform a planned failover to Azure with minimal or no downtime.
Ideal for business-critical applications where even short outages aren’t acceptable.
Why not the others?
A. To create an availability group listener on the source and target -
The listener is a component of AGs used for routing connections, but it’s not the main purpose of using AGs during migration.
C. To create a Windows Server Failover Cluster on the source and target - While AGs use WSFC for high availability, this option focuses on the infrastructure, not the purpose during migration.
Summary:
The purpose of using Always On AG during migration is to extend your database into Azure while keeping it in sync, enabling a seamless cutover with minimal downtime.
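Once the Azure replica is synchronized, the cutover itself can be a single statement; a minimal sketch with a hypothetical availability group name:

```sql
-- Run on the Azure-side replica once it reports a SYNCHRONIZED state.
ALTER AVAILABILITY GROUP [MigrationAG] FAILOVER;
```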
What are the advantages of backing up to block blob instead of page blob in Azure Blob Storage for SQL Server 2016 and later versions?
A. Block blob offers safer authorization, improved backup performance and support for larger database backups compared to page blob.
B. Page blob offers safer authorization compared to block blob
C. Block blobs offer better support for smaller database backups compared to page blob
(Migrate SQL Server workloads to Azure Virtual Machine)
A. Block blob offers safer authorization, improved backup performance, and support for larger database backups compared to page blob.
Starting with SQL Server 2016, Microsoft introduced native support for backing up directly to Azure block blobs, which offers several advantages over using page blobs, especially for cloud-first or hybrid backup strategies.
Advantages of Block Blobs:
Safer Authorization
Uses Shared Access Signatures (SAS), which are more secure and flexible than the traditional storage account keys used by page blobs.
Improved Backup Performance
Block blobs are optimized for sequential write operations, which align better with how backups are written, resulting in faster backup throughput.
Larger Backup Support
Each block blob can store up to 190.7 TiB (with chunked uploads), which is significantly more than the 1 TiB limit of a page blob.
Why not the others?
B. Page blob offers safer authorization compared to block blob -
Incorrect — Block blob with SAS offers more secure and manageable access.
C. Block blobs offer better support for smaller database backups compared to page blob - Not the key advantage — the focus is on performance, security, and scale, not just “small” backups.
Summary:
When backing up SQL Server 2016+ to Azure, block blobs are preferred due to better security, performance, and scalability over page blobs.
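A hedged T-SQL sketch of this pattern: create a SAS-based credential, then back up to URL. Storage account, container, SAS token, and database name are hypothetical:

```sql
-- Hypothetical storage account, container, SAS token, and database name.
-- A 'SHARED ACCESS SIGNATURE' credential is what makes SQL Server write block blobs.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2022-11-02&ss=b&srt=co&sp=rwl&sig=...';

BACKUP DATABASE SalesDb
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDb.bak'
WITH COMPRESSION, CHECKSUM;
```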
What is the purpose of entering credentials in the self-hosted integration runtime?
A. To connect to the source and migrate data to the target
B. To delete the source data
C. To create a backup of the source data
(Migrate SQL Server workloads to Azure SQL Database)
A. To connect to the source and migrate data to the target
The self-hosted integration runtime (SHIR) is used in services like Azure Data Factory or Azure Synapse when you need to access on-premises data sources (e.g., SQL Server, Oracle, file systems) from the cloud.
Why credentials are required:
To authenticate and connect to the source system (e.g., on-prem SQL Server).
So that the integration runtime can read data from the source and write it to the target (like Azure SQL Database or Blob Storage).
It’s part of the secure data movement process in hybrid data integration scenarios.
Why not the others?
B. To delete the source data
The integration runtime doesn’t automatically delete source data unless explicitly configured — that’s not its main purpose.
C. To create a backup of the source data
While it can copy data, it doesn’t perform native database backups — that’s handled by SQL Server or backup tools.
Summary:
You enter credentials in the self-hosted integration runtime so it can securely connect to your source environment and migrate or copy data to your target system.
What is the role of the self-hosted integration runtime in data migration?
A. To connect to the source and target and facilitate data migration
B. To store the migrated data
C. To analyze the migrated data
(Migrate SQL Server workloads to Azure SQL Database)
A. To connect to the source and target and facilitate data migration
The Self-hosted Integration Runtime (SHIR) plays a crucial role in hybrid data migration scenarios, especially when dealing with on-premises data sources. It’s used with tools like Azure Data Factory or Azure Synapse Analytics.
Role of SHIR:
Connects to on-premises or non-Azure data sources (like SQL Server, Oracle, file shares, etc.)
Transfers data securely to cloud targets such as Azure SQL Database, Blob Storage, or Data Lake
Acts as a bridge between your on-prem environment and Azure
Why not the others?
B. To store the migrated data - SHIR doesn’t store data. It moves it from source to target — storage is handled by services like Azure SQL or Blob Storage.
C. To analyze the migrated data - Data analysis is done by tools like Power BI, Azure Synapse, or Azure Machine Learning, not the SHIR.
Summary:
The Self-hosted Integration Runtime is responsible for connecting to both the source and the target systems and facilitating the data migration process — especially in hybrid cloud environments.
What is the significance of selecting tables during data migration?
A. It helps in migrating only the required data from the source to the target
B. It helps in creating a backup of the source data
C. It helps in analyzing the migrated data
(Migrate SQL Server workloads to Azure SQL Database)
A. It helps in migrating only the required data from the source to the target
When planning a data migration, selecting specific tables allows you to:
Migrate only what you need
Avoid unnecessary data bloat in the target system.
Focus on critical tables relevant to current applications or workloads.
Improve performance and efficiency of the migration process.
Reduce migration time and cost.
This is especially useful when:
Migrating from a legacy system with a large, cluttered schema.
Performing a phased migration.
Complying with data governance or compliance policies.
Why not the others?
B. It helps in creating a backup of the source data - Data migration is different from backup. Backups are full or incremental copies of the database for recovery purposes.
C. It helps in analyzing the migrated data - Analysis is typically done after migration using analytics tools, not during table selection.
Summary:
Selecting tables during migration ensures that only the necessary data is transferred, making the process faster, cheaper, and more efficient.