Whizlabs Test Flashcards

1
Q

Q 25. Which of the following data types are supported in Snowflake for semi-structured data? (select all that apply)
A. VARIANT
B. OBJECT
C. TEXT
D. CLOB
E. ARRAY

A

Correct Answers: A, B, and E

Snowflake supports three data types for processing semi-structured data.
VARIANT – The basic (and most used) data type for storing semi-structured data. A VARIANT value can hold a value of any other data type, including OBJECT and ARRAY.
OBJECT – Used to store key-value pairs, where each key is a non-empty string and each value is semi-structured data of VARIANT type.
ARRAY – Used to store arrays of semi-structured data, where each index is a non-negative integer and each value is of VARIANT type.
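
For example, a minimal sketch (the table name and JSON values are illustrative):

CREATE OR REPLACE TABLE car_sales (src VARIANT);

INSERT INTO car_sales
SELECT PARSE_JSON('{"dealership": "Valley View Auto", "salesperson": {"id": 55, "name": "Frank Beasley"}}');

-- Traverse the VARIANT with colon/dot notation and cast to a typed value
SELECT src:dealership::STRING AS dealership,
       src:salesperson.name::STRING AS salesperson
FROM car_sales;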

2
Q

Q33 : Following are the two modes in which you can run a multi-cluster virtual warehouse in Snowflake.
A. Dynamic
B. Maximized
C. Auto-Scale
D. Multi-threaded

A

Correct Answers: B and C

Maximized or Auto-Scale.

A multi-cluster virtual warehouse consists of a group of compute clusters that are provisioned based on the user/query load. There are two modes in which you can create a multi-cluster warehouse. These modes govern the way Snowflake provisions the clusters.

Maximized – In this mode, Snowflake starts all compute clusters within the warehouse when the warehouse is started. This mode is enabled by setting the same value for Min. and Max. clusters.
Auto-Scale – In this mode, Snowflake starts the warehouse with Min. number of clusters and spins up new clusters incrementally and automatically as and when the user/query load increases. The maximum number of clusters Snowflake can add is governed by the value given in Max. clusters. This mode is enabled by setting different values for Min. and Max. clusters.
➽ Exam Tip: To provide further control over the way the ‘Auto-scale’ mode operates, Snowflake provides a property called ‘SCALING_POLICY’. This property can have two values: 1) Standard 2) Economy.
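
As a rough sketch of how these modes map to warehouse properties (the warehouse name and values are illustrative):

-- Min < Max, so this warehouse runs in Auto-Scale mode
CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';

-- Setting MIN_CLUSTER_COUNT equal to MAX_CLUSTER_COUNT would run it in Maximized mode.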

3
Q

Q34 : The following object in Snowflake is also known as the ‘data dictionary’ of a database.
A. PUBLIC schema
B. Data Catalog
C. INFORMATION_SCHEMA schema
D. None of the above

A

Correct Answer: C (INFORMATION_SCHEMA)

When you create a database in Snowflake, two schemas are automatically created within it: 1. the INFORMATION_SCHEMA schema and 2. the PUBLIC schema.

INFORMATION_SCHEMA is a read-only schema that contains only views. The views expose metadata about the database objects, e.g., all tables of that database, all columns, all stages, all sequences, etc. These views can be queried to retrieve metadata about the database. Hence, the INFORMATION_SCHEMA schema is also known as the data dictionary of a database.
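
For example, a simple metadata query (the database name is illustrative):

-- List all tables of the database using the data dictionary
SELECT table_name, table_type, row_count
FROM my_db.INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'PUBLIC';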

4
Q

Q36 : Snowflake provides the following types of internal stages to load data. (select all that apply)
A. Table Stage
B. Data Stage
C. User Stage
D. Named Stage

A

Correct Answers: A, C, and D

A stage is essentially a location in the cloud where you place your data to load into Snowflake. A stage can be internal (i.e., within Snowflake) or external (outside Snowflake, in a supported cloud environment, i.e., AWS, Azure, or GCP). Snowflake provides 3 types of internal stages to load data: the table stage, the user stage, and the named stage. A comparison of these stages and their corresponding features is given below. You may get a variety of questions in the exam about the features of each stage.

Table stage – A table-specific stage, automatically available for every Snowflake table. Can copy data into that specific table only, but can be used by any user. Has the same name as the table and is referenced as @%<table-name>. Does not support data transformations while loading and cannot be seen in the Stages tab of the Web UI.
User stage – A user-specific stage, automatically available for every user. Can copy data into any table to which the user has access; one user cannot access another user's stage. Has the same name as the user and is referenced as @~. Does not support data transformations while loading and cannot be seen in the Stages tab of the Web UI.
Named stage – The most flexible of the three. A first-class database object that can have any user-defined name. Can copy data into any table and can be used by any user having the privileges to access that stage. Referenced as @<stage-name>. Supports data transformations while loading and can be seen in the Stages tab of the Web UI.
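
To illustrate how each stage type is referenced (table and stage names are illustrative):

COPY INTO my_table FROM @%my_table;    -- table stage
COPY INTO my_table FROM @~/staged;     -- user stage (a path under @~)
COPY INTO my_table FROM @my_stage;     -- named stage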

5
Q

Q37 : While loading data into Snowflake, which of the following transformations are supported? (select all that apply)
A. Column reordering
B. Column omission
C. Derived columns
D. Cast operation
E. Truncation
F. Transpose

A

Correct Answers: A, B, D and E

Explanation

Snowflake supports four types of simple transformations during data loading.

The supported transformations are: reordering of columns, omission of columns, data type conversion (also called casting or typecasting), and truncation of text strings that exceed the target column length. Derived columns, transpose, and any other complex data transformation operations are not supported during data loading.
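
A hedged sketch of these transformations in a COPY command (stage, table, and column positions are illustrative):

COPY INTO my_table (id, name)
  FROM (SELECT t.$2::NUMBER,  -- cast, with columns reordered ($2 before $1)
               t.$1           -- remaining staged columns are simply omitted
        FROM @my_stage t)
  TRUNCATECOLUMNS = TRUE;     -- truncate strings that exceed the target column length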

6
Q

Q39 : From the statements given below, select the statements which are TRUE for reader accounts. (select all that apply)
A. A reader account facilitates data sharing with consumers that do not have a Snowflake account
B. One reader account can be used to access multiple shares provided by different provider accounts
C. A reader account consumes credit of the provider account
D. A reader account has the same access to Snowflake support as the provider account

A

Correct Answers: A and C

Explanation

Answer choice-A is correct. If consumers of your share do not have a Snowflake account (e.g., a consumer using other database products such as Oracle or PostgreSQL), Snowflake provides a functionality called a 'reader account' which can be used to access the share. A reader account is a special Snowflake account created by the provider of the share for the sole purpose of giving a consumer access to the share. The provider fully manages the reader accounts it creates (see the creation sketch after these answer explanations).
Answer choice-B is incorrect. Each reader account belongs to the provider account that created it. This means that if you need to access multiple shares from different providers, you will need to have multiple reader accounts – one reader account per provider.
Answer choice-C is correct. A reader account will require a warehouse to execute queries on the data shared with it. As the reader account is not a licensed user of Snowflake, the warehouse needs to be provisioned by the provider account. Consequently, the provider will be charged for the credits consumed by the queries executed by the reader accounts it has created.
Answer choice-D is incorrect. The reader account does not have access to Snowflake support. The provider that creates and manages the reader account will be able to route questions to Snowflake support on behalf of the reader account.
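
As an illustrative sketch, a provider creates a reader account roughly like this (the identifiers are placeholders):

CREATE MANAGED ACCOUNT my_reader_acct
  ADMIN_NAME = 'reader_admin',
  ADMIN_PASSWORD = 'Str0ngPassw0rd!',
  TYPE = READER;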

➤ Practical Info: Technically, a reader account can consume an unlimited number of credits, which will be charged to the provider account. Therefore, to monitor and control the credit consumption by the reader accounts, the provider account should create resource monitors.

Further Reading: Managing reader accounts

7
Q

Q41 : Select the statements that apply to clustering depth in a micro-partition. (select all that apply)

A. Clustering depth is the average depth of overlapping micro-partitions for a specified set of columns
B. The lower the clustering depth, the better clustered the table is
C. The higher the clustering depth, the better clustered the table is
D. Is an indication of whether the table needs an explicit clustering key

A

Correct Answers: A, B and D

Explanation

Answer choice-A is correct. Clustering depth is the average depth of overlapping micro-partitions. Clustering depth indicates how well the data is dispersed across micro-partitions.
Answer choice-B is correct and Answer choice-C is incorrect. The ideal clustering depth is 1; it indicates that no micro-partitions overlap for a given set of columns, so when a search query is executed, Snowflake can directly find the right micro-partition. Any number greater than 1 indicates that a search query will locate several overlapping micro-partitions, and Snowflake will need to process all of them to prepare the result set. For an empty table, the clustering depth is 0.
Answer choice-D is correct. In general, Snowflake organizes your data in micro-partitions in such a way that clustering depth is low. However, over time, as insert/updates are performed on the data, the data may no longer stay clustered optimally and clustering depth becomes large. This is specifically true for very large tables. When this happens, the search query performance becomes slower over time.

To improve the clustering of the data in micro-partitions, Snowflake allows you to designate one or more table columns/expressions as an explicit clustering key for the table. A clustering key co-locates rows in the same micro-partitions thereby improving search efficiency. Hence, high clustering depth indicates that the queries on that table may benefit by defining an explicit clustering key.
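
For example (table and column names are illustrative):

-- Check how well the table is clustered on a candidate column
SELECT SYSTEM$CLUSTERING_DEPTH('my_table', '(order_date)');

-- If the depth is high, define an explicit clustering key
ALTER TABLE my_table CLUSTER BY (order_date);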

Further Reading: Clustering depth illustration – Snowflake documentation

8
Q

Q42 : Select statement(s) which are TRUE from the given set of statements concerning Snowflake virtual warehouses. (select all that apply)
A. Scaling up a virtual warehouse is an automatic process
B. Scaling up a virtual warehouse is a manual process
C. Scaling down a virtual warehouse is an automatic process
D. Scaling down a virtual warehouse is a manual process
E. Scaling out a virtual warehouse is an automatic process
F. Scaling out a virtual warehouse is a manual process
G. Scaling back a virtual warehouse is an automatic process
H. Scaling back a virtual warehouse is a manual process

A

Correct Answers: B, D, E and G

Scale-Up/Down – Manual. Done to improve query/data loading performance.
Scale-Out/In – Automatic. Done to handle query concurrency.
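
For example (the warehouse name is illustrative):

-- Scaling up: a manual resize for better query/data loading performance
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Scaling out happens automatically in Auto-Scale mode, within the configured bounds
ALTER WAREHOUSE my_wh SET MAX_CLUSTER_COUNT = 3;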

9
Q

Q43 : In Snowflake, for each securable object, there is a set of privileges. The privileges provide fine-grained access control on the object. Each securable object has an owner that can assign these privileges directly to a user or a group of users.
A. True
B. False

A

Correct Answer: B

The above statement is false. This is because the owner of a securable object CANNOT grant privileges over the object directly to a user. The privileges must be assigned to roles. The roles can be granted to other roles, creating a role hierarchy, or a role can be granted to a user or a group of users.
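
A minimal sketch of the required flow (object, role, and user names are illustrative):

-- Privileges are granted to a role, never directly to a user
GRANT SELECT ON TABLE my_db.public.my_table TO ROLE analyst;

-- The role is then granted to a user (or to another role, forming a hierarchy)
GRANT ROLE analyst TO USER some_user;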

10
Q

Q45 : Zero-copy cloning operation is supported for the following objects in Snowflake. (select all that apply)
A. A temporary Table
B. A transient Table
C. An external table
D. A permanent Table
E. A database created from a Share
F. Internal (named) Stage

A

Correct Answers: B and D

Explanation

Zero-copy cloning is available only for the permanent and transient table types. It is not available for temporary and external table types.
You cannot clone a database that is created from a Share.
Also, zero-copy cloning is not available for stage objects.
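
For example, cloning a permanent table (names are illustrative):

CREATE TABLE orders_clone CLONE orders;  -- no data is physically copied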

11
Q

Q46 : A resource monitor can be created by…
A. Owner of the virtual warehouse
B. ACCOUNTADMIN role
C. SYSADMIN role
D. USERADMIN role
E. Any of the above

A

Correct Answer: B

Resource monitors can only be created by account administrators (i.e., users with the ACCOUNTADMIN role). However, account administrators can choose to grant the MONITOR and MODIFY privileges on resource monitors to users with other roles as needed.
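
A hedged sketch of both steps (the monitor name, warehouse name, and quota values are illustrative):

-- Requires the ACCOUNTADMIN role
CREATE RESOURCE MONITOR my_monitor WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO SUSPEND;

ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = my_monitor;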

Further Reading: Access Control Privileges for Resource Monitors – Snowflake Documentation

12
Q

Q47 : You have a Business Critical edition of Snowflake on the AWS cloud. The data lake implementation of your organization is in Azure and utilizes the Azure Data Lake Storage Gen2 service. As your Snowflake account and your data lake are on different clouds, it is not possible to do bulk loading from the Azure data lake into Snowflake using the COPY command.
A. True
B. False

A

Correct Answer: B

This statement is not correct. Bulk loading using COPY INTO supports loading data into Snowflake from data files on your local file system or in cloud storage external to Snowflake (Amazon S3, Microsoft Azure, or Google Cloud Storage), irrespective of where your Snowflake account is hosted.
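
For example, a rough sketch of loading Azure-hosted files into an AWS-hosted account (the URL, container, and token are placeholders):

CREATE STAGE my_azure_stage
  URL = 'azure://myaccount.blob.core.windows.net/mycontainer/files/'
  CREDENTIALS = (AZURE_SAS_TOKEN = '...');

COPY INTO my_table FROM @my_azure_stage;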

Further Reading: Introduction to Data Loading – YouTube video

13
Q

Q49 : Which of the following statements are true about multi-cluster Warehouses in Snowflake? (Select all that apply).
A. Multi-cluster warehouses support all properties of a single-cluster warehouse
B. Increasing the number of compute resources in a multi-cluster warehouse is an example of scaling up
C. A single cluster warehouse can be reconfigured to turn into a multi-cluster warehouse but vice-versa is not true
D. None of the above

A

Answer choice-A is correct. Multi-cluster warehouses support all the same properties and actions as single-cluster warehouses, including:

Specifying the warehouse size (e.g., XS, S, M, L, …).
Resizing the warehouse at any time.
Auto-suspending a running warehouse due to inactivity.
Auto-resuming a suspended warehouse when new queries are submitted.
Answer choice-B is incorrect. Increasing the number of compute resources in a multi-cluster warehouse is an example of scaling out. Increasing the size of compute clusters is an example of scaling up.
Answer choice-C is incorrect. A single-cluster warehouse can be reconfigured to turn into a multi-cluster warehouse. To do this, set MAX_CLUSTER_COUNT to a value greater than 1, as shown in the example below:

ALTER WAREHOUSE "COMPUTE_WH" SET MAX_CLUSTER_COUNT = 3;

Similarly, a multi-cluster warehouse can be reconfigured to turn into a single-cluster warehouse by setting MAX_CLUSTER_COUNT back to 1:

ALTER WAREHOUSE "COMPUTE_WH" SET MAX_CLUSTER_COUNT = 1;
