SnowSQL Commands Flashcards
(43 cards)
How do you set your context in Snowflake to a specific role, database, and schema?
To set your context in Snowflake to a specific role, database, and schema, you can use the following SQL commands:
* Set a Specific Role: USE ROLE FOUND_ROLE;
* Set a Specific Database: USE DATABASE SNOWBEARITR_DB;
* Set a Specific Schema: USE SCHEMA MODELED;
How do you create a virtual warehouse in Snowflake?
CREATE OR REPLACE WAREHOUSE FOUND_LEARNER_wh WITH
WAREHOUSE_SIZE = XSmall
INITIALLY_SUSPENDED = true;
This SQL command performs the following actions:
* CREATE OR REPLACE WAREHOUSE: This part of the statement ensures that if a warehouse named FOUND_LEARNER_wh already exists, it will be replaced. If it doesn't exist, a new one will be created.
* WAREHOUSE_SIZE = XSmall: Sets the size of the warehouse to XSmall, which determines the compute resources allocated to the warehouse.
* INITIALLY_SUSPENDED = true: Specifies that the warehouse should be created in a suspended state. This means it won’t start consuming credits until you explicitly start it.
This command creates a virtual warehouse named FOUND_LEARNER_wh with a size of XSmall and sets it to be initially suspended.
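In practice, warehouses are often created with auto-suspend and auto-resume so they only consume credits while queries are running. A hedged sketch of the same command with those optional parameters (the 60-second AUTO_SUSPEND value is an illustrative choice, not part of the original card):

```sql
CREATE OR REPLACE WAREHOUSE FOUND_LEARNER_wh WITH
  WAREHOUSE_SIZE = XSmall
  AUTO_SUSPEND = 60          -- suspend after 60 seconds of inactivity
  AUTO_RESUME = true         -- resume automatically when a query is submitted
  INITIALLY_SUSPENDED = true;
```

With AUTO_RESUME enabled you rarely need to start the warehouse manually; the first query against it wakes it up.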
Changing Roles in Snowflake
How do you change your role in Snowflake?
How do you revert to a previously assigned role in Snowflake?
How do you select a virtual warehouse to use for your Snowflake session?
To change your role and select a virtual warehouse for your Snowflake session, use the following SQL commands:
1. Change Your Role: USE ROLE {role_name};
Replace {role_name} with the name of the role you want to switch to, for example: USE ROLE ANALYST;
2. Select a Virtual Warehouse: USE WAREHOUSE {warehouseName};
Replace {warehouseName} with the name of the warehouse you want to use, for example: USE WAREHOUSE COMPUTE_WH;
To revert to a previously assigned role, simply issue USE ROLE again with that role's name; there is no separate revert command.
These commands are key for navigating and manipulating roles and warehouses in Snowflake, ensuring you can manage your resources and permissions effectively.
Showing Tables in Your Current Context
How can you display the tables available in your current Snowflake context?
How do you display tables in your current Snowflake context and what does this command not utilize?
SHOW TABLES;
This command lists all the tables accessible within the current database and schema that your session is set to. Here's how it works:
* This command is a metadata operation, meaning it retrieves information about table structures without needing to perform any data processing tasks.
* As such, it does not require the use of your virtual warehouse’s compute resources to execute. Therefore, it will not consume any compute credits, making it cost-effective for administrative or exploratory tasks.
This is crucial for managing and exploring database objects without incurring extra compute costs.
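SHOW TABLES also accepts optional filters, and like the bare command these are metadata operations that run without a warehouse. A sketch (the pattern and schema name reuse identifiers from elsewhere in this deck):

```sql
SHOW TABLES LIKE 'CUSTOMER%';                             -- only tables whose names match the pattern
SHOW TABLES IN SCHEMA snowbearair_db.promo_catalog_sales; -- tables in a specific schema, regardless of current context
```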
Basic SELECT Query
How do you perform a basic SELECT query on a table in Snowflake?
To query a table in Snowflake, use the SELECT statement followed by the columns you want to retrieve or an asterisk (*) for all columns, then specify the table. For example:
SELECT * FROM members LIMIT 10;
This command retrieves 10 rows from the members table (without an ORDER BY, exactly which 10 rows are returned is not guaranteed).
Specifying a Full Path in Queries
When do you need to specify a full path in a SELECT query in Snowflake?
You need to provide the full path (database.schema.table) in your SELECT statement when the table you want to query is not in the default schema of your current context. For example:
SELECT TOP 5 c.c_lastname, c.c_firstname, c.c_acctbal
FROM snowbearair_db.promo_catalog_sales.customer c
ORDER BY c.c_acctbal DESC;
This query specifies the full path to the customer table, which is in a different schema than the current context.
JOIN Operation in Queries
How do you perform a JOIN operation between tables in Snowflake?
To join tables in Snowflake, use the JOIN keyword and specify the condition for the join. For example:
SELECT c_firstname, c_lastname, o_orderkey, o_totalprice
FROM promo_catalog_sales.orders
JOIN promo_catalog_sales.customer ON o_custkey = c_custkey
ORDER BY o_totalprice DESC LIMIT 10;
This query joins the orders and customer tables on a common key and orders the results by o_totalprice.
Creating a Database
How do you create a database in Snowflake?
To create a database in Snowflake, use the CREATE OR REPLACE DATABASE statement:
CREATE OR REPLACE DATABASE FOUND_LEARNER_db;
This command creates a new database named FOUND_LEARNER_db or replaces it if it already exists.
Creating a Schema
How do you create a schema in Snowflake?
To create a schema within a database in Snowflake, execute:
CREATE OR REPLACE SCHEMA FOUND_LEARNER_db.my_schema;
This creates a schema named my_schema within the FOUND_LEARNER_db database.
Creating a Temporary Table
How do you create a temporary table in Snowflake?
To create a temporary table in Snowflake, use:
CREATE OR REPLACE TEMPORARY TABLE my_favorite_actors (name VARCHAR);
This creates a temporary table named my_favorite_actors that will exist for the duration of the session.
Inserting Data into a Table
How do you insert data into a table in Snowflake?
To insert data into a table, use the INSERT INTO statement:
INSERT INTO my_favorite_actors VALUES
('Heath Ledger'),
('Michelle Pfeiffer'),
('Meryl Streep'),
('Anthony Hopkins'),
('Bruce Lee');
This adds a list of actor names into the my_favorite_actors table.
Suspending a Virtual Warehouse
How do you suspend a virtual warehouse in Snowflake?
To suspend a virtual warehouse, use the ALTER WAREHOUSE command followed by the SUSPEND keyword:
ALTER WAREHOUSE FOUND_LEARNER_wh SUSPEND;
This will stop the virtual warehouse, ceasing its compute resources and saving credits. The dot next to your virtual warehouse name will turn gray, indicating it is no longer running.
Setting User Defaults
How do you set user defaults for your role, warehouse, and namespace in Snowflake?
To set your default role, warehouse, and namespace, use the ALTER USER command:
ALTER USER FOUND_LEARNER SET DEFAULT_ROLE = FOUND_ROLE,
DEFAULT_WAREHOUSE = FOUND_LEARNER_wh,
DEFAULT_NAMESPACE = FOUND_LEARNER_db.my_schema;
These settings ensure that every time you start a new session or open a new worksheet, Snowflake will automatically use these defaults, streamlining your workflow.
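To confirm the defaults were applied, you can describe the user; a sketch of a typical check (assumes you have the privilege to describe this user):

```sql
DESC USER FOUND_LEARNER;  -- output lists DEFAULT_ROLE, DEFAULT_WAREHOUSE, and DEFAULT_NAMESPACE among the user's properties
```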
Generic SQL Statement for UNDROP
If you accidentally drop a table named customer_data, you can recover it using the following statement, provided you’re within the Time Travel retention period:
UNDROP TABLE customer_data;
Generic SQL Statement for CLONE with Time Travel
Research further on this
- To create a clone of a table as it existed at a specific point in the past:
CREATE OR REPLACE TABLE customer_data_clone
CLONE customer_data
AT (OFFSET => -3600); -- clones the table as it was an hour ago
- Alternatively, to clone an object from just before a specific change:
CREATE OR REPLACE TABLE customer_data_clone
CLONE customer_data
BEFORE (STATEMENT => 'query_id'); -- replace 'query_id' with the ID of the query that modified the table
Generic SQL Statement for SELECT with Time Travel
- To query historical data from a specific time in the past:
SELECT *
FROM customer_data
AT (TIMESTAMP => '2024-04-12 08:30:00.000'::timestamp); -- retrieves data as it was on April 12, 2024, at 8:30 AM
- Or to query data as it was before a specific query ran:
SELECT *
FROM customer_data
BEFORE (STATEMENT => 'query_id'); -- replace 'query_id' with the ID of the query you want to travel back before
Restoring Objects with UNDROP
- How can you restore dropped objects using Snowflake’s Time Travel?
- What types of objects can be restored using the UNDROP command?
Understand the capabilities and limitations of the UNDROP command.
UNDROP is a lifeline for recovering dropped database objects within the Time Travel retention period.
How to Restore Dropped Objects
* Restoring a Table: restores the specified table as it was just before it was dropped.
UNDROP TABLE table_name;
* Restoring a Schema: restores the specified schema and all its contents, including tables and views.
UNDROP SCHEMA schema_name;
* Restoring a Database: restores the entire database along with all its schemas and contained objects.
UNDROP DATABASE database_name;
Types of Objects That Can Be Restored: Tables, Schemas, Databases
Limitations:
- The UNDROP command must be used within the Time Travel retention period specific to your Snowflake edition.
- The command can only restore objects to their state just before they were dropped, not to any other historical state.
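Before attempting an UNDROP, you can check whether the dropped object is still within the retention period; a sketch using the HISTORY keyword:

```sql
SHOW TABLES HISTORY;  -- includes dropped tables still in Time Travel; dropped ones carry a DROPPED_ON timestamp
```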
Recovering Historical Objects with Cloning
- How is cloning used in conjunction with Time Travel in Snowflake to recover data?
- What scenarios are best suited for using cloning to recover historical data?
Cloning combined with Time Travel doesn’t duplicate data but leverages metadata to recreate objects from a specific point in time.
Cloning is an efficient way to manage data evolution and backup in Snowflake without the need for extensive storage space.
Cloning with Time Travel in Snowflake is used to recover historical objects or create exact replicas of data from a specific point in the past. This is done using the CLONE command alongside a TIMESTAMP or a STATEMENT to specify when the clone should be from.
* Analogy: Think of cloning in Snowflake like creating a parallel universe where everything is the same as a particular moment in time, but actions in one universe don’t affect the other.
CREATE TABLE restored_table CLONE my_table1 AT (TIMESTAMP => 'Mon, 09 May 2020 01:01:00'::timestamp);
or
CREATE DATABASE restored_db CLONE my_db BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');
Real-World Use Case: Developers needing to test new features can clone the current production database to a test environment without impacting the live data, effectively maintaining a parallel environment for safe testing.
Using Time Travel to View Data Before Deletion
- How can you view the state of data before executing a DELETE operation in Snowflake?
- What Time Travel capability allows you to review table contents before changes?
Emphasize the usage of the BEFORE clause in conjunction with a query ID.
Time Travel acts as an invaluable tool for data verification before critical operations.
In Snowflake, you can view the state of data before a DELETE operation by using the BEFORE clause along with the query ID from the delete action. For instance:
SELECT * FROM region BEFORE (STATEMENT => 'query_id');
Replace 'query_id' with the ID captured after the DELETE command. This feature is particularly useful when you need to confirm the correct data was removed, or to recover data that was deleted erroneously.
- Analogy: It’s like looking at a photograph of a room taken just before rearranging the furniture, allowing you to remember where everything was originally placed.
Before performing a data cleanup task, an analyst reviews the data set that will be impacted by the deletion to confirm the accuracy of the operation.
Restoring Data After Accidental Deletion
- How do you restore data after an accidental deletion in Snowflake?
- What steps are involved in reverting a table to its pre-deletion state?
Explain the process of capturing query IDs and using them to restore data.
The safeguard provided by Time Travel in Snowflake ensures data resilience and operational confidence.
If data is accidentally deleted, Snowflake allows you to restore it by using the BEFORE clause with the query ID of the deletion:
SELECT * FROM region BEFORE (STATEMENT => 'query_id');
After capturing the query ID with SET variable_name = LAST_QUERY_ID(); immediately following the deletion, you can retrieve the exact state of the table from before the delete command was executed.
Real-World Use Case: A database administrator accidentally deletes a critical region from a table and uses the stored query ID to restore the table to its state before the deletion, thereby recovering the lost data.
This is similar to using a time machine to go back to the moment before a precious vase was broken, letting you recover it exactly as it was.
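Putting this card's pieces together, an end-to-end sketch (the region table and the EUROPE filter are illustrative, following the deck's earlier examples):

```sql
DELETE FROM region WHERE r_name = 'EUROPE';   -- the accidental deletion
SET delete_qid = LAST_QUERY_ID();             -- capture its query ID immediately

-- Re-insert the rows exactly as they existed just before the delete ran
INSERT INTO region
  SELECT * FROM region BEFORE (STATEMENT => $delete_qid)
  WHERE r_name = 'EUROPE';
```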
Restoring Tables with Time Travel
How do you restore a previously dropped table in Snowflake?
Deeper Question: What is the process for reverting a table to a state before changes were applied?
Note the limitations of the UNDROP command.
Time Travel simplifies data recovery by providing a straightforward way to revert unintended changes.
To restore a dropped table, you use the UNDROP TABLE command. However, you can only undrop the most recent version of the table. If you need to restore to a version before the most recent change, you must use Time Travel with cloning:
UNDROP TABLE region;
CREATE TABLE restored_region CLONE region BEFORE (timestamp => 'specific_timestamp');
Replace 'specific_timestamp' with the time just before the changes occurred. This process allows you to recover the table as it was before any alterations, including the accidental drop.
- Analogy: Undoing the drop of a table is like using a video editor’s “undo” function to reverse the last edit, returning the content to its pre-edit state.
After a table is accidentally dropped during a schema update, the admin uses UNDROP to restore it, ensuring the database maintains its integrity and availability.
Correcting Data Revisions with Cloning
- How can you correct data revisions using cloning in Snowflake?
- What steps should be taken when a direct clone is not possible due to a dropped table?
Describe the strategy for cloning after a table has been dropped.
Cloning in Snowflake enables a historical perspective, allowing restoration of data as needed.
If you try to clone a dropped table and encounter an error because the table does not exist, you first need to use UNDROP to restore the most recent version of the table and then clone it from a point before the initial changes:
UNDROP TABLE region;
CREATE TABLE restored_region CLONE region BEFORE (timestamp => 'specific_timestamp');
SELECT * FROM restored_region;
The 'specific_timestamp' should be set to a point before the changes you want to reverse. This lets you effectively manage and repair unintended data revisions.
- Analogy: It’s akin to restoring a document from a previous version in a version control system after realizing the latest changes have introduced errors.
When an update causes erroneous data changes, the team undrops the table and clones it from a prior state, restoring the data to a known-good version.
Comparing Tables in Snowflake
- How can you compare a restored table with its original state in Snowflake?
- What SQL techniques can be used for data comparison pre and post Time Travel operations?
Consider the implications of table comparisons for data integrity checks.
Comparing data sets is crucial for validating the accuracy of Time Travel restorations.
In Snowflake, you can compare the data of a restored table with its original state by using a MINUS operation or a JOIN. For the MINUS operation, use:
SELECT * FROM restored_region
MINUS
SELECT * FROM region AT (timestamp => $dev_before_changes);
Alternatively, for a full comparison, use a FULL JOIN:
SELECT
o.r_regionkey AS original_key,
o.r_name AS original_name,
n.r_regionkey AS new_key,
n.r_name AS new_name
FROM
restored_region n
FULL JOIN
region AT (timestamp => $dev_before_changes) o
ON
o.r_regionkey = n.r_regionkey
ORDER BY original_key, new_key;
This will output a side-by-side comparison of the original and the restored data, highlighting any discrepancies.
- Analogy: Think of this like proofreading a document’s revised copy against the original draft to ensure all edits are correct and no unintended changes were made.
After performing a data recovery process, a data analyst needs to ensure that the restored table matches the original data. Using these SQL operations, they can confirm that the restoration was successful and that the data integrity is intact.
Loading Data from Local Storage
- How do you load data from local storage into Snowflake?
- What are the steps to ingest data from a local file system into Snowflake?
Illustrate the workflow from creating a stage to data ingestion.
The local data loading process in Snowflake is straightforward, yet requires understanding of the staging and loading commands.
To load data from a local file system into Snowflake:
1. Create an Internal Stage: Set up a space in Snowflake to temporarily store the files.
CREATE STAGE my_internal_stage;
2. Upload Files to Stage: Use SnowSQL or another Snowflake client to put files onto the created stage.
PUT file://C:\User1\Desktop\File* @my_internal_stage;
3. Copy Data into Table: Load the data from the stage into the target table.
COPY INTO my_table FROM @my_internal_stage;
Real-World Use Case: A company may need to upload daily logs from their local servers to Snowflake for analysis. They would create a stage, use the PUT command to upload these logs, and then use the COPY INTO command to load them into a table for querying.
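COPY INTO usually needs to know the file layout; a hedged sketch with a CSV file format and a staged-file listing (the SKIP_HEADER and ON_ERROR values are illustrative assumptions):

```sql
LIST @my_internal_stage;   -- verify the files actually landed on the stage

COPY INTO my_table
FROM @my_internal_stage
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)   -- skip one header row per file
ON_ERROR = 'ABORT_STATEMENT';                -- stop the load on the first bad record
```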