What does Snowflake's search optimization service support?
External tables
Materialized views
Tables and views that are not protected by row access policies
Casts on table columns (except for fixed-point numbers cast to strings)
Snowflake’s search optimization service supports tables and views that are not protected by row access policies. It is designed to improve the performance of certain types of queries on tables, including selective point lookup queries and queries on fields in VARIANT, OBJECT, and ARRAY (semi-structured) columns.
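Example usage (illustrative; my_table is a placeholder name) showing how search optimization is enabled on a table:
ALTER TABLE my_table ADD SEARCH OPTIMIZATION;
SHOW TABLES LIKE 'my_table'; -- the SEARCH_OPTIMIZATION column shows ON once it is enabled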
Which task privilege does a Snowflake role need in order to suspend or resume a task?
USAGE
OPERATE
MONITOR
OWNERSHIP
In Snowflake, the OPERATE privilege is required for a role to suspend or resume a task. This privilege allows the role to start and stop the task, which includes suspending and resuming it.
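Example usage (illustrative; my_task and my_role are placeholder names):
GRANT OPERATE ON TASK my_task TO ROLE my_role;
ALTER TASK my_task RESUME;  -- start the task
ALTER TASK my_task SUSPEND; -- stop the task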
Which of the following describes the Snowflake Cloud Services layer?
Coordinates activities in the Snowflake account
Executes queries submitted by the Snowflake account users
Manages quotas on the Snowflake account storage
Manages the virtual warehouse cache to speed up queries
The Snowflake Cloud Services layer coordinates activities within the Snowflake account. It is responsible for tasks such as authentication, infrastructure management, metadata management, query parsing and optimization, and access control. References: Based on general cloud database architecture knowledge.
Which clients does Snowflake support Multi-Factor Authentication (MFA) token caching for? (Select TWO).
GO driver
Node.js driver
ODBC driver
Python connector
Spark connector
Snowflake supports Multi-Factor Authentication (MFA) token caching for clients that connect through the ODBC driver, the JDBC driver, the Python connector, and SnowSQL. Of the options listed, the ODBC driver and the Python connector are correct; caching the MFA token reduces the need for a new MFA challenge on each connection attempt.
What happens when a Snowflake user changes the data retention period at the schema level?
All child objects will retain data for the new retention period.
All child objects that do not have an explicit retention period will automatically inherit the new retention period.
All child objects with an explicit retention period will be overridden with the new retention period.
All explicit child object retention periods will remain unchanged.
When the data retention period is changed at the schema level, all child objects that do not have an explicit retention period set will inherit the new retention period from the schema.
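Example usage (illustrative; my_schema is a placeholder name):
ALTER SCHEMA my_schema SET DATA_RETENTION_TIME_IN_DAYS = 30;
-- Child tables without an explicit DATA_RETENTION_TIME_IN_DAYS now inherit the 30-day period.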
How can a Snowflake user access a JSON object, given the following table? (Select TWO).
src:salesperson.name
src:salesPerson.name
src:salesperson.Name
SRC:salesperson.name
SRC:salesperson.Name
To access a JSON object in Snowflake, the column name is followed by a colon and the path to the attribute. The column name (src or SRC) is case-insensitive, but JSON attribute names in the path are case-sensitive, so the path must match the casing stored in the data (here, salesperson.name). Therefore src:salesperson.name and SRC:salesperson.name are the valid choices. References: [COF-C02] SnowPro Core Certification Exam Study Guide
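Example usage (illustrative; assumes a table my_table with a VARIANT column named src):
SELECT src:salesperson.name FROM my_table;
SELECT SRC:salesperson.name FROM my_table; -- equivalent, since the column name is case-insensitive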
What does a Notify & Suspend action for a resource monitor do?
Send an alert notification to all account users who have notifications enabled.
Send an alert notification to all virtual warehouse users when thresholds over 100% have been met.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses after all statements being executed by the warehouses have completed.
Send a notification to all account administrators who have notifications enabled, and suspend all assigned warehouses immediately, canceling any statements being executed by the warehouses.
The Notify & Suspend action for a resource monitor in Snowflake sends a notification to all account administrators who have notifications enabled and suspends all assigned warehouses. However, the suspension only occurs after all currently running statements in the warehouses have been completed. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A company needs to read multiple terabytes of data for an initial load as part of a Snowflake migration. The company can control the number and size of CSV extract files.
How does Snowflake recommend maximizing the load performance?
Use auto-ingest Snowpipes to load large files in a serverless model.
Produce the largest files possible, reducing the overall number of files to process.
Produce a larger number of smaller files and process the ingestion with size Small virtual warehouses.
Use an external tool to issue batched row-by-row inserts within BEGIN TRANSACTION and COMMIT commands.
Snowflake’s documentation recommends producing the largest files possible for data loading, as larger files reduce the number of files to process and the overhead associated with handling many small files. This approach can maximize the load performance by leveraging Snowflake’s ability to ingest large files efficiently. References: [COF-C02] SnowPro Core Certification Exam Study Guide
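Example usage (illustrative; my_table and my_stage are placeholder names) for a bulk load of staged CSV files:
COPY INTO my_table FROM @my_stage FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);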
What function can be used with the recursive argument to return a list of distinct key names in all nested elements in an object?
FLATTEN
GET_PATH
CHECK_JSON
PARSE JSON
The FLATTEN function can be used with the recursive argument to return a list of distinct key names in all nested elements within an object. This function is particularly useful for working with semi-structured data in Snowflake.
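Example usage (illustrative; assumes a table with a VARIANT column named src):
SELECT DISTINCT f.key
FROM my_table,
LATERAL FLATTEN(INPUT => src, RECURSIVE => TRUE) f
WHERE f.key IS NOT NULL;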
What objects in Snowflake are supported by Dynamic Data Masking? (Select TWO).'
Views
Materialized views
Tables
External tables
Future grants
Dynamic Data Masking in Snowflake supports tables and views. These objects can have masking policies applied to their columns to dynamically mask data at query time.
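A minimal sketch (the policy logic and object names are placeholders):
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() = 'ANALYST' THEN val ELSE '*****' END;
ALTER TABLE my_table MODIFY COLUMN email SET MASKING POLICY email_mask;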
Which object can be used with Secure Data Sharing?
View
Materialized view
External table
User-Defined Function (UDF)
Views can be used with Secure Data Sharing in Snowflake; view objects are shared as secure views so that the underlying data and the view definition remain protected. Materialized views, external tables, and UDFs are not typically shared directly for security and performance reasons.
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
When using the ALLOW_CLIENT_MFA_CACHING parameter, a cached Multi-Factor Authentication (MFA) token is valid for up to 4 hours. This allows for continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake within this timeframe.
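Example usage (illustrative; this is an account-level parameter):
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;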
Which solution improves the performance of point lookup queries that return a small number of rows from large tables using highly selective filters?
Automatic clustering
Materialized views
Query acceleration service
Search optimization service
The search optimization service improves the performance of point lookup queries on large tables by using selective filters to quickly return a small number of rows. It creates an optimized data structure that helps in pruning the micro-partitions that do not contain the queried values. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake view is used to support compliance auditing?
ACCESS_HISTORY
COPY_HISTORY
QUERY_HISTORY
ROW_ACCESS_POLICIES
The ACCESS_HISTORY view in Snowflake is utilized to support compliance auditing. It provides detailed information on data access within Snowflake, including reads and writes by user queries. This view is essential for regulatory compliance auditing as it offers insights into the usage of tables and columns, and maintains a direct link between the user, the query, and the accessed data.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
How can a Snowflake administrator determine which user has accessed a database object that contains sensitive information?
Review the granted privileges to the database object.
Review the row access policy for the database object.
Query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema.
Query the REPLICATION_USAGE_HISTORY view in the ORGANIZATION_USAGE schema.
To determine which user has accessed a database object containing sensitive information, a Snowflake administrator can query the ACCESS_HISTORY view in the ACCOUNT_USAGE schema, which provides information about access to database objects.
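Example usage (illustrative) querying recent access records:
SELECT user_name, query_id, query_start_time, direct_objects_accessed
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP());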
Which data types can be used in Snowflake to store semi-structured data? (Select TWO)
ARRAY
BLOB
CLOB
JSON
VARIANT
Snowflake supports the storage of semi-structured data using the ARRAY and VARIANT data types. The ARRAY data type can directly contain VARIANT, and thus indirectly contain any other data type, including itself. The VARIANT data type can store a value of any other type, including OBJECT and ARRAY, and is often used to represent semi-structured data formats like JSON, Avro, ORC, Parquet, or XML.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake data types can be used to build nested hierarchical data? (Select TWO)
INTEGER
OBJECT
VARIANT
VARCHAR
LIST
The Snowflake data types that can be used to build nested hierarchical data are OBJECT and VARIANT. These data types support the storage and querying of semi-structured data, allowing for the creation of complex, nested data structures.
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
File URL
Staged URL
Scoped URL
Pre-signed URL
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data?
Standard Edition
Enterprise Edition
Business Critical Edition
Virtual Private Snowflake Edition
The minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data is the Business Critical Edition. This edition offers enhanced security features necessary for compliance with regulations such as HIPAA and HITRUST CSF.
For which use cases is running a virtual warehouse required? (Select TWO).
When creating a table
When loading data into a table
When unloading data from a table
When executing a show command
When executing a list command
Running a virtual warehouse is required when loading data into a table and when unloading data from a table, because these operations require compute resources that are provided by the virtual warehouse.
Which Snowflake table objects can be shared with other accounts? (Select TWO).
Temporary tables
Permanent tables
Transient tables
External tables
User-Defined Table Functions (UDTFs)
In Snowflake, permanent tables and external tables can be shared with other accounts using Secure Data Sharing. Temporary tables, transient tables, and UDTFs are not shareable objects.
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes select * from TBL1 ;?
The temporary table will take precedence over the permanent table.
The permanent table will take precedence over the temporary table.
An error will say there cannot be two tables with the same name in a schema.
The table that was created most recently will take precedence over the older table.
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence over the permanent table within the session where the temporary table was created.
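Example usage (illustrative) showing the temporary table shadowing the permanent one within the session:
CREATE TABLE TBL1 (id INT);            -- permanent table
CREATE TEMPORARY TABLE TBL1 (id INT);  -- temporary table with the same name
SELECT * FROM TBL1;                    -- resolves to the temporary table in this session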
What happens to the objects in a reader account when the DROP MANAGED ACCOUNT command is executed?
The objects are dropped.
The objects enter the Fail-safe period.
The objects enter the Time Travel period.
The objects are immediately moved to the provider account.
When the DROP MANAGED ACCOUNT command is executed in Snowflake, it removes the managed account, including all objects created within the account, and access to the account is immediately restricted.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
How is unstructured data retrieved from data storage?
SQL functions like the GET command can be used to copy the unstructured data to a location on the client.
SQL functions can be used to create different types of URLs pointing to the unstructured data. These URLs can be used to download the data to a client.
SQL functions can be used to retrieve the data from the query results cache. When the query results are output to a client, the unstructured data will be output to the client as files.
SQL functions can call on different web extensions designed to display different types of files as a web page. The web extensions will allow the files to be downloaded to the client.
Unstructured data stored in Snowflake can be retrieved by using SQL functions to generate URLs that point to the data. These URLs can then be used to download the data directly to a client.
What are key characteristics of virtual warehouses in Snowflake? (Select TWO).
Warehouses that are multi-cluster can have nodes of different sizes.
Warehouses can be started and stopped at any time.
Warehouses can be resized at any time, even while running.
Warehouses are billed on a per-minute usage basis.
Warehouses can only be used for querying and cannot be used for data loading.
Virtual warehouses in Snowflake can be started and stopped at any time, providing flexibility in managing compute resources. They can also be resized at any time, even while running, to accommodate varying workloads. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake function is maintained separately from the data and helps to support features such as Time Travel, Secure Data Sharing, and pruning?
Column compression
Data clustering
Micro-partitioning
Metadata management
Micro-partitioning is a Snowflake function that is maintained separately from the data and supports features such as Time Travel, Secure Data Sharing, and pruning. It allows Snowflake to efficiently manage and query large datasets by organizing them into micro-partitions.
What is the purpose of the STRIP_NULL_VALUES file format option when loading semi-structured data files into Snowflake?
It removes null values from all columns in the data.
It converts null values to empty strings during loading.
It skips rows with null values during the loading process.
It removes object or array elements containing null values.
The STRIP_NULL_VALUES file format option, when set to TRUE, removes object or array elements that contain null values during the loading process of semi-structured data files into Snowflake. This ensures that the data loaded into Snowflake tables does not contain these null elements, which can be useful when the “null” values in files indicate missing values and have no other special meaning.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
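Example usage (illustrative) defining a JSON file format with this option:
CREATE FILE FORMAT my_json_format TYPE = JSON STRIP_NULL_VALUES = TRUE;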
What factors impact storage costs in Snowflake? (Select TWO).
The account type
The storage file format
The cloud region used by the account
The type of data being stored
The cloud platform being used
The factors that impact storage costs in Snowflake include the account type (Capacity or On Demand) and the cloud region used by the account. These factors determine the rate at which storage is billed, with different regions potentially having different rates.
When enabling access to unstructured data, which URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens?
File URL
Scoped URL
Relative URL
Pre-Signed URL
A Scoped URL permits temporary access to a staged file without the need to grant privileges to the stage or to issue access tokens. It provides a secure way to share access to files stored in Snowflake.
Which Snowflake function will parse a JSON-null into a SQL-null?
TO_CHAR
TO_VARIANT
TO_VARCHAR
STRIP_NULL_VALUE
The STRIP_NULL_VALUE function in Snowflake is used to convert a JSON null value into a SQL NULL value.
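Example usage (illustrative; assumes a VARIANT column src with a possibly-null attribute):
SELECT STRIP_NULL_VALUE(src:contact) FROM my_table; -- returns SQL NULL when src:contact is a JSON null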
What is the relationship between a Query Profile and a virtual warehouse?
A Query Profile can help users right-size virtual warehouses.
A Query Profile defines the hardware specifications of the virtual warehouse.
A Query Profile can help determine the number of virtual warehouses available.
A Query Profile automatically scales the virtual warehouse based on the query complexity.
 A Query Profile provides detailed execution information for a query, which can be used to analyze the performance and behavior of queries. This information can help users optimize and right-size their virtual warehouses for better efficiency. References: [COF-C02] SnowPro Core Certification Exam Study Guide
While working with unstructured data, which file function generates a Snowflake-hosted file URL to a staged file using the stage name and relative file path as inputs?
GET_PRESIGNED_URL
GET_ABSOLUTE_PATH
BUILD_STAGE_FILE_URL
BUILD_SCOPED_FILE_URL
The BUILD_STAGE_FILE_URL function generates a Snowflake-hosted file URL to a staged file using the stage name and relative file path as inputs.
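Example usage (illustrative; the stage name and path are placeholders):
SELECT BUILD_STAGE_FILE_URL(@my_stage, 'reports/file.pdf');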
Which command is used to start configuring Snowflake for Single Sign-On (SSO)?
CREATE SESSION POLICY
CREATE NETWORK RULE
CREATE SECURITY INTEGRATION
CREATE PASSWORD POLICY
To start configuring Snowflake for Single Sign-On (SSO), the CREATE SECURITY INTEGRATION command is used. This command sets up a security integration object in Snowflake, which is necessary for enabling SSO with external identity providers using SAML 2.0.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
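A minimal sketch (the issuer, SSO URL, and certificate values are placeholders that depend on the identity provider):
CREATE SECURITY INTEGRATION my_idp
TYPE = SAML2
ENABLED = TRUE
SAML2_ISSUER = 'https://idp.example.com'
SAML2_SSO_URL = 'https://idp.example.com/sso'
SAML2_PROVIDER = 'CUSTOM'
SAML2_X509_CERT = '<certificate>';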
How can a dropped internal stage be restored?
Enable Time Travel.
Clone the dropped stage.
Execute the UNDROP command.
Recreate the dropped stage.
Once an internal stage is dropped in Snowflake, it cannot be recovered or restored using Time Travel or UNDROP commands. The only option is to recreate the dropped stage.
Which views are included in the DATA SHARING USAGE schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
The DATA_SHARING_USAGE schema includes views that display information about listings published in the Snowflake Marketplace or a data exchange; of the options listed, these are MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY. ACCESS_HISTORY, DATA_TRANSFER_HISTORY, and WAREHOUSE_METERING_HISTORY belong to the ACCOUNT_USAGE schema.
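Example usage (illustrative):
SELECT * FROM SNOWFLAKE.DATA_SHARING_USAGE.LISTING_TELEMETRY_DAILY;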
What is the primary purpose of a directory table in Snowflake?
To store actual data from external stages
To automatically expire file URLs for security
To manage user privileges and access control
To store file-level metadata about data files in a stage
A directory table in Snowflake is used to store file-level metadata about the data files in a stage. It is conceptually similar to an external table and provides information such as file size, last modified timestamp, and file URL. References: [COF-C02] SnowPro Core Certification Exam Study Guide
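Example usage (illustrative; my_stage is a placeholder name) enabling and querying a directory table:
CREATE STAGE my_stage DIRECTORY = (ENABLE = TRUE);
SELECT relative_path, size, last_modified, file_url FROM DIRECTORY(@my_stage);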
How can performance be optimized for a query that returns a small amount of data from a very large base table?
Use clustering keys
Create materialized views
Use the search optimization service
Use the query acceleration service
The search optimization service in Snowflake is designed to improve the performance of selective point lookup queries on large tables, which is ideal for scenarios where a query returns a small amount of data from a very large base table. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What information is found within the Statistic output in the Query Profile Overview?
Operator tree
Table pruning
Most expensive nodes
Nodes by execution time
The Statistic output in the Query Profile Overview of Snowflake provides detailed insights into the performance of different parts of the query. Specifically, it highlights the "Most expensive nodes," which are the operations or steps within the query execution that consume the most resources, such as CPU and memory. Identifying these nodes helps in pinpointing performance bottlenecks and optimizing query execution by focusing efforts on the most resource-intensive parts of the query.
References:
Snowflake Documentation on Query Profile Overview: It details the components of the profile overview, emphasizing how to interpret the statistics section to improve query performance by understanding which nodes are most resource-intensive.
QUESTION NO: 582
How do secure views compare to non-secure views in Snowflake?
A. Secure views execute slowly compared to non-secure views.
B. Non-secure views are preferred over secure views when sharing data.
C. Secure views are similar to materialized views in that they are the most performant.
D. There are no performance differences between secure and non-secure views.
Answer: D
Secure views and non-secure views in Snowflake are differentiated primarily by their handling of data access and security rather than performance characteristics. A secure view enforces row-level security and ensures that the view definition is hidden from the users. However, in terms of performance, secure views do not inherently execute slower or faster than non-secure views. The performance of both types of views depends more on other factors such as underlying table design, query complexity, and system workload rather than the security features embedded in the views themselves.
References:
Snowflake Documentation on Views: This section provides an overview of both secure and non-secure views, clarifying that the main difference lies in security features rather than performance, thus supporting the assertion that there are no inherent performance differences.
QUESTION NO: 583
When using SnowSQL, which configuration options are required when unloading data from a SQL query run on a local machine? (Select TWO).
A. echo
B. quiet
C. output_file
D. output_format
E. force_put_overwrite
Answer: C, D
When unloading data via SnowSQL (Snowflake's command-line client) to a file on a local machine, you need to specify certain configuration options to determine how and where the data should be outputted. The correct configuration options required are:
C. output_file: This configuration option specifies the file path where the output from the query should be stored. It is essential for directing the results of your SQL query into a local file, rather than just displaying it on the screen.
D. output_format: This option determines the format of the output file (e.g., CSV, JSON, etc.). It is crucial for ensuring that the data is unloaded in a structured format that meets the requirements of downstream processes or systems.
These options are specified in the SnowSQL configuration file or directly in the SnowSQL command line. The configuration file allows users to set defaults and customize their usage of SnowSQL, including output preferences for unloading data.
References:
Snowflake Documentation: SnowSQL (CLI Client) at Snowflake Documentation
Snowflake Documentation: Configuring SnowSQL at Snowflake Documentation
QUESTION NO: 584
How can a Snowflake user post-process the result of SHOW FILE FORMATS?
A. Use the RESULT_SCAN function.
B. Create a CURSOR for the command.
C. Put it in the FROM clause in brackets.
D. Assign the command to RESULTSET.
Answer: A
First run SHOW FILE FORMATS;
then run SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID(-1)));
https://docs.snowflake.com/en/sql-reference/functions/result_scan#usage-notes
QUESTION NO: 585
Which file function gives a user or application access to download unstructured data from a Snowflake stage?
A. BUILD_SCOPED_FILE_URL
B. BUILD_STAGE_FILE_URL
C. GET_PRESIGNED_URL
D. GET STAGE LOCATION
Answer: C
The function that provides access to download unstructured data from a Snowflake stage is:
C. GET_PRESIGNED_URL: This function generates a presigned URL for a single file within a stage. The generated URL can be used to directly access or download the file without needing to go through Snowflake. This is particularly useful for unstructured data such as images, videos, or large text files, where direct access via a URL is needed outside of the Snowflake environment.
Example usage:
SELECT GET_PRESIGNED_URL('stage_name', 'file_path');
This function simplifies the process of securely sharing or accessing files stored in Snowflake stages with external systems or users.
References:
Snowflake Documentation: GET_PRESIGNED_URL Function at Snowflake Documentation
QUESTION NO: 586
When should a multi-cluster virtual warehouse be used in Snowflake?
A. When queuing is delaying query execution on the warehouse
B. When there is significant disk spilling shown on the Query Profile
C. When dynamic vertical scaling is being used in the warehouse
D. When there are no concurrent queries running on the warehouse
Answer: A
A multi-cluster virtual warehouse in Snowflake is designed to handle high concurrency and workload demands by allowing multiple clusters of compute resources to operate simultaneously. The correct scenario to use a multi-cluster virtual warehouse is:
A. When queuing is delaying query execution on the warehouse: Multi-cluster warehouses are ideal when the demand for compute resources exceeds the capacity of a single cluster, leading to query queuing. By enabling additional clusters, you can distribute the workload across multiple compute clusters, thereby reducing queuing and improving query performance.
This is especially useful in scenarios with fluctuating workloads or where it's critical to maintain low response times for a large number of concurrent queries.
References:
Snowflake Documentation: Multi-Cluster Warehouses at Snowflake Documentation
QUESTION NO: 587
A JSON object is loaded into a column named data using a Snowflake variant datatype. The root node of the object is BIKE. The child attribute for this root node is BIKEID.
Which statement will allow the user to access BIKEID?
A. select data:BIKEID
B. select data.BIKE.BIKEID
C. select data:BIKE.BIKEID
D. select data:BIKE:BIKEID
Answer: C
In Snowflake, when accessing elements within a JSON object stored in a variant column, the correct syntax involves using a colon (:) to navigate the JSON structure. The BIKEID attribute, which is a child of the BIKE root node in the JSON object, is accessed using data:BIKE.BIKEID. This syntax correctly references the path through the JSON object, utilizing the colon for JSON field access and dot notation to traverse the hierarchy within the variant structure.
References: Snowflake documentation on accessing semi-structured data, which outlines how to use the colon and dot notations for navigating JSON structures stored in variant columns.
QUESTION NO: 588
Which Snowflake tool is recommended for data batch processing?
A. SnowCD
B. SnowSQL
C. Snowsight
D. The Snowflake API
Answer: B
For data batch processing in Snowflake, the recommended tool is:
B. SnowSQL: SnowSQL is the command-line client for Snowflake. It allows for executing SQL queries, scripts, and managing database objects. It's particularly suitable for batch processing tasks due to its ability to run SQL scripts that can execute multiple commands or queries in sequence, making it ideal for automated or scheduled tasks that require bulk data operations.
SnowSQL provides a flexible and powerful way to interact with Snowflake, supporting operations such as loading and unloading data, executing complex queries, and managing Snowflake objects from the command line or through scripts.
References:
Snowflake Documentation: SnowSQL (CLI Client) at Snowflake Documentation
QUESTION NO: 589
How does the Snowflake search optimization service improve query performance?
A. It improves the performance of range searches.
B. It defines different clustering keys on the same source table.
C. It improves the performance of all queries running against a given table.
D. It improves the performance of equality searches.
Answer: D
The Snowflake Search Optimization Service is designed to enhance the performance of specific types of queries on large tables. The correct answer is:
D. It improves the performance of equality searches: The service optimizes the performance of queries that use equality search conditions (e.g., WHERE column = value). It creates and maintains a search index on the table's columns, which significantly speeds up the retrieval of rows based on those equality search conditions.
This optimization is particularly beneficial for large tables where traditional scans might be inefficient for equality searches. By using the Search Optimization Service, Snowflake can leverage the search indexes to quickly locate the rows that match the search criteria without scanning the entire table.
References:
Snowflake Documentation: Search Optimization Service at Snowflake Documentation
QUESTION NO: 590
What compute resource is used when loading data using Snowpipe?
A. Snowpipe uses virtual warehouses provided by the user.
B. Snowpipe uses an Apache Kafka server for its compute resources.
C. Snowpipe uses compute resources provided by Snowflake.
D. Snowpipe uses cloud platform compute resources provided by the user.
Answer: C
Snowpipe is Snowflake's continuous data ingestion service that allows for loading data as soon as it's available in a cloud storage stage. Snowpipe uses compute resources managed by Snowflake, separate from the virtual warehouses that users create for querying data. This means that Snowpipe operations do not consume the computational credits of user-created virtual warehouses, offering an efficient and cost-effective way to continuously load data into Snowflake.
References:
Snowflake Documentation: Understanding Snowpipe
QUESTION NO: 591
What is one of the characteristics of data shares?
A. Data shares support full DML operations.
B. Data shares work by copying data to consumer accounts.
C. Data shares utilize secure views for sharing view objects.
D. Data shares are cloud agnostic and can cross regions by default.
Answer: C
Data sharing in Snowflake allows for live, read-only access to data across different Snowflake accounts without the need to copy or transfer the data. One of the characteristics of data shares is the ability to use secure views. Secure views are used within data shares to restrict the access of shared data, ensuring that consumers can only see the data that the provider intends to share, thereby preserving privacy and security.
References:
Snowflake Documentation: Understanding Secure Views in Data Sharing
QUESTION NO: 592
Which DDL/DML operation is allowed on an inbound data share?
A. ALTER TABLE
B. INSERT INTO
C. MERGE
D. SELECT
Answer: D
In Snowflake, an inbound data share refers to the data shared with an account by another account. The only DDL/DML operation allowed on an inbound data share is SELECT. This restriction ensures that the shared data remains read-only for the consuming account, maintaining the integrity and ownership of the data by the sharing account.
References:
Snowflake Documentation: Using Data Shares
QUESTION NO: 593
In Snowflake, the use of federated authentication enables which Single Sign-On (SSO) workflow activities? (Select TWO).
A. Authorizing users
B. Initiating user sessions
C. Logging into Snowflake
D. Logging out of Snowflake
E. Performing role authentication
Answer: B, C
Federated authentication in Snowflake allows users to use their organizational credentials to log in to Snowflake, leveraging Single Sign-On (SSO). The key activities enabled by this setup include:
B. Initiating user sessions: Federated authentication streamlines the process of starting a user session in Snowflake by using the existing authentication mechanisms of an organization.
C. Logging into Snowflake: It simplifies the login process, allowing users to authenticate with their organization's identity provider instead of managing separate credentials for Snowflake.
References:
Snowflake Documentation: Configuring Federated Authentication
QUESTION NO: 594
A user wants to upload a file to an internal Snowflake stage using a put command.
Which tools and or connectors could be used to execute this command? (Select TWO).
A. SnowCD
B. SnowSQL
C. SQL API
D. Python connector
E. Snowsight worksheets
Answer: B, E
To upload a file to an internal Snowflake stage using a PUT command, you can use:
B. SnowSQL: SnowSQL, the command-line client for Snowflake, supports the PUT command, allowing users to upload files directly to Snowflake stages from their local file systems.
E. Snowsight worksheets: Snowsight, the web interface for Snowflake, provides a user-friendly environment for executing SQL commands, including the PUT command, through its interactive worksheets.
References:
Snowflake Documentation: Loading Data into Snowflake using SnowSQL
Snowflake Documentation: Using Snowsight
Which statistics can be used to identify queries that have inefficient pruning? (Select TWO).
Bytes scanned
Bytes written to result
Partitions scanned
Partitions total
Percentage scanned from cache
The statistics that can be used to identify queries with inefficient pruning are ‘Partitions scanned’ and ‘Partitions total’. These statistics indicate how much of the data was actually needed and scanned versus the total available, which can highlight inefficiencies in data pruning.
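Example usage (illustrative) for finding queries that scan most of a table's partitions:
SELECT query_id, partitions_scanned, partitions_total
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE partitions_total > 1000
AND partitions_scanned >= 0.9 * partitions_total;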
What tasks can an account administrator perform in the Data Exchange? (Select TWO).
Add and remove members.
Delete data categories.
Approve and deny listing approval requests.
Transfer listing ownership.
Transfer ownership of a provider profile.
An account administrator in the Data Exchange can perform tasks such as adding and removing members and approving or denying listing approval requests. These tasks are part of managing the Data Exchange and ensuring that only authorized listings and members are part of it.
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be assigned to which role?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
USERADMIN
Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be granted to the SYSADMIN role. This follows Snowflake's access control best practices: when all custom roles ultimately roll up to SYSADMIN, system administrators can manage the warehouses, databases, and other objects created by any custom role, while the more powerful ACCOUNTADMIN role remains reserved for account-level administration rather than day-to-day object ownership hierarchies.
References:
Snowflake Documentation on Access Control: Managing Access Control
What are characteristics of the ownership privilege when it is granted on a regular Snowflake schema? (Select TWO).
It is automatically granted to the role that creates a database object within the schema.
It allows a role to manage grants on the schema.
It can be transferred from one role to another for a specific schema.
It grants the ability to query data from the schema.
It must be granted to a role in order to alter warehouse settings.
In Snowflake, the ownership privilege for a schema includes:
Automatic granting to the creator’s role: The role that creates a database object within the schema automatically receives ownership of that object.
Ability to manage grants: Ownership enables the role to manage permissions and grants on the schema and its objects, allowing them to control access at the schema level.
Ownership does not directly confer query privileges or the ability to alter warehouse settings, nor is it transferable without specific privilege management actions.
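Example usage (illustrative) of transferring schema ownership between roles:
GRANT OWNERSHIP ON SCHEMA my_schema TO ROLE new_owner_role COPY CURRENT GRANTS;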
How can the Query Profile be used to troubleshoot a problematic query?
It will indicate if a virtual warehouse memory is too small to run the query
It will indicate if a user lacks the privileges needed to run the query.
It will indicate if a virtual warehouse is in auto-scale mode
It will indicate if the user has enough Snowflake credits to run the query
The Query Profile in Snowflake provides detailed insights into the execution of a query. It helps in troubleshooting performance issues by showing the steps of the query execution and the resources consumed. One of the key aspects it reveals is whether the virtual warehouse memory was sufficient for the query.
Access Query Profile: Navigate to the Query History page and select the query you want to analyze.
Examine Query Execution Steps: The Query Profile displays the different stages of the query execution, including the time taken and resources used at each step.
Identify Memory Issues: Look for indicators of memory issues, such as spilling to disk or memory errors, which suggest that the virtual warehouse memory might be too small.
References:
Snowflake Documentation: Query Profile
Snowflake Documentation: Optimizing Queries
Which ACCOUNT_USAGE schema database role provides visibility into policy-related information?
USAGE_VIEWER
GOVERNANCE_VIEWER
OBJECT_VIEWER
SECURITY_VIEWER
The GOVERNANCE_VIEWER role in the ACCOUNT_USAGE schema provides visibility into policy-related information within Snowflake. This role is specifically designed to access views that display object metadata and usage metrics related to governance.
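Example usage (illustrative) granting this database role:
GRANT DATABASE ROLE SNOWFLAKE.GOVERNANCE_VIEWER TO ROLE my_audit_role;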
Which function returns the URL of a stage using the stage name as the input?
BUILD_STAGE_FILE_URL
BUILD_SCOPED_FILE_URL
GET_PRESIGNED_URL
GET_STAGE_LOCATION
The function in Snowflake that returns the URL of a stage using the stage name as the input is GET_STAGE_LOCATION. This file function accepts a stage name (for example, @my_stage) and returns the URL of the external or internal named stage. By contrast, GET_PRESIGNED_URL requires both a stage name and a relative file path, and returns a pre-signed URL for a single file rather than for the stage itself; BUILD_STAGE_FILE_URL and BUILD_SCOPED_FILE_URL likewise operate on individual files.
References:
Snowflake documentation on file functions.
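Example usage (illustrative; my_stage is a placeholder name):
SELECT GET_STAGE_LOCATION(@my_stage);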
Which Snowflake object can be used to record DML changes made to a table?
Snowpipe
Stage
Stream
Task
Snowflake Streams are used to track and record Data Manipulation Language (DML) changes made to a table. Streams capture changes such as inserts, updates, and deletes, which can then be processed by other Snowflake objects or external applications.
Creating a Stream:
CREATE OR REPLACE STREAM my_stream ON TABLE my_table;
Using Streams: Streams provide a way to process changes incrementally, making it easier to build efficient data pipelines.
Consuming Stream Data: The captured changes can be consumed using SQL queries or Snowflake tasks.
References:
Snowflake Documentation: Using Streams
Snowflake Documentation: Change Data Capture (CDC) with Streams
Which types of subqueries does Snowflake support? (Select TWO).
Uncorrelated scalar subqueries in WHERE clauses
Uncorrelated scalar subqueries in any place that a value expression can be used
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be uncorrelated only
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated only
EXISTS, ANY /ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated or uncorrelated
Snowflake supports a variety of subquery types, including both correlated and uncorrelated subqueries. The correct options are uncorrelated scalar subqueries in any place that a value expression can be used, and EXISTS, ANY / ALL, and IN subqueries in WHERE clauses (correlated or uncorrelated), which highlight Snowflake's flexibility in handling subqueries within SQL queries.
Uncorrelated Scalar Subqueries: These are subqueries that can execute independently of the outer query. They return a single value and can be used anywhere a value expression is allowed, offering great flexibility in SQL queries.
EXISTS, ANY/ALL, and IN Subqueries: These subqueries are used in WHERE clauses to filter the results of the main query based on the presence or absence of matching rows in a subquery. Snowflake supports both correlated and uncorrelated versions of these subqueries, providing powerful tools for complex data analysis scenarios.
Examples and Usage:
Uncorrelated Scalar Subquery:
SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);
Correlated EXISTS Subquery:
SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM customer c WHERE c.id = o.customer_id AND c.region = 'North America');
When sharing data in Snowflake, what privileges does a provider need to grant along with a share? (Select TWO).
USAGE on the specific tables in the database.
SELECT on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share.
OPERATE on the database and the schema containing the tables to share.
When sharing data in Snowflake, the provider needs to grant the following privileges along with a share:
SELECT on the specific tables in the database: this privilege allows the consumers of the share to query the specific tables included in the share.
USAGE on the database and the schema containing the tables to share: this privilege is necessary for the consumers to access the database and schema levels, enabling them to reach the tables within those schemas.
These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only authorized users can access the specified resources.
Reference to Snowflake documentation on sharing data and managing access:
Data Sharing Overview
Privileges Required for Sharing Data
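Example usage (illustrative; the share and object names are placeholders):
CREATE SHARE my_share;
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.my_schema TO SHARE my_share;
GRANT SELECT ON TABLE my_db.my_schema.my_table TO SHARE my_share;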
Use of which file function allows a user to share unstructured data from an internal stage with an external reporting tool that does not have access to Snowflake?
BUILD_SCOPED_FILE_URL
GET_PRESIGNED_URL
BUILD_STAGE_FILE_URL
GET_STAGE_LOCATION
The GET_PRESIGNED_URL function in Snowflake generates a pre-signed URL for a file in an internal stage. This URL can be shared with external tools or users who do not have direct access to Snowflake, allowing them to download the file.
Generate Pre-Signed URL:
SELECT GET_PRESIGNED_URL(@my_stage, 'file.txt');
Share the URL: The generated URL can be shared with external users or applications, enabling them to access the file directly.
References:
Snowflake Documentation: GET_PRESIGNED_URL
Snowflake Documentation: Working with Stages
Who can create and manage reader accounts? (Select TWO).
A user with ACCOUNTADMIN role
A user with SECURITYADMIN role
A user with SYSADMIN role
A user with ORGADMIN role
A user with CREATE ACCOUNT privilege
In Snowflake, reader accounts are special types of accounts that allow data sharing with external consumers without them having their own Snowflake account. Reader accounts can be created and managed by users with the ACCOUNTADMIN role, or by users whose role has been granted the global CREATE ACCOUNT privilege. The ACCOUNTADMIN role has comprehensive administrative privileges within a Snowflake account, and the CREATE ACCOUNT privilege can be delegated to other roles so that they can create and manage reader accounts without holding full account administration rights.
References:
Snowflake Documentation: Creating and Managing Reader Accounts
Which governance feature is supported by all Snowflake editions?
Object tags
Masking policies
Row access policies
OBJECT_DEPENDENCIES View
Snowflake's governance features vary across different editions, but the OBJECT_DEPENDENCIES view is supported by all Snowflake editions. This view is available in the ACCOUNT_USAGE schema of the shared SNOWFLAKE database and is designed to help users understand the dependencies between various objects in their Snowflake environment.
The OBJECT_DEPENDENCIES view provides a way to query and analyze the relationships and dependencies among different database objects, such as tables, views, and stored procedures. This is crucial for governance, as it allows administrators and data engineers to assess the impact of changes, understand object relationships, and ensure proper management of data assets.
Object tags, masking policies, and row access policies are more advanced features that offer fine-grained data governance capabilities such as tagging objects for classification, dynamically masking sensitive data based on user roles, and controlling row-level access to data. These features may have varying levels of support across different Snowflake editions, with some features being exclusive to higher-tier editions.
Which Snowflake table type is only visible to the user who creates it, can have the same name as permanent tables in the same schema, and is dropped at the end of the session?
Temporary
Local
User
Transient
In Snowflake, a Temporary table is a type of table that is only visible to the user who creates it, can have the same name as permanent tables in the same schema, and is automatically dropped at the end of the session in which it was created. Temporary tables are designed for transient data processing needs, where data is needed for the duration of a specific task or session but not beyond. Since they are automatically cleaned up at the end of the session, they help manage storage usage efficiently and ensure that sensitive data is not inadvertently persisted.
References:
Snowflake Documentation on Temporary Tables: Temporary Tables
What command is used to export or unload data from Snowflake?
PUT @mystage
GET @mystage
COPY INTO @mystage
INSERT @mystage
The command used to export or unload data from Snowflake to a stage (such as a named internal stage, or an external stage backed by an S3 bucket, Azure Blob Storage, or Google Cloud Storage) is COPY INTO @mystage. The COPY INTO <location> command writes data from a table or query result into one or more files in the specified stage, which is critical for scenarios where data needs to be extracted from Snowflake for use in external systems, backups, or further processing.
The syntax follows the structure: COPY INTO @<stage_name> FROM <table_or_query>, optionally with a FILE_FORMAT clause controlling the format of the unloaded files.
It's important to distinguish that COPY INTO <location> is used for exporting data out of Snowflake, whereas the COPY INTO <table> command is used for importing data into Snowflake from a stage. The PUT command uploads local files to a stage, and the GET command downloads files from a stage to the local file system.
References:
Snowflake Documentation on Loading and Unloading Data: https://docs.snowflake.com/en/user-guide/data-load
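Example usage (illustrative; names are placeholders) unloading a table to a stage and then downloading the files:
COPY INTO @my_stage/unload/ FROM my_table FILE_FORMAT = (TYPE = CSV);
GET @my_stage/unload/ file:///tmp/unload/;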
What does Snowflake recommend as a best practice for using secure views?
Use sequence-generated values.
Programmatically reveal the identifiers.
Use secure views solely for query convenience.
Do not expose the sequence-generated column(s).
Snowflake recommends not exposing sequence-generated columns in secure views. Secure views are used to protect sensitive data by ensuring that users can only access data for which they have permissions. Exposing sequence-generated columns can potentially reveal information about the underlying data structure or the number of rows, which might be sensitive.
Create Secure Views: Define secure views using the SECURE keyword to ensure they comply with Snowflake's security policies.
Exclude Sensitive Columns: When creating secure views, exclude columns that might expose sensitive information, such as sequence-generated columns.
CREATE SECURE VIEW secure_view AS
SELECT col1, col2
FROM sensitive_table
WHERE sensitive_column IS NOT NULL;
References:
Snowflake Documentation: Secure Views
Snowflake Documentation: Creating Secure Views
Which virtual warehouse consideration can help lower compute resource credit consumption?
Setting up a multi-cluster virtual warehouse
Resizing the virtual warehouse to a larger size
Automating the virtual warehouse suspension and resumption settings
Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse
One key strategy to lower compute resource credit consumption in Snowflake is by automating the suspension and resumption of virtual warehouses. Virtual warehouses consume credits when they are running, and managing their operational times effectively can lead to significant cost savings.
A. Setting up a multi-cluster virtual warehouse increases parallelism and throughput but does not directly lower credit consumption. It is more about performance scaling than cost efficiency.
B. Resizing the virtual warehouse to a larger size increases the compute resources available for processing queries, which increases the credit consumption rate. This option does not help in lowering costs.
C. Automating the virtual warehouse suspension and resumption settings: This is a direct method to manage credit consumption efficiently. By automatically suspending a warehouse when it is not in use and resuming it when needed, you can avoid consuming credits during periods of inactivity. Snowflake allows warehouses to be configured to automatically suspend after a specified period of inactivity and to automatically resume when a query is submitted that requires the warehouse.
D. Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse would potentially increase credit consumption by allowing more clusters to run simultaneously. It is used to scale up resources for performance, not to reduce costs.
Automating the operational times of virtual warehouses ensures that you only consume compute credits when the warehouse is actively being used for queries, thereby optimizing your Snowflake credit usage.
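Example usage (illustrative; my_wh is a placeholder name) automating suspension and resumption:
ALTER WAREHOUSE my_wh SET AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;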
Snowflake users can create a resource monitor at which levels? (Select TWO).
User level
Pipe level
Account level
Cloud services level
Virtual warehouse level
Resource monitors in Snowflake are tools used to track and control the consumption of compute resources, ensuring that usage stays within defined limits. These monitors can be created at the account level, allowing administrators to set overall resource consumption limits for the entire Snowflake account. Additionally, resource monitors can be set at the virtual warehouse level, enabling more granular control over the resources consumed by individual warehouses. This dual-level capability allows organizations to manage their Snowflake usage efficiently, preventing unexpected costs and optimizing performance.
References: Snowflake Documentation on Resource Monitors
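Example usage (illustrative; the quota and names are placeholders):
CREATE RESOURCE MONITOR my_monitor WITH CREDIT_QUOTA = 100
TRIGGERS ON 90 PERCENT DO NOTIFY
ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = my_monitor;
-- or, at the account level: ALTER ACCOUNT SET RESOURCE_MONITOR = my_monitor;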
Given the statement template below, which database objects can be added to a share? (Select TWO).
GRANT
Secure functions
Stored procedures
Streams
Tables
Tasks
In Snowflake, shares are used to share data across different Snowflake accounts securely. When you create a share, you can include various database objects that you want to share with consumers. According to Snowflake's documentation, the types of objects that can be shared include tables, secure views, secure materialized views, and streams. Secure functions and stored procedures are not shareable objects. Tasks also cannot be shared directly. Therefore, the correct answers are streams (C) and tables (D).
To share a stream or a table, you use the GRANT statement to grant privileges on these objects to a share. The syntax for sharing a table or stream involves specifying the type of object, the object name, and the share to which you are granting access. For example:
GRANT SELECT ON TABLE my_table TO SHARE my_share;
GRANT SELECT ON STREAM my_stream TO SHARE my_share;
These commands grant the SELECT privilege on a table named my_table and a stream named my_stream to a share named my_share. This enables the consumer of the share to access these objects according to the granted privileges.
What virtual warehouse configuration should be used when processing a large number of complex queries?
Use the auto-resume feature.
Run the warehouse in auto-scale mode.
Increase the size of the warehouse.
Increase the number of warehouse clusters.
To handle a large number of complex queries, configuring the warehouse in auto-scale mode by increasing the number of warehouse clusters is recommended. This setup allows Snowflake to dynamically add clusters as demand increases, ensuring better performance and concurrency. Increasing the number of clusters provides scalability for concurrent users and heavy workloads, improving response times without impacting individual query performance.
A user needs to MINIMIZE the cost of large tables that are used to store transitory data. The data does not need to be protected against failures, because the data can be reconstructed outside of Snowflake.
What table type should be used?
Permanent
Transient
Temporary
External
For minimizing the cost of large tables that are used to store transitory data, which does not need to be protected against failures because it can be reconstructed outside of Snowflake, the best table type to use is Transient. Transient tables in Snowflake are designed for temporary or transitory data storage and offer reduced storage costs compared to permanent tables. However, unlike temporary tables, they persist across sessions until explicitly dropped.
Why Transient Tables: Transient tables provide a cost-effective solution for storing data that is temporary but needs to be available longer than a single session. They have lower data storage costs because Snowflake does not maintain historical data (Time Travel) for as long as it does for permanent tables.
Creating a Transient Table:
To create a transient table, use the TRANSIENT keyword in the CREATE TABLE statement:
CREATE TRANSIENT TABLE my_transient_table (...);
Use Case Considerations: Transient tables are ideal for scenarios where the data is not critical, can be easily recreated, and where cost optimization is a priority. They are suitable for development, testing, or staging environments where data longevity is not a concern.
What are the main differences between the account usage views and the information schema views? (Select TWO).
No active warehouse is needed to query account usage views, but one is needed to query information schema views.
Account usage views do not contain data about tables but information schema views do.
Account usage views contain dropped objects but information schema views do not.
Data retention for account usage views is 1 year but is 7 days to 6 months for information schema views, depending on the view.
Information schema views are read-only but account usage views are not.
The account usage views in Snowflake provide historical usage data about the Snowflake account, and they retain this data for a period of up to 1 year. These views include information about dropped objects, enabling audit and tracking activities. On the other hand, information schema views provide metadata about database objects currently in use, such as tables and views, but do not include dropped objects. The retention of data in information schema views varies, but it is generally shorter than the retention for account usage views, ranging from 7 days to a maximum of 6 months, depending on the specific view.
References: Snowflake Documentation on Account Usage and Information Schema
What action should be taken if a Snowflake user wants to share a newly created object in a database with consumers?
Use the automatic sharing feature for seamless access.
Drop the object and then re-add it to the database to trigger sharing.
Recreate the object with a different name in the database before sharing.
Use the GRANT <privilege> ... TO SHARE command to grant the necessary privileges.
When a Snowflake user wants to share a newly created object in a database with consumers, the correct action to take is to use the GRANT privilege ... TO SHARE command to grant the necessary privileges for the object to be shared. This approach allows the object owner or a user with the appropriate privileges to share database objects such as tables, secure views, and streams with other Snowflake accounts by granting access to a named share.
The GRANT statement specifies which privileges are granted on the object to the share. The object remains in its original location; sharing does not duplicate or move the object. Instead, it allows the specified share to access the object according to the granted privileges.
For example, to share a table, the command would be:
GRANT SELECT ON TABLE new_table TO SHARE consumer_share;
This command grants the SELECT privilege on a table named new_table to a share named consumer_share, enabling the consumers of the share to query the table.
Automatic sharing, dropping and re-adding the object, or recreating the object with a different name are not required or recommended practices for sharing objects in Snowflake. The use of the GRANT statement to a share is the direct and intended method for this purpose.
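As a fuller hedged sketch (the share, database, schema, table, and account names are illustrative), sharing a new table typically also requires granting usage on its parent database and schema:
CREATE SHARE consumer_share;
GRANT USAGE ON DATABASE my_db TO SHARE consumer_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE consumer_share;
GRANT SELECT ON TABLE my_db.public.new_table TO SHARE consumer_share;
ALTER SHARE consumer_share ADD ACCOUNTS = consumer_account;  -- make the share visible to a consumer account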
What are potential impacts of storing non-native values like dates and timestamps in a variant column in Snowflake?
Faster query performance and increased storage consumption
Slower query performance and increased storage consumption
Faster query performance and decreased storage consumption
Slower query performance and decreased storage consumption
Storing non-native values, such as dates and timestamps, in a VARIANT column in Snowflake can lead to slower query performance and increased storage consumption. VARIANT is a semi-structured data type that allows storing JSON, AVRO, ORC, Parquet, or XML data in a single column. When non-native data types are stored as VARIANT, Snowflake must perform implicit conversion to process these values, which can slow down query execution. Additionally, because the VARIANT data type is designed to accommodate a wide variety of data formats, it often requires more storage space compared to storing data in native, strongly-typed columns that are optimized for specific data types.
The performance impact arises from the need to parse and interpret the semi-structured data on the fly during query execution, as opposed to directly accessing and operating on optimally stored data in its native format. Furthermore, the increased storage consumption is a result of the overhead associated with storing data in a format that is less space-efficient than the native formats optimized for specific types of data.
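One common mitigation, sketched here with hypothetical table and field names (assuming a source table events_raw with a VARIANT column src), is to extract dates and timestamps into native, typed columns at load time instead of repeatedly casting them out of the VARIANT:
CREATE TABLE events_typed AS
SELECT
    src:event_id::NUMBER          AS event_id,
    src:event_time::TIMESTAMP_NTZ AS event_time,  -- stored natively, not as text inside a VARIANT
    src                           AS raw_payload  -- keep the original document if it is still needed
FROM events_raw;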
References:
Snowflake Documentation on Semi-Structured Data: Semi-Structured Data
Which security models are used in Snowflake to manage access control? (Select TWO).
Discretionary Access Control (DAC)
Identity and Access Management (IAM)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
Security Assertion Markup Language (SAML)
Snowflake uses both Discretionary Access Control (DAC) and Role-Based Access Control (RBAC) to manage access control. DAC allows object owners to grant access privileges to other users. RBAC assigns permissions to roles, and roles are then granted to users, making it easier to manage permissions based on user roles within the organization.
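A minimal sketch showing both models working together (the role, object, and user names are hypothetical):
GRANT SELECT ON TABLE sales.public.orders TO ROLE analyst;  -- DAC: the object owner grants a privilege
GRANT ROLE analyst TO USER alice;                           -- RBAC: the role, and its privileges, go to a user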
References:
Snowflake Documentation: Access Control in Snowflake
Which system-defined, read-only view displays information on column lineage, specifying how data flows from source to target in a SQL write operation?
ACCESS_HISTORY
LOAD_HISTORY
QUERY_HISTORY
COPY_HISTORY
In Snowflake, the system-defined, read-only view that displays information on column lineage, which specifies how data flows from source to target in a SQL write operation, is ACCESS_HISTORY. This view is instrumental in auditing and analyzing data access patterns, as it provides detailed insights into how and from where the data is being accessed and manipulated within Snowflake.
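For illustration, a hedged query against the view (available in the shared SNOWFLAKE database on Enterprise Edition and higher) that unnests the OBJECTS_MODIFIED column to show write targets:
SELECT query_id,
       om.value:"objectName"::STRING AS target_object
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY,
     LATERAL FLATTEN(input => objects_modified) om  -- one row per object written by each query
ORDER BY query_start_time DESC;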
Reference to Snowflake documentation on ACCESS_HISTORY:
Using Access History to Audit Data Access
A query containing a WHERE clause is running longer than expected. The Query Profile shows that all micro-partitions are being scanned. How should this query be optimized?
Create a view on the table.
Add a clustering key to the table.
Add a limit clause to the query.
Add a Dynamic Data Masking policy to the table.
When a query containing a WHERE clause is running longer than expected, and the Query Profile shows that all micro-partitions are being scanned, the query can be optimized by adding a clustering key to the table.
Understanding Micro-Partitioning in Snowflake:
Snowflake automatically partitions tables into micro-partitions for efficient storage and query performance.
Each micro-partition contains metadata about the range of values it holds, which helps in pruning irrelevant partitions during query execution.
Role of Clustering Keys:
A clustering key defines how data in a table is organized within micro-partitions.
By specifying a clustering key, you can control the physical layout of data, ensuring that related rows are stored together.
This organization improves query performance by reducing the number of micro-partitions that need to be scanned.
Optimizing Queries with Clustering Keys:
Adding a clustering key based on columns frequently used in WHERE clauses helps Snowflake quickly locate and scan relevant micro-partitions.
This minimizes the amount of data scanned and reduces query execution time.
Example:
ALTER TABLE my_table CLUSTER BY (column1, column2);
This command adds a clustering key to my_table using column1 and column2.
Future queries that filter on these columns will benefit from improved performance.
Benefits:
Reduced query execution time: Fewer micro-partitions need to be scanned.
Improved resource utilization: More efficient data retrieval leads to lower compute costs.
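Before and after adding the key, clustering quality can be checked with a system function (the table and column names continue the example above):
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(column1, column2)');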
References:
Snowflake Documentation: Clustering Keys
Snowflake Documentation: Query Profile
Which command can be used to list all network policies available in an account?
DESCRIBE SESSION POLICY
DESCRIBE NETWORK POLICY
SHOW SESSION POLICIES
SHOW NETWORK POLICIES
To list all network policies available in an account, the correct command is SHOW NETWORK POLICIES. Network policies in Snowflake are used to define and enforce rules for how users can connect to Snowflake, including IP whitelisting and other connection requirements. The SHOW NETWORK POLICIES command provides a list of all network policies defined within the account, along with their details.
The DESCRIBE NETWORK POLICY command describes the properties of a single, named policy rather than listing all policies in the account. Session policies are a separate feature for controlling session behavior, so DESCRIBE SESSION POLICY and SHOW SESSION POLICIES do not apply to network policy management.
Using SHOW NETWORK POLICIES without any additional parameters will display all network policies in the account, which is useful for administrators to review and manage the security configurations pertaining to network access.
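For context, a hedged sketch (the policy name and IP range are illustrative) of defining a policy and then listing all policies:
CREATE NETWORK POLICY corp_access ALLOWED_IP_LIST = ('192.168.1.0/24');
SHOW NETWORK POLICIES;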
Which actions can be performed using a resource monitor in Snowflake? (Select TWO).
Monitor the performance of individual queries in real time.
Automatically allocate more storage space to a virtual warehouse.
Modify the queries being executed within a virtual warehouse.
Suspend a virtual warehouse when its credit usage reaches a defined limit.
Trigger a notification to account administrators when credit usage reaches a specified threshold.
Resource monitors in Snowflake can perform actions such as suspending a virtual warehouse when its credit usage reaches a defined limit and triggering a notification to account administrators when credit usage reaches a specified threshold. These actions help manage and control resource usage and costs within Snowflake.
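A minimal sketch combining both actions (the monitor name, quota, thresholds, and warehouse name are illustrative):
CREATE RESOURCE MONITOR monthly_limit WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY     -- notify account administrators at 80% of the quota
           ON 100 PERCENT DO SUSPEND;  -- suspend assigned warehouses at 100%
ALTER WAREHOUSE my_wh SET RESOURCE_MONITOR = monthly_limit;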
References:
Snowflake Documentation: Resource Monitors
What objects can be cloned within Snowflake? (Select TWO).
Schemas
Users
External tables
Internal named stages
External named stages
In Snowflake, cloning is available for certain types of objects, allowing quick duplication without copying data:
Schemas: These can be cloned, enabling users to replicate entire schema structures, including tables and views, for development or testing.
External named stages: Because they hold only configuration pointing to an external location, with no data files stored inside Snowflake, they can be cloned.
Users cannot be cloned, external tables cannot be cloned, and internal named stages cannot be cloned, because the data files they contain are not copied by a clone operation.
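A hedged sketch of both clonable cases (the object names are hypothetical):
CREATE SCHEMA analytics_dev CLONE analytics;       -- clones the tables, views, etc. within the schema
CREATE STAGE my_ext_stage_dev CLONE my_ext_stage;  -- valid for external named stages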
Which Snowflake table type persists until it is explicitly dropped, is available to all users with relevant privileges (across sessions), and has no Fail-safe period?
External
Permanent
Temporary
Transient
The type of Snowflake table that persists until it is explicitly dropped, is available for all users with relevant privileges across sessions, and does not have a Fail-safe period, is a Transient table. Transient tables are designed to provide temporary storage similar to permanent tables but with some reduced storage costs and without the Fail-safe feature, which provides additional data protection for a period beyond the retention time. Transient tables are useful in scenarios where data needs to be temporarily stored for longer than a session but does not require the robust durability guarantees of permanent tables.
Which Snowsight feature can be used to perform data manipulations and transformations using a programming language?
SnowSQL
Dashboards
Python worksheets
Provider Studio
Python worksheets in Snowsight enable users to perform data manipulations and transformations using the Python programming language directly within the Snowflake environment. This feature integrates the power of Python with Snowflake's data warehousing capabilities, allowing for sophisticated data analysis and manipulation.
Introduction to Python Worksheets:
Python worksheets provide an interactive environment to write and execute Python code.
They are designed to facilitate data science and data engineering tasks.
Functionality:
Users can run Python scripts to manipulate data stored in Snowflake.
It allows for leveraging Python's extensive libraries for data analysis, machine learning, and more.
Integration with Snowflake:
Python worksheets run on Snowflake's compute infrastructure, ensuring scalability and performance.
They can access and manipulate Snowflake tables directly, making them a powerful tool for data transformation.
References:
Snowflake Documentation: Snowsight Python Worksheets
Which service or tool is a Command Line Interface (CLI) client used for connecting to Snowflake to execute SQL queries?
Snowsight
SnowCD
Snowpark
SnowSQL
SnowSQL is the Command Line Interface (CLI) client provided by Snowflake for executing SQL queries and performing various tasks. It allows users to connect to their Snowflake accounts and interact with the Snowflake data warehouse.
Installation: SnowSQL can be downloaded and installed on various operating systems.
Configuration: Users need to configure SnowSQL with their Snowflake account credentials.
Usage: Once configured, users can run SQL queries, manage data, and perform administrative tasks through the CLI.
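A typical invocation looks like the following (the account identifier, user, and object names are placeholders):
snowsql -a myorg-myaccount -u alice -d my_db -s public -q "SELECT CURRENT_VERSION();"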
References:
Snowflake Documentation: SnowSQL
Snowflake Documentation: Installing SnowSQL
How does Snowflake handle the data retention period for a table if a stream has not been consumed?
The data retention period is reduced to a minimum of 14 days.
The data retention period is permanently extended for the table.
The data retention period is temporarily extended to the stream's offset.
The data retention period is not affected by the stream consumption.
In Snowflake, the use of streams impacts how the data retention period for a table is handled, particularly when the stream has not been consumed. The key point is that streams capture data manipulation language (DML) changes, such as INSERTs, UPDATEs, and DELETEs, that occur on a source table. A stream tracks an offset into this change history until the stream is consumed by a DML statement that uses it as a source.
When a stream is created on a table and remains unconsumed, Snowflake extends the data retention period of the table to ensure that the changes captured by the stream are preserved. This extension is specifically up to the point in time represented by the stream's offset, which effectively ensures that the data necessary for consuming the stream's contents is retained. This mechanism is in place to prevent data loss and ensure the integrity of the stream's data, facilitating accurate and reliable data processing and analysis based on the captured DML changes.
This behavior emphasizes the importance of managing streams and their consumption appropriately to balance between data retention needs and storage costs. It's also crucial to understand how this temporary extension of the data retention period impacts the overall management of data within Snowflake, including aspects related to data lifecycle, storage cost implications, and the planning of data consumption strategies.
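A hedged sketch of the lifecycle (the table, stream, target, and column names are hypothetical); until the final INSERT consumes the stream, the source table's retention is extended to cover the stream's offset:
CREATE STREAM orders_stream ON TABLE orders;
SELECT SYSTEM$STREAM_HAS_DATA('orders_stream');  -- returns TRUE while unconsumed changes remain
INSERT INTO orders_audit                         -- DML that reads the stream consumes it,
SELECT order_id, amount FROM orders_stream;      -- advancing the offset past the captured changes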
References:
Snowflake Documentation on Streams: Using Streams
Snowflake Documentation on Data Retention: Understanding Data Retention
What does Snowflake recommend for a user assigned the ACCOUNTADMIN role?
The ACCOUNTADMIN role should be set as the user's default role.
The user should use federated authentication instead of a password.
The user should be required to use Multi-Factor Authentication (MFA).
There should be just one user with the ACCOUNTADMIN role in each Snowflake account.
For users assigned the ACCOUNTADMIN role, Snowflake recommends enforcing Multi-Factor Authentication (MFA) to enhance security. The ACCOUNTADMIN role has extensive permissions, making it crucial to secure accounts held by such users against unauthorized access. MFA adds an additional layer of security by requiring a second form of verification beyond just the username and password, significantly reducing the risk of account compromise. References: Snowflake Security Best Practices
What happens to the privileges granted to Snowflake system-defined roles?
The privileges cannot be revoked.
The privileges can be revoked by an ACCOUNTADMIN.
The privileges can be revoked by an ORGADMIN.
The privileges can be revoked by any user-defined role with appropriate privileges.
The privileges granted to Snowflake's system-defined roles cannot be revoked. System-defined roles, such as SYSADMIN, ACCOUNTADMIN, SECURITYADMIN, and others, come with a set of predefined privileges that are essential for the roles to function correctly within the Snowflake environment. These privileges are intrinsic to the roles and ensure that users assigned these roles can perform the necessary tasks and operations relevant to their responsibilities.
The design of Snowflake's role-based access control (RBAC) model ensures that system-defined roles have a specific set of non-revocable privileges to maintain the security, integrity, and operational efficiency of the Snowflake environment. This approach prevents accidental or intentional modification of privileges that could disrupt the essential functions or compromise the security of the Snowflake account.
References:
Snowflake Documentation on Access Control: Understanding Role-Based Access Control (RBAC)
A clustering key was defined on a table, but it is no longer needed. How can the key be removed?
ALTER TABLE