Mock Test 1 Flashcards

1
Q

Tables and views protected by row access policies cannot be used with the Search Optimization Service.

A
2
Q

Shares are named Snowflake objects that encapsulate all of the information required to share a database. Each share consists of:

The privileges that grant access to the database(s) and the schema containing the objects to share.

The privileges that grant access to the specific objects in the database.

The consumer accounts with which the database and its objects are shared.
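
For example, a provider-side share might be set up like this (a minimal sketch; the share, database, schema, table, and account names are hypothetical):

CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;  -- add the consumer account(s)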

A
3
Q

If you have an EMPLOYEE table, you can run either of these queries to find the clustering depth:

SELECT SYSTEM$CLUSTERING_INFORMATION('EMPLOYEE');
SELECT SYSTEM$CLUSTERING_DEPTH('EMPLOYEE');

A
4
Q

Pattern matching using a regular expression is generally the slowest of the three options for identifying/specifying data files to load from a stage; however, this option works well if you exported your files in named order from your external application and want to batch load the files in the same order
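
For example (a sketch; the table, stage, and file-name pattern are hypothetical):

COPY INTO mytable
  FROM @my_stage
  PATTERN = '.*sales_[0-9]+[.]csv';  -- regex applied to the file paths in the stage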

A
5
Q

The SHOW PARAMETERS command determines whether a network policy is set on the account or for a specific user.

For the account level: SHOW PARAMETERS LIKE 'network_policy' IN ACCOUNT;

For the user level: SHOW PARAMETERS LIKE 'network_policy' IN USER <username>;

Example: SHOW PARAMETERS LIKE 'network_policy' IN USER john;

A
6
Q

A user’s default role is the role their session is set to each time they log in to Snowflake. Snowflake uses roles to control the objects (virtual warehouses, databases, tables, etc.) that users can access:

Snowflake provides a set of predefined roles, as well as a framework for defining a hierarchy of custom roles.

All Snowflake users are automatically assigned the predefined PUBLIC role, which enables login to Snowflake and basic object access.

In addition to the PUBLIC role, each user can be assigned additional roles, with one of these roles designated as their default role.

A user’s default role determines the role used in the Snowflake sessions initiated by the user; however, this is only a default. Users can change roles within a session at any time.

Roles can be assigned at user creation or afterward.
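
For example (a sketch; the user and role names are hypothetical):

CREATE USER john DEFAULT_ROLE = analyst;
GRANT ROLE analyst TO USER john;  -- the default role must also be granted, or it cannot be assumed
ALTER USER john SET DEFAULT_ROLE = sysadmin;  -- change the default afterward
USE ROLE analyst;  -- a user can switch roles within a session at any time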

A
7
Q

Secure Data Sharing enables sharing selected objects in a database in your account with other Snowflake accounts. The following Snowflake database objects can be shared:

Tables

External tables

Secure views

Secure materialized views

Secure UDFs

Snowflake enables the sharing of databases through shares created by data providers and “imported” by data consumers.
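
On the consumer side, the share is imported by creating a database from it (a sketch; the provider account and share names are hypothetical):

CREATE DATABASE shared_sales FROM SHARE provider_account.sales_share;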

A
8
Q

After a specified period of time (defined by the IdP), a user’s session in the IdP automatically times out, but this does not affect their Snowflake sessions. Any Snowflake sessions that are active at the time remain open and do not require re-authentication. However, to initiate any new Snowflake sessions, the user must log into the IdP again.

A
9
Q

When a Snowflake warehouse cannot fit an operation in memory, it starts spilling (storing) data, first to the local disk of a warehouse node and then to remote storage. Because spilling means extra I/O operations, any query that requires spilling will take longer than a similar query running on similar data whose operations fit in memory. If the local disk is insufficient to hold the spilled data, Snowflake writes to remote cloud storage, which is shown in the query profile as “Bytes spilled to remote storage”.

Spilling cannot always be avoided, especially for large batches of data, but it can be reduced by:

Reducing the amount of data processed. For example, by trying to improve partition pruning or projecting only the columns that are needed in the output.

Decreasing the number of parallel queries running in the warehouse.

Trying to split the processing into several steps (for example, by replacing the CTEs with temporary tables).

Using a larger warehouse - effectively means more memory and more local disk space.
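
Queries that spill can be identified from the Account Usage QUERY_HISTORY view (a sketch; it assumes access to the shared SNOWFLAKE database):

SELECT query_id,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_remote_storage > 0
ORDER BY bytes_spilled_to_remote_storage DESC;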

A
10
Q

The AUTO_REFRESH_REGISTRATION_HISTORY table function can be used to query the history of data files registered in the metadata of specified objects and the credits billed for these operations. The table function returns the billing history within a specified date range for your entire Snowflake account, limited to the last 14 days.

Note that the STAGE_DIRECTORY_FILE_REGISTRATION_HISTORY table function can be used to query information about the metadata history for a directory table, including:

Files added or removed automatically as part of a metadata refresh.

Any errors found when refreshing the metadata.
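
For example (a sketch; the argument values are hypothetical):

SELECT *
FROM TABLE(information_schema.auto_refresh_registration_history(
  date_range_start => DATEADD('day', -14, CURRENT_TIMESTAMP()),
  object_type => 'EXTERNAL_TABLE'));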

A
11
Q

GET_ABSOLUTE_PATH returns the absolute path of a staged file using the stage name and path of the file relative to its location in the stage as inputs.
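
For example (a sketch; the stage and relative file path are hypothetical):

SELECT GET_ABSOLUTE_PATH(@my_stage, 'folder/report.pdf');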

A
12
Q

User-managed tasks are recommended when you can fully utilize a single warehouse by scheduling multiple concurrent tasks to take advantage of available compute resources, and when adherence to the schedule interval is less critical.

Serverless tasks are recommended when you cannot fully utilize a warehouse because too few tasks run concurrently or they run to completion quickly (in less than 1 minute), and when adherence to the schedule interval is critical.
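
The difference shows up in CREATE TASK (a sketch; the task, warehouse, and table names are hypothetical):

-- User-managed: the task runs on a warehouse you specify.
CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  INSERT INTO audit_log VALUES (CURRENT_TIMESTAMP());

-- Serverless: omit WAREHOUSE; Snowflake manages the compute.
CREATE TASK my_serverless_task
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  SCHEDULE = '5 MINUTE'
AS
  INSERT INTO audit_log VALUES (CURRENT_TIMESTAMP());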

A
13
Q

Users with the ACCOUNTADMIN role can view the billing for Automatic Clustering using Snowsight, the classic web interface, or SQL:

Snowsight: Select Admin » Usage.

Classic web interface: Click the Account tab » Billing & Usage.

The billing for Automatic Clustering shows up as a separate Snowflake-provided warehouse named AUTOMATIC_CLUSTERING.

SQL: Query either the AUTOMATIC_CLUSTERING_HISTORY table function (in the Snowflake Information Schema) or the AUTOMATIC_CLUSTERING_HISTORY view (in Account Usage).
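
For example (a sketch; the date range and table name are hypothetical):

SELECT *
FROM TABLE(information_schema.automatic_clustering_history(
  date_range_start => DATEADD('day', -7, CURRENT_DATE()),
  table_name => 'MYDB.PUBLIC.EMPLOYEE'));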

A
14
Q

Clustering keys are not for every table. Tables in the multi-terabyte range are good candidates for clustering keys. Both automatic clustering and reclustering consume credits. A single clustering key can contain one or more columns or expressions. Snowflake recommends a maximum of three or four columns (or expressions) per key for most tables; adding more than 3-4 columns tends to increase costs more than benefits.
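
For example, a clustering key can be defined at creation or added later (a sketch; the table and column names are hypothetical):

CREATE TABLE sales (order_date DATE, region VARCHAR, amount NUMBER)
  CLUSTER BY (order_date, region);

ALTER TABLE sales CLUSTER BY (order_date, region);  -- add or change the key on an existing table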

A
15
Q

VALIDATION_MODE instructs the COPY command to validate the data files instead of loading them into the specified table; i.e., the COPY command tests the files for errors but does not load them.

The command validates the data to be loaded and returns results based on the validation option specified:

Syntax: VALIDATION_MODE = RETURN_n_ROWS | RETURN_ERRORS | RETURN_ALL_ERRORS

RETURN_n_ROWS (e.g. RETURN_10_ROWS) - Validates the specified number of rows, if no errors are encountered; otherwise, fails at the first error encountered in the rows.

RETURN_ERRORS - Returns all errors (parsing, conversion, etc.) across all files specified in the COPY statement.

RETURN_ALL_ERRORS - Returns all errors across all files specified in the COPY statement, including files with errors that were partially loaded during an earlier load because the ON_ERROR copy option was set to CONTINUE during the load.
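
For example (a sketch; the table and stage names are hypothetical):

COPY INTO mytable FROM @my_stage
  VALIDATION_MODE = 'RETURN_ERRORS';  -- validate only; nothing is loaded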

A
16
Q

Expiration period of a scoped URL: the URL expires when the persisted query result period ends.

Expiration period of a file URL: permanent.

Expiration period of a pre-signed URL: the length of time specified in the expiration_time argument.

A
17
Q

File URL: URL that identifies the database, schema, stage, and file path to a set of files. A role that has sufficient privileges on the stage can access the files. Ideal for custom applications that require access to unstructured data files.

Scoped URL: Encoded URL that permits temporary access to a staged file without granting privileges to the stage. The URL expires when the persisted query result period ends (i.e., the results cache expires), which is currently 24 hours. Ideal for use in custom applications, providing unstructured data to other accounts via a share, or for downloading and ad hoc analysis of unstructured data via Snowsight.

Pre-signed URL: Simple HTTPS URL used to access a file via a web browser. A file is temporarily accessible to users via this URL using a pre-signed access token. The expiration time for the access token is configurable. Ideal for business intelligence applications or reporting tools that need to display unstructured file contents.
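
Each URL type has a corresponding function (a sketch; the stage and file path are hypothetical):

SELECT BUILD_STAGE_FILE_URL(@my_stage, 'report.pdf');    -- file URL
SELECT BUILD_SCOPED_FILE_URL(@my_stage, 'report.pdf');   -- scoped URL
SELECT GET_PRESIGNED_URL(@my_stage, 'report.pdf', 3600); -- pre-signed URL, expires in 3600 seconds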

A
18
Q

INFORMATION_SCHEMA can be used to find out about storage, compute, and objects in a Snowflake account. Every database that you create on Snowflake has a schema called INFORMATION_SCHEMA that’s automatically created, and inside that schema, you can find views and table functions that provide metadata information about objects in your account.
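
For example (a sketch; the database name is hypothetical):

SELECT table_name, row_count, bytes
FROM mydb.information_schema.tables
WHERE table_schema = 'PUBLIC';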

A
19
Q

Time Travel cannot be disabled for an account. A user with the ACCOUNTADMIN role can set DATA_RETENTION_TIME_IN_DAYS to 0 at the account level, which means that all databases (and subsequently all schemas and tables) created in the account have no retention period by default; however, this default can be overridden at any time for any database, schema, or table.
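
For example (a sketch; the table name is hypothetical):

ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 0;  -- requires ACCOUNTADMIN
ALTER TABLE mydb.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 1;  -- override per object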

A
20
Q

The cloud services layer is a collection of services that coordinate activities across Snowflake. These services tie together all of the different components of Snowflake in order to process user requests, from login to query dispatch.

The cloud services layer manages authentication, infrastructure management, metadata management, query parsing and optimization, and access control.

A