
Snowflake ADA-C01 SnowPro Advanced Administrator Online Training

Question #1

When a role is dropped, which role inherits ownership of objects owned by the dropped role?

  • A . The SYSADMIN role
  • B . The role above the dropped role in the RBAC hierarchy
  • C . The role executing the command
  • D . The SECURITYADMIN role

Correct Answer: B

Explanation:

According to the Snowflake documentation1, when a role is dropped, ownership of all objects owned by the dropped role is transferred to the role that is directly above the dropped role in the role hierarchy. This is to ensure that there is always a single owner for each object in the system.

1: Drop Role | Snowflake Documentation
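
As a minimal, hypothetical sketch (the role and table names are placeholders), the transfer can be observed by dropping a role and then inspecting the grants on an object it owned:

USE ROLE SECURITYADMIN;
DROP ROLE reporting_analyst;
SHOW GRANTS ON TABLE sales.public.orders;  -- OWNERSHIP now appears on the role above the dropped role in the hierarchy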

Question #2

Company A uses Snowflake to manage audio files of call recordings. Company A hired Company B, who also uses Snowflake, to transcribe the audio files for further analysis.

Company A’s Administrator created a share.

What object should be added to the share to allow Company B access to the files?

  • A . A secure view with a column for file URLs.
  • B . A secure view with a column for pre-signed URLs.
  • C . A secure view with a column for METADATA$FILENAME.
  • D . A secure view with a column for the stage name and a column for the file path.

Correct Answer: B

Explanation:

According to the Snowflake documentation1, pre-signed URLs are required to access external files in a share. A secure view can be used to generate pre-signed URLs for the audio files stored in an external stage and expose them to the consumer account.

Option A is incorrect because file URLs alone are not sufficient to access external files in a share.

Option C is incorrect because METADATA$FILENAME only returns the file name, not the full path or URL.

Option D is incorrect because the stage name and file path are not enough to generate pre-signed URLs.
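
A minimal sketch of such a view is shown below; the stage and share names are hypothetical, and it assumes the audio files sit in a stage with a directory table enabled:

CREATE SECURE VIEW call_recordings_v AS
  SELECT relative_path,
         GET_PRESIGNED_URL(@audio_stage, relative_path) AS presigned_url
  FROM DIRECTORY(@audio_stage);

GRANT SELECT ON VIEW call_recordings_v TO SHARE company_b_share;  -- the share also needs usage on the database and schema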

Question #3

A retailer uses a TRANSACTIONS table (100M rows, 1.2 TB) that has been clustered by the STORE_ID column (varchar (50)). The vast majority of analyses on this table are grouped by STORE_ID to look at store performance.

There are 1000 stores operated by the retailer but most sales come from only 20 stores. The Administrator notes that most queries are currently experiencing poor pruning, with large amounts of bytes processed by even simple queries.

Why is this occurring?

  • A . The STORE_ID should be numeric.
  • B . The table is not big enough to take advantage of the clustering key.
  • C . Sales across stores are not uniformly distributed.
  • D . The cardinality of the stores to transaction count ratio is too low to use the STORE_ID as a clustering key.

Correct Answer: C

Explanation:

According to the Snowflake documentation1, clustering keys are most effective when the data is evenly distributed across the key values. If the data is skewed, such as in this case where most sales come from only 20 stores out of 1000, then the micro-partitions will not be well-clustered and the pruning will be poor. This means that more bytes will be scanned by queries, even if they filter by STORE_ID.

Option A is incorrect because the data type of the clustering key does not affect the pruning.

Option B is incorrect because the table is large enough to benefit from clustering, if the data was more balanced.

Option D is incorrect because the cardinality of the clustering key is not relevant for pruning, as long as the key values are distinct.

1: Considerations for Choosing Clustering for a Table | Snowflake Documentation
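
Skew of this kind can be confirmed with the clustering information function, for example:

SELECT SYSTEM$CLUSTERING_INFORMATION('TRANSACTIONS', '(STORE_ID)');  -- inspect average depth and partition overlap for the clustering key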

Question #4

A team is provisioning new lower environments from the production database using cloning. All production objects and references reside in the database, and do not have external references.

What set of object references needs to be re-pointed before granting access for usage?

  • A . Sequences, views, and secure views
  • B . Sequences, views, secure views, and materialized views
  • C . Sequences, storage integrations, views, secure views, and materialized views
  • D . There are no object references that need to be re-pointed

Correct Answer: D

Explanation:

When cloning is used to provision new lower environments from a production database in Snowflake, all objects (including sequences, views, secure views, and materialized views) will have the same state as in the production. The clone operation creates a fully independent copy of the database objects, meaning all references within the clone will already point to the corresponding objects within the cloned environment and do not need to be re-pointed. No object references need to be re-pointed unless those objects referenced external databases or schemas.

Therefore, if all objects and references reside within the database and do not have external references, no object references need to be re-pointed before granting access for usage.
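
A minimal sketch of the provisioning step (database and role names are hypothetical):

CREATE DATABASE dev_db CLONE prod_db;             -- zero-copy clone; internal references remain self-contained
GRANT USAGE ON DATABASE dev_db TO ROLE developer;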

Question #5

Which function is the role SECURITYADMIN responsible for that is not granted to role USERADMIN?

  • A . Reset a Snowflake user’s password
  • B . Manage system grants
  • C . Create new users
  • D . Create new roles

Correct Answer: B

Explanation:

According to the Snowflake documentation1, the SECURITYADMIN role is responsible for managing all grants on objects in the account, including system grants. The USERADMIN role can only create and manage users and roles, but not grant privileges on other objects. Therefore, the function that is unique to the SECURITYADMIN role is to manage system grants.

Option A is incorrect because both roles can reset a user’s password.

Option C is incorrect because both roles can create new users.

Option D is incorrect because both roles can create new roles.
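
For reference, the capability behind "manage system grants" corresponds to the global MANAGE GRANTS privilege, which can be delegated explicitly (a sketch; the role name is hypothetical):

GRANT MANAGE GRANTS ON ACCOUNT TO ROLE delegated_grant_admin;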

Question #6

An Administrator has a table named SALES_DATA which needs some edits, but the Administrator does not want to change the main table data. The Administrator decides to make a transient copy of this table and wants the transient table to have all the same permissions as the original table.

How can the Administrator create the transient table so it inherits the same permissions as the original table, and what considerations need to be made concerning the requirements? (Select TWO).

  • A . Use the following SQL command:
    create transient table TRANSIENT_SALES_DATA as select * from SALES_DATA;
  • B . Use the following SQL command:
    create transient table TRANSIENT_SALES_DATA as select * from SALES_DATA copy grants;
  • C . Use the following SQL commands:
    create transient table TRANSIENT_SALES_DATA like SALES_DATA copy grants; insert into TRANSIENT_SALES_DATA select * from SALES_DATA;
  • D . Transient tables will persist until explicitly dropped and contribute to overall storage costs.
  • E . Transient tables will be purged at the end of the user session and do not have any Fail-safe period.

Correct Answer: BD

Explanation:

According to the Snowflake documentation1, the COPY GRANTS option can be used to copy all privileges, except OWNERSHIP, from the existing table to the new transient table. This option also preserves any future grants defined for the object type in the schema.

Option A is incorrect because it does not copy any grants from the original table.

Option C is not the preferred approach because it takes two statements: CREATE TABLE ... LIKE ... COPY GRANTS copies only the structure and grants, and a separate INSERT is then needed to copy the data, whereas a single CTAS with COPY GRANTS does both at once.

Option E is incorrect because transient tables are not session-based and do not have a Fail-safe period, but they do have a Time Travel retention period2.

1: CREATE TABLE | Snowflake Documentation

2: Working with Temporary and Transient Tables | Snowflake Documentation
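
After running the CTAS with COPY GRANTS, the inherited privileges can be verified with a quick check, for example:

SHOW GRANTS ON TABLE TRANSIENT_SALES_DATA;  -- lists the privileges copied from SALES_DATA (OWNERSHIP is not copied)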

Question #7

Which actions are considered breaking changes to data that is shared with consumers in the Snowflake Marketplace? (Select TWO).

  • A . Dropping a column from a table
  • B . Deleting data from a table
  • C . Unpublishing the data listing
  • D . Renaming a table
  • E . Adding region availability to the listing

Correct Answer: AD

Explanation:

According to the Snowflake documentation1, breaking changes are changes that affect the schema or structure of the shared data, such as dropping or renaming a column or a table. These changes may cause errors or unexpected results for the consumers who query the shared data. Deleting data from a table, unpublishing the data listing, or adding region availability to the listing are not breaking changes, as they do not alter the schema or structure of the shared data.

1: Managing Data Listings in Snowflake Data Marketplace | Snowflake Documentation

Question #8

What are the MINIMUM grants required on the database, schema, and table for a stream to be properly created and managed?

  • A . Database: Usage
    Schema: Usage
    Table: Select, Create Stream
  • B . Database: Usage
    Schema: Usage
    Table: Select
  • C . Database: Usage, Create Stream
    Schema: Usage
    Table: Select
  • D . Database: Usage
    Schema: Usage, Create Stream
    Table: Select

Correct Answer: D
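
For reference, a minimal sketch of the grants described in option D (database, schema, table, and role names are hypothetical):

GRANT USAGE ON DATABASE analytics TO ROLE stream_builder;
GRANT USAGE, CREATE STREAM ON SCHEMA analytics.raw TO ROLE stream_builder;
GRANT SELECT ON TABLE analytics.raw.orders TO ROLE stream_builder;
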
Question #9

An Administrator has been asked to support the company’s application team, which needs to build a loyalty program for its customers. The customer table contains Personally Identifiable Information (PII), and the application team’s role is DEVELOPER.

CREATE TABLE customer_data (

customer_first_name string,

customer_last_name string,

customer_address string,

customer_email string,

… some other columns,

);

The application team would like to access the customer data, but the email field must be obfuscated.

How can the Administrator protect the sensitive information, while maintaining the usability of the data?

  • A . Create a view on the customer_data table to eliminate the email column by omitting it from the SELECT clause. Grant the role DEVELOPER access to the view.
  • B . Create a separate table for all the non-PII columns and grant the role DEVELOPER access to the new table.
  • C . Use the CURRENT_ROLE and CURRENT_USER context functions to integrate with a secure view and filter the sensitive data.
  • D . Use the CURRENT_ROLE context function to integrate with a masking policy on the fields that contain sensitive data.

Correct Answer: D
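
A minimal sketch of such a masking policy on the email column (the masking expression is an assumption, not the only valid rule):

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'DEVELOPER' THEN REGEXP_REPLACE(val, '.+@', '*****@')  -- obfuscate the local part for DEVELOPER
    ELSE val
  END;

ALTER TABLE customer_data MODIFY COLUMN customer_email SET MASKING POLICY email_mask;
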
Question #10

An organization’s sales team leverages this Snowflake query a few times a day:

SELECT CUSTOMER_ID, CUSTOMER_NAME, ADDRESS, PHONE_NO FROM CUSTOMERS WHERE LAST_UPDATED BETWEEN TO_DATE(CURRENT_TIMESTAMP) AND (TO_DATE(CURRENT_TIMESTAMP) - 7);

What can the Snowflake Administrator do to optimize the use of persisted query results whenever possible?

  • A . Wrap the query in a User-Defined Function (UDF) to match syntax execution.
  • B . Assign everyone on the sales team to the same virtual warehouse.
  • C . Assign everyone on the sales team to the same security role.
  • D . Leverage the CURRENT_DATE function for date calculations.

Correct Answer: D

Explanation:

According to the Using Persisted Query Results documentation, one of the factors that affects the reuse of persisted query results is the exact match of the query syntax1. If the query contains functions that return different values for successive runs, such as CURRENT_TIMESTAMP, then the query will not match the previous query and will not benefit from the result cache. To avoid this, the query should use functions that return a consistent value for the same day, such as CURRENT_DATE, which returns the current date without a time component2.

Option A is incorrect because wrapping the query in a UDF does not guarantee the syntax match, as the UDF may also contain dynamic functions.

Option B is incorrect because the virtual warehouse does not affect the persisted query results, which are stored at the account level1.

Option C is incorrect because the security role does not affect the persisted query results, as long as the role has the necessary privileges to access the tables and views used in the query1.

1: Using Persisted Query Results | Snowflake Documentation

2: Date and Time Functions | Snowflake Documentation
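
For example, the query could be rewritten as follows (a sketch; the BETWEEN bounds are also ordered from older to newer so the predicate matches rows):

SELECT CUSTOMER_ID, CUSTOMER_NAME, ADDRESS, PHONE_NO
FROM CUSTOMERS
WHERE LAST_UPDATED BETWEEN CURRENT_DATE - 7 AND CURRENT_DATE;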

Question #11

Which tasks can be performed by the ORGADMIN role? (Select THREE).

  • A . Create one or more accounts in the organization.
  • B . View a list of all regions enabled for the organization.
  • C . Create secure views on application tables within the organization.
  • D . View usage information for all accounts in the organization.
  • E . Perform zero-copy cloning on account data.
  • F . Create a reader account to share data with another organization.

Correct Answer: ABD

Explanation:

A user with the ORGADMIN role can perform the following tasks1:

• Create one or more accounts in the organization.

• View a list of all regions enabled for the organization.

• View usage information for all accounts in the organization.

Option C is incorrect because creating secure views on application tables is not a function of the ORGADMIN role, but rather a function of the roles that have access to the tables and schemas within the accounts.

Option E is incorrect because performing zero-copy cloning on account data is not a function of the ORGADMIN role, but rather a function of the roles that have the CLONE privilege on the objects within the accounts.

Option F is incorrect because creating a reader account to share data with another organization is not a function of the ORGADMIN role, but rather a function of the roles that have the CREATE SHARE privilege on the objects within the accounts.
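
A minimal sketch of these ORGADMIN tasks (the usage view shown is one example from the SNOWFLAKE.ORGANIZATION_USAGE schema):

USE ROLE ORGADMIN;
SHOW ORGANIZATION ACCOUNTS;   -- accounts in the organization
SHOW REGIONS;                 -- regions enabled for the organization
SELECT * FROM SNOWFLAKE.ORGANIZATION_USAGE.METERING_DAILY_HISTORY LIMIT 10;  -- usage across accounts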

Question #12

What role or roles should be used to properly create the object required to set up OAuth 2.0 integration?

  • A . Any role with GRANT USAGE on SECURITY INTEGRATION
  • B . ACCOUNTADMIN and SYSADMIN
  • C . ACCOUNTADMIN and SECURITYADMIN
  • D . ACCOUNTADMIN only

Correct Answer: C

Explanation:

In Snowflake, setting up OAuth 2.0 integration typically requires two roles: ACCOUNTADMIN and SECURITYADMIN. The ACCOUNTADMIN role has the authority to create and manage the entire account, including security integrations. The SECURITYADMIN role is typically responsible for managing configurations related to security and authentication, including OAuth integrations.

Option A (Any role with GRANT USAGE on SECURITY INTEGRATION) might not be sufficient to create the necessary security integration objects. Option B (ACCOUNTADMIN and SYSADMIN) includes the SYSADMIN role, which is not necessary for setting up OAuth integrations. Option D (ACCOUNTADMIN only) might not encompass all permissions needed to create and manage security integrations, even though ACCOUNTADMIN is one of the highest privileged roles.

Therefore, the correct answer is that both ACCOUNTADMIN and SECURITYADMIN roles are required to properly create the objects needed for setting up OAuth 2.0 integration.
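
A minimal sketch of the object in question (all parameter values are placeholders):

USE ROLE ACCOUNTADMIN;
CREATE SECURITY INTEGRATION my_oauth_integration
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = CUSTOM
  OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
  OAUTH_REDIRECT_URI = 'https://example.com/oauth/callback';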

Question #13

The following SQL command was executed:

Use role SECURITYADMIN;

Grant ownership on future tables in schema PROD.WORKING to role PROD_WORKING_OWNER;

Grant role PROD_WORKING_OWNER to role SYSADMIN;

Use role ACCOUNTADMIN;

Create table PROD.WORKING.XYZ (value number);

Which role(s) can alter or drop table XYZ?

  • A . Because ACCOUNTADMIN created the table, only the ACCOUNTADMIN role can alter or drop table XYZ.
  • B . SECURITYADMIN, SYSADMIN, and ACCOUNTADMIN can alter or drop table XYZ.
  • C . PROD_WORKING_OWNER, ACCOUNTADMIN, and SYSADMIN can alter or drop table XYZ.
  • D . Only the PROD_WORKING_OWNER role can alter or drop table XYZ.

Correct Answer: C

Explanation:

According to the GRANT OWNERSHIP documentation, the ownership privilege grants full control over the table and can only be held by one role at a time. However, the current owner can also grant the ownership privilege to another role, which transfers the ownership to the new role. In this case, the SECURITYADMIN role granted the ownership privilege on future tables in the PROD.WORKING schema to the PROD_WORKING_OWNER role. This means that any table created in that schema after the grant statement will be owned by the PROD_WORKING_OWNER role. Therefore, the PROD_WORKING_OWNER role can alter or drop table XYZ, which was created by the ACCOUNTADMIN role in the PROD.WORKING schema. Additionally, the ACCOUNTADMIN role can also alter or drop table XYZ, because it is the top-level role that has all privileges on all objects in the account. Furthermore, the SYSADMIN role can also alter or drop table XYZ, because it was granted the PROD_WORKING_OWNER role by the SECURITYADMIN role. The SYSADMIN role can activate the PROD_WORKING_OWNER role and inherit its privileges, including the ownership privilege on table XYZ. The SECURITYADMIN role cannot alter or drop table XYZ, because it does not have the ownership privilege on the table, nor does it have the PROD_WORKING_OWNER role.
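
A short sketch that verifies this behavior:

SHOW GRANTS ON TABLE PROD.WORKING.XYZ;  -- OWNERSHIP is held by PROD_WORKING_OWNER

USE ROLE SYSADMIN;
DROP TABLE PROD.WORKING.XYZ;            -- succeeds because SYSADMIN inherits PROD_WORKING_OWNER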

Question #14

When adding secure views to a share in Snowflake, which function is needed to authorize users from another account to access rows in a base table?

  • A . CURRENT_ROLE
  • B . CURRENT_ACCOUNT
  • C . CURRENT_USER
  • D . CURRENT_CLIENT

Correct Answer: C

Explanation:

According to the Working with Secure Views documentation, secure views are designed to limit access to sensitive data that should not be exposed to all users of the underlying table(s). When sharing secure views with another account, the view definition must include a function that returns the identity of the user who is querying the view, such as CURRENT_USER, CURRENT_ROLE, or CURRENT_ACCOUNT. These functions can be used to filter the rows in the base table based on the user’s identity. For example, a secure view can use the CURRENT_USER function to compare the user name with a column in the base table that contains the authorized user names. Only the rows that match the user name will be returned by the view. The CURRENT_CLIENT function is not suitable for this purpose, because it returns the version of the client application connected to Snowflake, which is not related to the user’s identity.

Question #15

In which scenario will use of an external table simplify a data pipeline?

  • A . When accessing a Snowflake table from a relational database
  • B . When accessing a Snowflake table from an external database within the same region
  • C . When continuously writing data from a Snowflake table to external storage
  • D . When accessing a Snowflake table that references data files located in cloud storage

Correct Answer: D

Explanation:

According to the Introduction to External Tables documentation, an external table is a Snowflake feature that allows you to query data stored in an external stage as if the data were inside a table in Snowflake. The external stage is not part of Snowflake, so Snowflake does not store or manage the stage. This simplifies the data pipeline by eliminating the need to load the data into Snowflake before querying it. External tables can access data stored in any format that the COPY INTO <table> command supports, such as CSV, JSON, AVRO, ORC, or PARQUET. The other scenarios do not involve external tables, but rather require data loading, unloading, or federation.
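
A minimal sketch of such a table (the stage, path, and file format are hypothetical):

CREATE EXTERNAL TABLE sales_ext
  WITH LOCATION = @my_ext_stage/sales/
  AUTO_REFRESH = TRUE
  FILE_FORMAT = (TYPE = PARQUET);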

Question #16

A Snowflake user runs a complex SQL query on a dedicated virtual warehouse that reads a large amount of data from micro-partitions. The same user wants to run another query that uses the same data set.

Which action would provide optimal performance for the second SQL query?

  • A . Assign additional clusters to the virtual warehouse.
  • B . Increase the STATEMENT_TIMEOUT_IN_SECONDS parameter in the session.
  • C . Prevent the virtual warehouse from suspending between the running of the first and second queries.
  • D . Use the RESULT_SCAN function to post-process the output of the first query.

Correct Answer: C

Explanation:

When a user runs a complex SQL query on a dedicated virtual warehouse involving a large amount of data read from micro-partitions, the best action for optimal performance of a second query is to prevent the warehouse from suspending between the two queries. This ensures that the data cache (such as the result set cache and local disk cache) remains "warm," providing faster performance for the second query because it may be able to utilize these caches instead of reading the data from the micro-partitions again.

Option A (Assign additional clusters to the virtual warehouse) might not directly impact the performance of the second query unless the current size of the warehouse is already insufficient for the parallel processing needs. Option B (Increase the STATEMENT_TIMEOUT_IN_SECONDS parameter in the session) does not improve query performance; it simply increases the maximum time a query can run. Option D (Use the RESULT_SCAN function to post-process the output of the first query) is for accessing the cached results of a previous query but may not apply in this scenario, as it is for processing the exact same result set, not a new query based on the same data set.
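
One way to keep the warehouse, and therefore its local cache, from suspending between the two queries is to raise its auto-suspend threshold (a sketch; the warehouse name and value are assumptions):

ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 600;  -- seconds of inactivity before suspension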

Question #17

For Snowflake network policies, what will occur when the account_level and user_level network policies are both defined?

  • A . The account_level policy will override the user_level policy.
  • B . The user_level policy will override the account_level policy.
  • C . The user_level network policies will not be supported.
  • D . A network policy error will be generated with no definitions provided.

Correct Answer: B

Explanation:

According to the Network Policies documentation, a network policy can be applied to an account, a security integration, or a user. If there are network policies applied to more than one of these, the most specific network policy overrides more general network policies.

The following summarizes the order of precedence:

• Account: Network policies applied to an account are the most general network policies. They are overridden by network policies applied to a security integration or user.

• Security Integration: Network policies applied to a security integration override network policies applied to the account, but are overridden by a network policy applied to a user.

• User: Network policies applied to a user are the most specific network policies. They override both account-level and security integration-level policies.

Therefore, if both the account_level and user_level network policies are defined, the user_level policy will take effect and the account_level policy will be ignored. The other options are incorrect because:

• The account_level policy will not override the user_level policy, as explained above.

• The user_level network policies will be supported, as they are part of the network policy feature.

• A network policy error will not be generated, as there is no conflict between the account_level and user_level network policies.
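
A minimal sketch of the two levels (policy names, user name, and IP ranges are placeholders):

CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.1.0/24');
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;            -- account-level policy

CREATE NETWORK POLICY finance_policy ALLOWED_IP_LIST = ('10.10.0.0/24');
ALTER USER jsmith SET NETWORK_POLICY = finance_policy;     -- user-level policy, which takes precedence for that user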

Question #18

MY_TABLE is a table that has not been updated or modified for several days. On 01 January 2021 at 07:01, a user executed a query to update this table.

The query ID is ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’. It is now 07:30 on the same day.

Which queries will allow the user to view the historical data that was in the table before this query was executed? (Select THREE).

  • A . SELECT * FROM my_table WITH TIME_TRAVEL (OFFSET => -60*30);
  • B . SELECT * FROM my_table AT (TIMESTAMP => ‘2021-01-01 07:00:00’ :: timestamp);
  • C . SELECT * FROM TIME_TRAVEL (‘MY_TABLE’, 2021-01-01 07:00:00);
  • D . SELECT * FROM my_table PRIOR TO STATEMENT ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’;
  • E . SELECT * FROM my_table AT (OFFSET => -60*30);
  • F . SELECT * FROM my_table BEFORE (STATEMENT => ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’);

Correct Answer: BDF

Explanation:

According to the AT | BEFORE documentation, the AT or BEFORE clause is used for Snowflake Time Travel, which allows you to query historical data from a table based on a specific point in the past.

The clause can use one of the following parameters to pinpoint the exact historical data you wish to access:

• TIMESTAMP: Specifies an exact date and time to use for Time Travel.

• OFFSET: Specifies the difference in seconds from the current time to use for Time Travel.

• STATEMENT: Specifies the query ID of a statement to use as the reference point for Time Travel.

Therefore, the queries that will allow the user to view the historical data that was in the table before the query was executed are:

• B. SELECT * FROM my_table AT (TIMESTAMP => ‘2021-01-01 07:00:00’ :: timestamp); This query uses the TIMESTAMP parameter to specify a point in time that is before the query execution time of 07:01.

• D. SELECT * FROM my_table PRIOR TO STATEMENT ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’; This query uses the PRIOR TO STATEMENT keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.

• F. SELECT * FROM my_table BEFORE (STATEMENT => ‘8e5d0ca9-005e-44e6-b858-a8f5b37c5726’); This query uses the BEFORE keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.

The other queries are incorrect because:

• A. SELECT * FROM my_table WITH TIME_TRAVEL (OFFSET => -60*30); This is not valid Snowflake syntax. Time Travel is expressed with an AT or BEFORE clause after the table name, not with a WITH TIME_TRAVEL clause, so this query will not return the historical data.

• C. SELECT * FROM TIME_TRAVEL (‘MY_TABLE’, 2021-01-01 07:00:00); This query is not valid syntax for Time Travel. The TIME_TRAVEL function does not exist in Snowflake. The correct syntax is to use the AT or BEFORE clause after the table name in the FROM clause.

• E. SELECT * FROM my_table AT (OFFSET => -60*30); This query uses the AT keyword and the OFFSET parameter to specify a point in time that is 30 minutes before the current time, which is 07:30. This is equal to the query execution time of 07:01, so it will not show the historical data before the query was executed. The AT keyword specifies that the request is inclusive of any changes made by a statement or transaction with timestamp equal to the specified parameter. To exclude the changes made by the query, the BEFORE keyword should be used instead.

Question #19

What are characteristics of Dynamic Data Masking? (Select TWO).

  • A . A masking policy that is currently set on a table can be dropped.
  • B . A single masking policy can be applied to columns in different tables.
  • C . A masking policy can be applied to the VALUE column of an external table.
  • D . The role that creates the masking policy will always see unmasked data in query results.
  • E . A single masking policy can be applied to columns with different data types.

Correct Answer: BE

Explanation:

According to the Using Dynamic Data Masking documentation, Dynamic Data Masking is a feature that allows you to alter sections of data in table and view columns at query time using a predefined masking strategy.

The following are some of the characteristics of Dynamic Data Masking:

• A single masking policy can be applied to columns in different tables. This means that you can write a policy once and have it apply to thousands of columns across databases and schemas.

• A single masking policy can be applied to columns with different data types. This means that you can use the same masking strategy for columns that store different kinds of data, such as strings, numbers, dates, etc.

• In contrast, a masking policy that is currently set on a table cannot simply be dropped; it must first be unset from every column that references it, which is why option A is not a characteristic.

• A masking policy cannot be attached directly to the VALUE column of an external table; the documented workaround is to create a view over the external table and apply the policy to the view’s columns, which is why option C is not a characteristic.

• The role that creates the masking policy will not always see unmasked data in query results. The masking policy can also apply to the creator role, depending on the execution context conditions defined in the policy. For example, if the policy specifies that only users with a certain custom entitlement can see the unmasked data, then the creator role will also need that entitlement to see the unmasked data.
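
For example, a single policy can protect string columns in different tables (a sketch that assumes an existing STRING masking policy named pii_mask; table and column names are hypothetical):

ALTER TABLE crm.customers MODIFY COLUMN email SET MASKING POLICY pii_mask;
ALTER TABLE hr.employees MODIFY COLUMN work_email SET MASKING POLICY pii_mask;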

Question #20

A Snowflake Administrator needs to set up Time Travel for a presentation area that includes fact and dimension tables, and receives a lot of meaningless and erroneous IoT data. Time Travel is being used as a component of the company’s data quality process, in which the ingestion pipeline should revert to a known quality data state if any anomalies are detected in the latest load. Data from the past 30 days may have to be retrieved because of latencies in the data acquisition process.

According to best practices, how should these requirements be met? (Select TWO).

  • A . Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas.
  • B . The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_ DAYS.
  • C . The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas).
  • D . Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables.
  • E . The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data.

Correct Answer: BE

Explanation:

According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days.

To meet the requirements of the scenario, the following best practices should be followed:

• The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.

• The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.

The other options are incorrect because:

• Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.

• The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.

• Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as it does not affect the referential integrity between the tables. Transient tables are tables that do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends. However, they still support Time Travel within the retention period, and can be queried, cloned, and restored like permanent tables. The choice of table type depends on the data durability and availability requirements, not on the referential integrity.
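
A minimal sketch of the retention setting and a consistent clone (schema, table names, and the timestamp are placeholders):

ALTER TABLE presentation.fact_sales SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER TABLE presentation.dim_store SET DATA_RETENTION_TIME_IN_DAYS = 30;

CREATE SCHEMA presentation_restore CLONE presentation
  AT (TIMESTAMP => '2021-01-01 00:00:00'::timestamp);  -- clones the fact and dimension tables together at one point in time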

Question #25

INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID(-1)));

Correct Answer: C

Explanation:

According to the Using Persisted Query Results documentation, the RESULT_SCAN function allows you to query the result set of a previous command as if it were a table. The LAST_QUERY_ID function returns the query ID of the most recent statement executed in the current session. Therefore, the combination of these two functions can be used to access the output of the SHOW WAREHOUSES command, which returns the configurations of all the virtual warehouses in the account. However, to persist the warehouse data in JSON format in the table VWH_META, the OBJECT_CONSTRUCT function is needed to convert the output of the SHOW WAREHOUSES command into a VARIANT column. The OBJECT_CONSTRUCT function takes a list of key-value pairs and returns a single JSON object.

Therefore, the correct command sequence runs SHOW WAREHOUSES first and then inserts its result into VWH_META, as sketched below.
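
A minimal sketch of that sequence (it assumes VWH_META was created with a timestamp column followed by a VARIANT column, which is not shown in the question as captured here):

SHOW WAREHOUSES;

INSERT INTO VWH_META
  SELECT CURRENT_TIMESTAMP(), OBJECT_CONSTRUCT(*)
  FROM TABLE(RESULT_SCAN(LAST_QUERY_ID(-1)));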

Question #28

What are the requirements when creating a new account within an organization in Snowflake? (Select TWO).

  • A . The account requires at least one ORGADMIN role within one of the organization’s accounts.
  • B . The account name is immutable and cannot be changed.
  • C . The account name must be specified when the account is created.
  • D . The account name must be unique among all Snowflake customers.
  • E . The account name must be unique within the organization.

Correct Answer: CE

Explanation:

According to the CREATE ACCOUNT documentation, the account name must be specified when the account is created, and it must be unique within an organization, regardless of which Snowflake Region the account is in.

The other options are incorrect because:

• The account does not require at least one ORGADMIN role within one of the organization’s accounts. The account can be created by an organization administrator (i.e. a user with the ORGADMIN role) through the web interface or using SQL, but the new account does not inherit the ORGADMIN role from the existing account. The new account will have its own set of users, roles, databases, and warehouses.

• The account name is not immutable and can be changed. The account name can be modified by contacting Snowflake Support and requesting a name change. However, changing the account name may affect some features that depend on the account name, such as SSO or SCIM.

• The account name does not need to be unique among all Snowflake customers. The account name only needs to be unique within the organization, as the account URL also includes the region and cloud platform information. For example, two accounts with the same name can exist in different regions or cloud platforms, such as myaccount.us-east-1.snowflakecomputing.com and myaccount.eu-west-1.aws.snowflakecomputing.com.
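
A minimal sketch of the statement an organization administrator would run (all values are placeholders):

USE ROLE ORGADMIN;
CREATE ACCOUNT sales_dev
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'ChangeMe123!'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;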

Question #29

A Snowflake customer is experiencing higher costs than anticipated while migrating their data warehouse workloads from on-premises to Snowflake. The migration workloads have been deployed on a single warehouse and are characterized by a large number of small INSERTs rather than bulk loading of large extracts. That single warehouse has been configured as a single cluster, 2XL because there are many parallel INSERTs that are scheduled during nightly loads.

How can the Administrator reduce the costs, while minimizing the overall load times, for migrating data warehouse history?

  • A . There should be another 2XL warehouse deployed to handle a portion of the load queries.
  • B . The 2XL warehouse should be changed to 4XL to increase the number of threads available for parallel load queries.
  • C . The warehouse should be kept as a SMALL or XSMALL and configured as a multi-cluster warehouse to handle the parallel load queries.
  • D . The INSERTS should be converted to several tables to avoid contention on large tables that slows down query processing.

Correct Answer: C

Explanation:

According to the Snowflake Warehouse Cost Optimization blog post, one of the strategies to reduce the cost of running a warehouse is to use a multi-cluster warehouse with auto-scaling enabled. This allows the warehouse to automatically adjust the number of clusters based on the concurrency demand and the queue size. A multi-cluster warehouse can also be configured with a minimum and maximum number of clusters, as well as a scaling policy to control the scaling behavior. This way, the warehouse can handle the parallel load queries efficiently without wasting resources or credits. The blog post also suggests using a smaller warehouse size, such as SMALL or XSMALL, for loading data, as it can perform better than a larger warehouse size for small INSERTs. Therefore, the best option to reduce the costs while minimizing the overall load times for migrating data warehouse history is to keep the warehouse as a SMALL or XSMALL and configure it as a multi-cluster warehouse to handle the parallel load queries.

The other options are incorrect because:

• A. Deploying another 2XL warehouse to handle a portion of the load queries will not reduce the costs, but increase them. It will also introduce complexity and potential inconsistency in managing the data loading process across multiple warehouses.

• B. Changing the 2XL warehouse to 4XL will not reduce the costs, but increase them. It will also provide more compute resources than needed for small INSERTs, which are not CPU-intensive but I/O-intensive.

• D. Converting the INSERTs to several tables will not reduce the costs, but increase them. It will also create unnecessary data duplication and fragmentation, which will affect the query performance and data quality.
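
A minimal sketch of the recommended warehouse configuration (the specific values are assumptions; multi-cluster warehouses require Enterprise Edition or higher):

CREATE OR REPLACE WAREHOUSE load_wh
  WAREHOUSE_SIZE = 'XSMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 10
  SCALING_POLICY = 'STANDARD'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;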

Question #30

What roles or security privileges will allow a consumer account to request and get data from the Data Exchange? (Select TWO).

  • A . SYSADMIN
  • B . SECURITYADMIN
  • C . ACCOUNTADMIN
  • D . IMPORT SHARE and CREATE DATABASE
  • E . IMPORT PRIVILEGES and SHARED DATABASE

Correct Answer: CD

Explanation:

According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share.

The other options are incorrect because:

• A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SYSADMIN role is a pre-defined role that can create and manage warehouses, databases, and the objects within them, but it does not hold the account-level privileges reserved for the ACCOUNTADMIN role, such as managing users, roles, and shares.

• B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that can manage object grants globally and administer users and roles, but it does not have privileges on data objects, such as databases, schemas, and tables.

• E. The IMPORT PRIVILEGES and SHARED DATABASE are not valid privileges in Snowflake. The correct privilege names are IMPORT SHARE and CREATE DATABASE, as explained above.
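
A minimal sketch of the delegated privileges and their use (role, account, and share names are placeholders):

GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_consumer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;

USE ROLE data_consumer;
CREATE DATABASE shared_sales FROM SHARE provider_account.sales_share;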

Question #31

An Administrator wants to delegate the administration of a company’s data exchange to users who do not have access to the ACCOUNTADMIN role.

How can this requirement be met?

  • A . Grant imported privileges on data exchange EXCHANGE_NAME to ROLE_NAME;
  • B . Grant modify on data exchange EXCHANGE_NAME to ROLE_NAME;
  • C . Grant ownership on data exchange EXCHANGE_NAME to ROLE_NAME;
  • D . Grant usage on data exchange EXCHANGE_NAME to ROLE_NAME;

Correct Answer: D

Explanation:

In Snowflake, if an administrator wants to delegate the administration of the company’s data exchange to users who do not have the ACCOUNTADMIN role, they can grant those users USAGE permission on the data exchange. This allows the specified role to view the data exchange and request data without granting them the ability to modify or own it. This meets the requirement to delegate administration while maintaining appropriate separation of privileges.

Options A (Grant imported privileges on data exchange) and C (Grant ownership on data exchange) provide more permissions than necessary and may not be appropriate just for managing the data exchange. Option B (Grant modify on data exchange) might also provide unnecessary additional permissions, depending on what administrative tasks need to be performed. Typically, granting USAGE permission is the minimum necessary to meet basic administrative requirements.
