Which of the following objects can be cloned in Snowflake?
A. Permanent table
B. Transient table
C. Temporary table
D. External tables
E. Internal stages

Answer: A,B,C

Explanation:

For tables, Snowflake supports cloning permanent and transient tables; temporary tables can be cloned only to a temporary table or a transient table.

For databases and schemas, cloning is recursive:

Cloning a database clones all the schemas and other objects in the database.

Cloning a schema clones all the contained objects in the schema.

However, the following object types are not cloned:

External tables

Internal (Snowflake) stages
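For example, a minimal sketch of zero-copy cloning (object names are hypothetical):

-- Clone a permanent or transient table
CREATE TABLE employees_clone CLONE employees;

-- A temporary table can be cloned only to a temporary or transient table
CREATE TEMPORARY TABLE temp_clone CLONE my_temp_table;

-- Cloning a database is recursive, but any external tables and internal
-- stages inside it are skipped
CREATE DATABASE dev_db CLONE prod_db;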

Which command can be run to list all shares that have been created in your account or are available for your account to consume?
A. SHOW SHARES
B. LIST SHARES
C. DESCRIBE SHARES

Answer: A

Explanation:

SHOW SHARES

Lists all shares available in the system:

Outbound shares (to consumers) that have been created in your account (as a provider).

Inbound shares (from providers) that are available for your account to consume.

https://docs.snowflake.com/en/sql-reference/sql/show-shares.html#show-shares
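For example:

-- Lists both outbound and inbound shares; the 'kind' column distinguishes them
SHOW SHARES;

-- Optionally filter by name (pattern is hypothetical)
SHOW SHARES LIKE 'SALES%';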

When the file format TYPE is CSV, which of the below compression techniques can COMPRESSION = AUTO detect automatically?
A. GZIP
B. BZ2
C. BROTLI
D. ZSTD
E. DEFLATE
F. RAW_DEFLATE

Answer: A,B,D,E,F

Explanation:

AUTO

Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO. https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#type-csv
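A minimal sketch, assuming compressed CSV files in the table's stage (names hypothetical):

-- AUTO (the default) detects GZIP, BZ2, ZSTD, DEFLATE, and RAW_DEFLATE
COPY INTO employees
FROM @%employees/data.csv.gz
FILE_FORMAT = (TYPE = CSV COMPRESSION = AUTO);

-- Brotli cannot be auto-detected and must be declared explicitly
COPY INTO employees
FROM @%employees/data.csv.br
FILE_FORMAT = (TYPE = CSV COMPRESSION = BROTLI);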

You need to choose a high-cardinality column for the clustering key.
A. TRUE
B. FALSE

Answer: B

Explanation:

A column with very low cardinality (e.g. a column that indicates only whether a person is male or female) might yield only minimal pruning. At the other extreme, a column with very high cardinality (e.g. a column containing UUID or nanosecond timestamp values) is also typically not a good candidate to use as a clustering key directly.
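The usual remedy for a high-cardinality column is to cluster on an expression that reduces its cardinality. A sketch (table and column names hypothetical):

-- A raw timestamp is too high-cardinality to cluster on directly;
-- truncating it to a date yields fewer, larger groups that prune well
ALTER TABLE events CLUSTER BY (TO_DATE(event_ts));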

Which ALTER command below may affect the availability of a column with respect to Time Travel?
A. ALTER TABLE…DROP COLUMN
B. ALTER TABLE…SET DATA TYPE
C. ALTER TABLE…SET DEFAULT

Answer: B

Explanation:

If the precision of a column is decreased below the maximum precision of any column data retained in Time Travel, you will not be able to restore the table without first increasing the precision. The precision of a column can only be altered using the ALTER TABLE …SET DATA TYPE command.

Hence, ALTER TABLE…SET DATA TYPE is the most appropriate answer. https://docs.snowflake.com/en/sql-reference/sql/alter-table-column.html#alter-table-alter-column
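A sketch of the precision changes in question (table and column names hypothetical):

-- Increasing precision is always safe
ALTER TABLE orders ALTER COLUMN order_id SET DATA TYPE NUMBER(38,0);

-- Decreasing precision below what Time Travel still retains means the
-- table cannot be restored until the precision is increased again
ALTER TABLE orders ALTER COLUMN order_id SET DATA TYPE NUMBER(10,0);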

You have created a table as below:

CREATE TABLE EMPLOYEE(EMPLOYEE_ID NUMBER, EMPLOYEE_NAME VARCHAR);

What datatype will Snowflake use for EMPLOYEE_ID?
A. FIXED
B. INTEGER
C. NUMBER

Answer: A

Explanation:

Snowflake stores all fixed-point numeric types (NUMBER, DECIMAL, INT, etc.) internally as the FIXED data type. Please try this for yourself; note that this advanced certification requires working experience, so some of these hands-on exercises will help in getting that experience.

Instructions for hands-on
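For example, one way to see the internal type (a minimal hands-on sketch):

CREATE TABLE EMPLOYEE(EMPLOYEE_ID NUMBER, EMPLOYEE_NAME VARCHAR);

-- The data_type value returned for EMPLOYEE_ID is JSON containing "type":"FIXED"
SHOW COLUMNS IN TABLE EMPLOYEE;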

Every Snowflake table loaded by the Kafka connector has a schema consisting of two VARIANT columns.

Which are those?
A. RECORD_CONTENT
B. RECORD_METADATA
C. RECORD_MESSAGE

Answer: A,B

Explanation:

Schema of Tables for Kafka Topics

Every Snowflake table loaded by the Kafka connector has a schema consisting of two VARIANT columns: RECORD_CONTENT, which contains the Kafka message, and RECORD_METADATA, which contains metadata about the message (for example, the topic, partition, offset, and timestamp).
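A sketch of querying such a table (table name and message fields hypothetical):

-- RECORD_METADATA carries the topic, partition, offset, and timestamp;
-- RECORD_CONTENT carries the Kafka message itself
SELECT
  RECORD_METADATA:topic::STRING AS topic,
  RECORD_METADATA:offset::INTEGER AS msg_offset,
  RECORD_CONTENT:customer_id::STRING AS customer_id
FROM kafka_events;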