Exam4Training

Snowflake DSA-C02 SnowPro Advanced: Data Scientist Certification Exam Online Training

Question #1

Which type of machine learning is generally used by Data Scientists for solving classification and regression problems?

  • A . Supervised
  • B . Unsupervised
  • C . Reinforcement Learning
  • D . Instructor Learning
  • E . Regression Learning

Correct Answer: A

Explanation:

Supervised Learning

Overview:

Supervised learning is a type of machine learning that uses labeled data to train machine learning models. In labeled data, the output is already known; the model just needs to map the inputs to the respective outputs.

Algorithms:

Some of the most popularly used supervised learning algorithms are:

・ Linear Regression

・ Logistic Regression

・ Support Vector Machine

・ K Nearest Neighbor

・ Decision Tree

・ Random Forest

・ Naive Bayes

Working:

Supervised learning algorithms take labeled inputs and map them to the known outputs, which means you already know the target variable.

Supervised Learning methods need external supervision to train machine learning models; hence the name supervised. They need guidance and additional information to return the desired result.

Applications:

Supervised learning algorithms are generally used for solving classification and regression problems. A few of the top supervised learning applications are weather prediction, sales forecasting, and stock price analysis.

Question #2

Which learning methodology applies the conditional probability of all the variables with respect to the dependent variable?

  • A . Reinforcement learning
  • B . Unsupervised learning
  • C . Artificial learning
  • D . Supervised learning

Correct Answer: D

Explanation:

Supervised learning is a type of machine learning where we train a model using labeled data. In this learning paradigm, the model learns from the training data to make predictions or infer mappings. Conditional probability often plays a role in this, especially in algorithms like Naive Bayes, where the goal is to compute the probability of a certain class given the features (variables) of the data, which is fundamentally a conditional probability.
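The conditional-probability core of Naive Bayes can be shown with a tiny, hypothetical labeled dataset. Each record is (message contains the word "offer", class label), and Bayes' rule gives the probability of the class given the feature:

```python
# Hypothetical labeled data: (contains_"offer", label).
data = [(True, "spam"), (True, "spam"), (False, "spam"),
        (True, "ham"), (False, "ham"), (False, "ham"), (False, "ham"), (False, "ham")]

p_spam = sum(1 for _, y in data if y == "spam") / len(data)             # P(spam)
p_word_given_spam = (sum(1 for w, y in data if w and y == "spam")
                     / sum(1 for _, y in data if y == "spam"))           # P(word | spam)
p_word = sum(1 for w, _ in data if w) / len(data)                        # P(word)

# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))
```

This is exactly the conditional probability of the dependent variable (the class) given the observed feature, which is what a Naive Bayes classifier computes for every class and every feature.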

Here’s a brief rundown of the other options:

Question #3

In a simple linear regression model (one independent variable), if we change the input variable by 1 unit,

how much will the output variable change?

  • A . by 1
  • B . no change
  • C . by intercept
  • D . by its slope

Correct Answer: D

Explanation:

What is linear regression?

Linear regression analysis is used to predict the value of a variable based on the value of another variable. The variable you want to predict is called the dependent variable. The variable you are using to predict the other variable’s value is called the independent variable.

Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable. For example, a modeler might want to relate the weights of individuals to their heights using a linear regression model.

A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0).

For linear regression, Y = a + bX + error. If we neglect the error term, Y = a + bX. If X increases by 1, then Y = a + b(X + 1) = a + bX + b, so Y increases by b, its slope.
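This can be verified numerically; the intercept and slope values below are arbitrary examples:

```python
def y(x, a=1.5, b=0.8):
    # Simple linear model Y = a + bX (error term neglected); a and b are example values.
    return a + b * x

x0 = 10.0
delta = y(x0 + 1) - y(x0)
print(delta)  # equals the slope b, up to floating-point rounding
```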

Question #4

There are a couple of different types of classification tasks in machine learning. Choose the classification that best categorizes the application tasks below:

・ To detect whether email is spam or not

・ To determine whether or not a patient has a certain disease in medicine.

・ To determine whether or not quality specifications were met when it comes to QA (Quality Assurance).

  • A . Multi-Label Classification
  • B . Multi-Class Classification
  • C . Binary Classification
  • D . Logistic Regression

Correct Answer: C

Explanation:

Supervised Machine Learning algorithms can be broadly classified into Regression and Classification algorithms. Regression algorithms predict the output for continuous values; to predict categorical values, we need Classification algorithms.

What is the Classification Algorithm?

The Classification algorithm is a Supervised Learning technique that is used to identify the category of new observations on the basis of training data. In Classification, a program learns from the given dataset or observations and then classifies new observations into a number of classes or groups, such as Yes or No, 0 or 1, Spam or Not Spam, cat or dog, etc. Classes can be called targets/labels or categories.

Unlike regression, the output variable of Classification is a category, not a value, such as "Green or Blue" or "fruit or animal". Since the Classification algorithm is a Supervised Learning technique, it takes labeled input data, which means it contains inputs with the corresponding outputs. In a classification algorithm, a discrete output function (y) is mapped to the input variable (x).

y=f(x), where y = categorical output

The best example of an ML classification algorithm is Email Spam Detector.

The main goal of the Classification algorithm is to identify the category of a given dataset, and these algorithms are mainly used to predict the output for the categorical data.

The algorithm which implements the classification on a dataset is known as a classifier. There are two types of Classifications:

Binary Classifier: If the classification problem has only two possible outcomes, then it is called a Binary Classifier.

Examples: YES or NO, MALE or FEMALE, SPAM or NOT SPAM, CAT or DOG, etc.

Multi-class Classifier: If a classification problem has more than two outcomes, then it is called a Multi-class Classifier.

Example: Classification of types of crops, classification of types of music.

Binary classification in deep learning refers to the type of classification where we have two class labels: one normal and one abnormal.

Some examples of binary classification use:

・ To detect whether email is spam or not

・ To determine whether or not a patient has a certain disease in medicine.

・ To determine whether or not quality specifications were met when it comes to QA (Quality Assurance).

For example, the normal class label would be that a patient has the disease, and the abnormal class label would be that they do not, or vice-versa.

As with every other type of classification, it is only as good as the binary classification dataset that it has; in other words, the more training data it has, the better it is.
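The spam-detection use case above can be sketched as a toy binary classifier. The keyword list and messages below are hypothetical; the point is simply that every input maps to exactly one of two class labels:

```python
SPAM_WORDS = {"free", "winner", "offer"}  # hypothetical keyword list

def classify(message):
    # Binary classification: exactly two possible class labels.
    words = set(message.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"

print(classify("Claim your free offer now"))   # -> spam
print(classify("Meeting moved to 3pm"))        # -> not spam
```

A real spam detector would of course learn its decision rule from labeled data rather than use a fixed word list, but the two-label output structure is the same.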

Question #5

Which of the following method is used for multiclass classification?

  • A . one vs rest
  • B . loocv
  • C . all vs one
  • D . one vs another

Correct Answer: A

Explanation:

Binary vs. Multi-Class Classification

Classification problems are common in machine learning. In most cases, developers prefer using a supervised machine-learning approach to predict class labels for a given dataset. Unlike regression, classification involves designing the classifier model and training it to input and categorize the test dataset. For that, you can divide the dataset into either binary or multi-class modules.

As the name suggests, binary classification involves solving a problem with only two class labels. This makes it easy to filter the data, apply classification algorithms, and train the model to predict outcomes. On the other hand, multi-class classification is applicable when there are more than two class labels in the input train data. The technique enables developers to categorize the test data into multiple binary class labels.

That said, while binary classification requires only one classifier model, the one used in the multi-class approach depends on the classification technique. Below are the two models of the multi-class classification algorithm.

One-Vs-Rest Classification Model for Multi-Class Classification

Also known as one-vs-all, the one-vs-rest model is a defined heuristic method that leverages a binary classification algorithm for multi-class classifications. The technique involves splitting a multi-class dataset into multiple sets of binary problems. Following this, a binary classifier is trained to handle each binary classification model with the most confident one making predictions.

For instance, with a multi-class classification problem with red, green, and blue datasets, binary classification can be categorized as follows:

Problem one: red vs. green/blue

Problem two: blue vs. green/red

Problem three: green vs. blue/red

The only challenge of using this model is that you must create a model for every class. The three classes require three models from the above datasets, which can be challenging for large datasets with millions of rows, slow models such as neural networks, and datasets with a significant number of classes.

The one-vs-rest approach requires the individual models to predict a probability-like score. The class index with the largest score is then used to predict a class. As such, it is commonly used for classification algorithms that can naturally predict scores or numerical class membership, such as perceptron and logistic regression.
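The red/green/blue example above can be sketched in pure Python. For illustration, each per-class binary scorer is a simple nearest-centroid score rather than a trained logistic regression; the training points are hypothetical:

```python
# One-vs-rest on a 3-class toy set: one binary scorer per class ("this class" vs. "rest").
train = {"red":   [(1.0, 1.0), (1.2, 0.9)],
         "green": [(5.0, 5.0), (5.1, 4.8)],
         "blue":  [(9.0, 1.0), (8.8, 1.2)]}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# One "model" per class: its score is closeness to that class's centroid.
scorers = {label: centroid(pts) for label, pts in train.items()}

def predict(point):
    def score(label):
        cx, cy = scorers[label]
        return -((point[0] - cx) ** 2 + (point[1] - cy) ** 2)  # higher = more confident
    # The most confident of the per-class binary scorers makes the prediction.
    return max(scorers, key=score)

print(predict((5.0, 4.9)))  # -> green
```

In a real pipeline, each per-class scorer would be a binary classifier (e.g. logistic regression) trained on "class vs. rest" relabeled data, but the decision rule, take the class with the largest score, is exactly the one shown here.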

Question #6

Which ones are the key actions in the data collection phase of Machine learning included?

  • A . Label
  • B . Ingest and Aggregate
  • C . Probability
  • D . Measure

Correct Answer: A, B, D

Explanation:

In the context of the data collection phase of machine learning, the key actions would be:

・ Label: attach known outcomes to collected records so they can be used for supervised training.

・ Ingest and Aggregate: bring data in from its sources and combine it into a usable collection.

・ Measure: assess the quantity and quality of the collected data.

Probability (option C) is a mathematical concept, not a data collection action.

Question #7

Which are the types of visualization used for data exploration in Data Science?

  • A . Heat Maps
  • B . Newton AI
  • C . Feature Distribution by Class
  • D . 2D-Density Plots
  • E . Sand Visualization

Correct Answer: A, C, D

Explanation:

For data exploration in Data Science, visualizations are used to understand the underlying patterns, relationships, anomalies, and distributions in the data. Among the given options, Heat Maps, Feature Distribution by Class, and 2D-Density Plots are standard exploration visualizations; "Newton AI" and "Sand Visualization" are not recognized visualization types.

Question #8

Which one is not a feature engineering technique used in the ML data science world?

  • A . Imputation
  • B . Binning
  • C . One hot encoding
  • D . Statistical

Correct Answer: D

Explanation:

Feature engineering is the pre-processing step of machine learning, which is used to transform raw data into features that can be used for creating a predictive model using Machine learning or statistical Modelling.

What is a feature?

Generally, all machine learning algorithms take input data to generate the output. The input data remains in a tabular form consisting of rows (instances or observations) and columns (variables or attributes), and these attributes are often known as features. For example, an image is an instance in computer vision, but a line in the image could be the feature. Similarly, in NLP, a document can be an observation, and the word count could be the feature. So, we can say a feature is an attribute that impacts a problem or is useful for the problem.

What is Feature Engineering?

Feature engineering is the pre-processing step of machine learning, which extracts features from raw data. It helps to represent an underlying problem to predictive models in a better way, which, as a result, improves the accuracy of the model on unseen data. The predictive model contains predictor variables and an outcome variable, and the feature engineering process selects the most useful predictor variables for the model.

Some of the popular feature engineering techniques include imputation, binning, one-hot encoding, scaling, log transformation, and outlier handling. "Statistical" (option D) is not itself a feature engineering technique.
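A quick pure-Python sketch of three of these techniques (imputation, binning, one-hot encoding), applied to a hypothetical age column with missing values:

```python
ages = [25, None, 40, 31, None, 58]  # hypothetical raw column with missing values

# Imputation: replace missing values with the mean of the observed ones.
observed = [a for a in ages if a is not None]
mean_age = sum(observed) / len(observed)
imputed = [a if a is not None else mean_age for a in ages]

# Binning: map each continuous value into a categorical bucket.
def age_bin(a):
    return "young" if a < 30 else "middle" if a < 50 else "senior"
bins = [age_bin(a) for a in imputed]

# One-hot encoding: turn each category into a 0/1 indicator vector.
categories = ["young", "middle", "senior"]
one_hot = [[1 if b == c else 0 for c in categories] for b in bins]

print(bins)
print(one_hot[0])
```

In practice these steps would be done with a library such as pandas or scikit-learn, but the transformations themselves are exactly the ones shown.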

Question #15

Skewness of Normal distribution is ___________

  • A . Negative
  • B . Positive
  • C . 0
  • D . Undefined

Correct Answer: C

Explanation:

Since the normal curve is symmetric about its mean, its skewness is zero. This is a theoretical result; for the mathematical proof, you can refer to books or websites that cover it in detail.

Question #16

What is the formula for measuring skewness in a dataset?

  • A . MEAN – MEDIAN
  • B . MODE – MEDIAN
  • C . (3(MEAN – MEDIAN))/ STANDARD DEVIATION
  • D . (MEAN – MODE)/ STANDARD DEVIATION

Correct Answer: C

Explanation:

Pearson's second skewness coefficient measures skewness as 3(Mean - Median) / Standard Deviation. For a symmetric distribution, the mean equals the median, so the skewness is zero; a mean above the median indicates positive (right) skew, and a mean below the median indicates negative (left) skew.
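The formula in option C can be checked directly with Python's standard library; the two datasets below are made up for illustration:

```python
from statistics import mean, median, stdev

def pearson_skewness(xs):
    # Pearson's second skewness coefficient: 3 * (mean - median) / standard deviation.
    return 3 * (mean(xs) - median(xs)) / stdev(xs)

symmetric = [1, 2, 3, 4, 5]          # mean == median, so skewness is 0
right_skewed = [1, 1, 2, 2, 3, 10]   # long right tail pulls the mean above the median

print(pearson_skewness(symmetric))
print(pearson_skewness(right_skewed) > 0)
```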

Question #17

What can a Snowflake Data Scientist do in the Snowflake Marketplace as a Provider?

  • A . Publish listings for free-to-use datasets to generate interest and new opportunities among the Snowflake customer base.
  • B . Publish listings for datasets that can be customized for the consumer.
  • C . Share live datasets securely and in real-time without creating copies of the data or imposing data integration tasks on the consumer.
  • D . Eliminate the costs of building and maintaining APIs and data pipelines to deliver data to customers.

Correct Answer: A, B, C, D

Explanation:

All are correct!

About the Snowflake Marketplace

You can use the Snowflake Marketplace to discover and access third-party data and services, as well as market your own data products across the Snowflake Data Cloud.

As a data provider, you can use listings on the Snowflake Marketplace to share curated data offerings with many consumers simultaneously, rather than maintain sharing relationships with each individual consumer. With Paid Listings, you can also charge for your data products.

As a consumer, you might use the data provided on the Snowflake Marketplace to explore and access the following:

Historical data for research, forecasting, and machine learning.

Up-to-date streaming data, such as current weather and traffic conditions.

Specialized identity data for understanding subscribers and audience targets.

New insights from unexpected sources of data.

The Snowflake Marketplace is available globally to all non-VPS Snowflake accounts hosted on Amazon Web Services, Google Cloud Platform, and Microsoft Azure, with the exception of Microsoft Azure Government. Support for Microsoft Azure Government is planned.

Question #18

What can a Snowflake Data Scientist do in the Snowflake Marketplace as a Consumer?

  • A . Discover and test third-party data sources.
  • B . Receive frictionless access to raw data products from vendors.
  • C . Combine new datasets with your existing data in Snowflake to derive new business insights.
  • D . Use the business intelligence (BI)/ML/Deep learning tools of her choice.

Correct Answer: A, B, C, D

Explanation:

As a consumer, you can do the following:

・ Discover and test third-party data sources.

・ Receive frictionless access to raw data products from vendors.

・ Combine new datasets with your existing data in Snowflake to derive new business insights.

・ Have datasets available instantly and updated continually for users.

・ Eliminate the costs of building and maintaining various APIs and data pipelines to load and update data.

・ Use the business intelligence (BI) tools of your choice.

Question #19

Which one is not a valid option to share data in Snowflake?

  • A . a Listing, in which you offer a share and additional metadata as a data product to one or more accounts.
  • B . a Direct Marketplace, in which you directly share specific database objects (a share) to another account in your region using Snowflake Marketplace.
  • C . a Direct Share, in which you directly share specific database objects (a share) to another account in your region.
  • D . a Data Exchange, in which you set up and manage a group of accounts and offer a share to that group.

Correct Answer: B

Explanation:

Options for Sharing in Snowflake

You can share data in Snowflake using one of the following options:

・ a Listing, in which you offer a share and additional metadata as a data product to one or more accounts,

・ a Direct Share, in which you directly share specific database objects (a share) to another account in your region,

・ a Data Exchange, in which you set up and manage a group of accounts and offer a share to that group.

Question #20

Data providers add Snowflake objects (databases, schemas, tables, secure views, etc.) to a share using which of the following options?

  • A . Grant privileges on objects to a share via Account role.
  • B . Grant privileges on objects directly to a share.
  • C . Grant privileges on objects to a share via a database role.
  • D . Grant privileges on objects to a share via a third-party role.

Correct Answer: B, C

Explanation:

What is a Share?

Shares are named Snowflake objects that encapsulate all of the information required to share a database.

Data providers add Snowflake objects (databases, schemas, tables, secure views, etc.) to a share using either or both of the following options:

Option 1: Grant privileges on objects to a share via a database role.

Option 2: Grant privileges on objects directly to a share.

You choose which accounts can consume data from the share by adding the accounts to the share. After a database is created (in a consumer account) from a share, all the shared objects are accessible to users in the consumer account.

Shares are secure, configurable, and controlled completely by the provider account:

・ New objects added to a share become immediately available to all consumers, providing real-time access to shared data.

・ Access to a share (or any of the objects in a share) can be revoked at any time.

Question #21

Secure Data Sharing does not let you share which of the following objects in a database in your account with other Snowflake accounts?

  • A . Sequences
  • B . Tables
  • C . External tables
  • D . Secure UDFs

Correct Answer: A

Explanation:

Secure Data Sharing lets you share selected objects in a database in your account with other Snowflake accounts.

You can share the following Snowflake database objects:

Tables

External tables

Secure views

Secure materialized views

Secure UDFs

Snowflake enables the sharing of databases through shares, which are created by data providers and “imported” by data consumers.

Question #22

Which one is an incorrect understanding about Providers of a Direct Share?

  • A . A data provider is any Snowflake account that creates shares and makes them available to other Snowflake accounts to consume.
  • B . As a data provider, you share a database with one or more Snowflake accounts.
  • C . You can create as many shares as you want, and add as many accounts to a share as you want.
  • D . If you want to provide a share to many accounts, you can do the same via Direct Share.

Correct Answer: D

Explanation:

If you want to provide a share to many accounts, you might want to use a listing or a data exchange.

Question #23

As a Data Scientist looking to use a Reader account, which are the correct considerations about Reader Accounts for Third-Party Access?

  • A . Reader accounts (formerly known as “read-only accounts”) provide a quick, easy, and cost-effective way to share data without requiring the consumer to become a Snowflake customer.
  • B . Each reader account belongs to the provider account that created it.
  • C . Users in a reader account can query data that has been shared with the reader account, but cannot perform any of the DML tasks that are allowed in a full account, such as data loading, insert, update, and similar data manipulation operations.
  • D . Data sharing is only possible between Snowflake accounts.

Correct Answer: A, B, C

Explanation:

Let's evaluate each of the statements regarding Reader Accounts for Third-Party Access in Snowflake. Statements A, B, and C accurately describe reader accounts: they are created by and belong to a provider account, they let consumers access shared data without becoming Snowflake customers, and their users can query shared data but cannot perform DML operations such as loading, inserting, or updating data. Statement D is not a correct consideration here, since reader accounts exist precisely so that data can be shared with consumers who do not have their own Snowflake account.

Question #23

As Data Scientist looking out to use Reader account, Which ones are the correct considerations about Reader Accounts for Third-Party Access?

  • A . Reader accounts (formerly known as “read-only accounts”) provide a quick, easy, and cost-effective way to share data without requiring the consumer to become a Snowflake customer.
  • B . Each reader account belongs to the provider account that created it.
  • C . Users in a reader account can query data that has been shared with the reader account, but cannot perform any of the DML tasks that are allowed in a full account, such as data loading, insert, update, and similar data manipulation operations.
  • D . Data sharing is only possible between Snowflake accounts.

Reveal Solution Hide Solution

Correct Answer: A,B,C
A,B,C

Explanation:

Let’s evaluate each of the statements regarding Reader Accounts for Third-Party Access in Snowflake:

Question #32

SHOW GRANTS OF SHARE product_s;

  • A . GRANT USAGE ON DATABASE product_db TO SHARE product_s;
  • B . CREATE DIRECT SHARE product_s;
  • C . GRANT SELECT ON TABLE sales_db.product_agg.Item_agg TO SHARE product_s;
  • D . ALTER SHARE product_s ADD ACCOUNTS=xy12345, yz23456;

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

CREATE SHARE product_s; is the correct SnowSQL command to create a share object.

The rest of the listed commands are correct.

https://docs.snowflake.com/en/user-guide/data-sharing-provider#creating-a-share-using-sql
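For context, the documented provider-side sequence for secure data sharing, sketched here with the hypothetical object names used in the question, is roughly:

```sql
-- 1. Create the share object.
CREATE SHARE product_s;

-- 2. Grant privileges on the database objects to be shared.
GRANT USAGE ON DATABASE product_db TO SHARE product_s;
GRANT USAGE ON SCHEMA product_db.product_agg TO SHARE product_s;
GRANT SELECT ON TABLE product_db.product_agg.Item_agg TO SHARE product_s;

-- 3. Add one or more consumer accounts to the share.
ALTER SHARE product_s ADD ACCOUNTS = xy12345, yz23456;

-- Verify which accounts have been granted access to the share.
SHOW GRANTS OF SHARE product_s;
```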

Question #33

Which object records data manipulation language (DML) changes made to tables, including inserts, updates, and deletes, as well as metadata about each change, so that actions can be taken using the changed data in data science pipelines?

  • A . Task
  • B . Dynamic tables
  • C . Stream
  • D . Tags
  • E . Delta
  • F . OFFSET

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

A stream object records data manipulation language (DML) changes made to tables, including inserts, updates, and deletes, as well as metadata about each change, so that actions can be taken using the changed data. This process is referred to as change data capture (CDC). An individual table stream tracks the changes made to rows in a source table. A table stream (also referred to as simply a “stream”) makes a “change table” available of what changed, at the row level, between two transactional points of time in a table. This allows querying and consuming a sequence of change records in a transactional fashion.

Streams can be created to query change data on the following objects:

・ Standard tables, including shared tables.

・ Views, including secure views.

・ Directory tables

・ Event tables
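A minimal sketch of creating and consuming a stream (the table and stream names are hypothetical):

```sql
-- Create a stream that tracks row-level changes on a source table.
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Query the change records accumulated since the stream's current offset.
SELECT * FROM orders_stream;

-- Consuming the stream in a DML statement advances its offset,
-- so the same change records are not processed twice.
INSERT INTO orders_changelog
  SELECT * FROM orders_stream;
```

Merely selecting from a stream does not advance its offset; only using it as a source in a DML statement (typically inside a transaction, often driven by a Task) consumes the change records.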

Question #34

Which of the following additional metadata columns does a Stream contain that could be used for creating efficient data science pipelines and help in transforming only the new/modified data?

  • A . METADATA$ACTION
  • B . METADATA$FILE_ID
  • C . METADATA$ISUPDATE
  • D . METADATA$DELETE
  • E . METADATA$ROW_ID

Reveal Solution Hide Solution

Correct Answer: A, C, E
A, C, E

Explanation:

A stream stores an offset for the source object and not any actual table columns or data. When queried, a stream accesses and returns the historic data in the same shape as the source object (i.e. the same column names and ordering), with the following additional columns:

METADATA$ACTION

Indicates the DML operation (INSERT, DELETE) recorded.

METADATA$ISUPDATE

Indicates whether the operation was part of an UPDATE statement. Updates to rows in the source object are represented as a pair of DELETE and INSERT records in the stream, with the METADATA$ISUPDATE metadata column set to TRUE.

Note that streams record the differences between two offsets. If a row is added and then updated within the current offset, the delta change is a single new row, and its METADATA$ISUPDATE column records a FALSE value.

METADATA$ROW_ID

Specifies the unique and immutable ID for the row, which can be used to track changes to specific rows over time.
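In a pipeline, these columns are typically used to filter the change set so that only the relevant rows are transformed. For example (stream name hypothetical, carried over from the sketch style above):

```sql
-- New rows only: plain inserts, not the INSERT half of an update.
SELECT *
FROM orders_stream
WHERE METADATA$ACTION = 'INSERT'
  AND METADATA$ISUPDATE = FALSE;

-- Updated rows: the INSERT half of the DELETE/INSERT pair produced by an UPDATE.
SELECT *
FROM orders_stream
WHERE METADATA$ACTION = 'INSERT'
  AND METADATA$ISUPDATE = TRUE;
```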
