The 70-768 Developing SQL Data Models exam is a popular Microsoft certification exam, and Exam4Training offers the latest free online 70-768 dumps for practice. You can get online training with the following questions, all of which are verified by Microsoft experts. If this exam changes, we will share newly updated questions.
Certification Provider: Microsoft
Exam Name: Developing SQL Data Models
Exam Code: 70-768
Official Exam Time: 120 mins
Number of questions in the Official Exam: 40-60 Q&As
Latest update time in our database: September 27, 2023
70-768 Official Exam Topics:
Topic1 : Design a multidimensional business intelligence (BI) semantic model (25–30%)
Topic2 : Create a multidimensional database by using Microsoft SQL Server Analysis Services (SSAS) / Design, develop, and create multidimensional databases; select a storage model
Topic3 : Design and implement dimensions in a cube / Implement measures and measure groups in a cube
Topic4 : Design and implement measures, measure groups, granularity, calculated measures, and aggregate functions; define semi-additive behavior / Design a tabular BI semantic model (20–25%)
Topic5 : Design and publish a tabular data model / Design measures, relationships, hierarchies, partitions, perspectives, and calculated columns; create a time table; publish from Microsoft Visual Studio; import from Microsoft PowerPivot; select a deployment option, including Processing Option, Transactional Deployment, and Query Mode
Topic6 : Configure, manage, and secure a tabular model / Configure tabular model storage and data refresh, configure refresh interval settings, configure user security and permissions, configure row-level security
Topic7 : Develop a tabular model to access data in near real time / Use DirectQuery with Oracle, Teradata, Excel, and PivotTables; convert in-memory queries to DirectQuery
Topic8 : Develop queries using Multidimensional Expressions (MDX) and Data Analysis Expressions (DAX) (15–20%) / Implement basic MDX structures and functions, including tuples, sets, and TopCount
Topic9 : Implement custom MDX solutions / Create custom MDX or logical solutions for pre-prepared case tasks or business rules, define a SCOPE statement
Topic10 : Create formulas by using the DAX language / Configure and maintain SQL Server Analysis Services (SSAS) (30–35%)
Topic11 : Plan and deploy SSAS / Configure memory limits, configure Non-Uniform Memory Access (NUMA), configure disk layout, determine SSAS instance placement
Topic12 : Monitor and optimize performance / Configure and manage processing
Topic13 : Configure partition processing; configure dimension processing; use Process Default, Process Full, Process Clear, Process Data, Process Add, Process Update, Process Index, Process Structure, and Process Clear Structure processing methods; configure Parallel, Sequential, and Writeback processing settings /
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You have a Microsoft SQL Server Analysis Services (SSAS) multidimensional database that stores customer and order data for customers in the United States only.
The database contains the following objects:
You must create a KPI named Large Sales Target that uses the Traffic Light indicator to display status.
The KPI must contain:
You need to create the KPI.
Solution: You set the value of the Status expression to:
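The screenshot with the proposed Status expression is not reproduced here, so the solution itself cannot be shown. Purely for orientation, the sketch below illustrates how a Traffic Light KPI is commonly defined in a cube's ASSL/XMLA, with the Status expression written as an MDX CASE over KpiValue and KpiGoal. The goal value, the 0.85 threshold, and the measure and measure group names are illustrative assumptions, not values taken from the question.

<!-- Illustrative sketch only: a generic Traffic Light KPI in ASSL/XMLA.  -->
<!-- The goal, the 0.85 threshold, and the object names are hypothetical. -->
<Kpi>
  <Name>Large Sales Target</Name>
  <ID>Large Sales Target</ID>
  <AssociatedMeasureGroupID>Sales</AssociatedMeasureGroupID>
  <Value>[Measures].[Sales Amount]</Value>
  <Goal>100000</Goal>
  <Status>
    Case
      When KpiValue("Large Sales Target") / KpiGoal("Large Sales Target") >= 1 Then 1
      When KpiValue("Large Sales Target") / KpiGoal("Large Sales Target") >= 0.85 Then 0
      Else -1
    End
  </Status>
  <StatusGraphic>Traffic Light</StatusGraphic>
</Kpi>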
You need to create the cube processing job and the dimension processing job.
Which processing task should you use for each job? To answer, drag the appropriate processing tasks to the correct locations. Each processing task may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
Box 1: ProcessData
Processes data only, without building aggregations or indexes. If there is data in the partitions, it will be dropped before the partition is re-populated with source data.
Box 2: Process Update
Forces a re-read of data and an update of dimension attributes. Flexible aggregations and indexes on related partitions will be dropped.
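As a rough illustration of what the two jobs could submit to the server, the XMLA batch below runs ProcessData against a cube and ProcessUpdate against a dimension; the database, cube, and dimension IDs are placeholders, not objects named in the scenario.

<!-- Sketch: ProcessData for the cube job, ProcessUpdate for the dimension job. -->
<!-- DatabaseID/CubeID/DimensionID are placeholder values.                      -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>SalesDW</DatabaseID>
      <CubeID>SalesAnalysis</CubeID>
    </Object>
    <Type>ProcessData</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>SalesDW</DatabaseID>
      <DimensionID>Customer</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
</Batch>

Both tasks can also be scheduled by wrapping the XMLA in a SQL Server Agent job step of type SQL Server Analysis Services Command.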
You are a developer for a Seattle-based company. The company is expanding globally. Many company employees speak fluent Mandarin and read Simplified Chinese.
You have six tabular data models that are deployed to two instances of Microsoft SQL Server Analysis Services (SSAS).
Users report that the query takes a long time to complete.
You are planning the disk space allocations for a new Microsoft SQL Server Analysis Services deployment. You plan to move several relational data file databases to the new SSAS instance. The databases require a total of 10 GB of disk space.
You also plan to deploy Cubes and Aggregations and use Object Processing. Cubes will have small fact tables and few dimension members. No unnecessary aggregations will be created. You plan to process an entire cube in a single transaction.
Data Models
One of the data models is named CustomerSales. This data model contains eight tables. The model includes a table named Sales that defines several measures, including a measure named PriorYearSales. The PriorYearSales measure is referenced by other measures, and is not intended to be analyzed directly by users. You must translate the metadata for all data in the CustomerSales data model to Simplified Chinese. Team members from the Shanghai office assist with identifying appropriate translations.
A data model named OrderAnalysis is deployed to one of the SSAS instances. Order data is loaded into the OrderAnalysis data model as part of an overnight process. You observe that the model is not up-to-date.
The business analysis team uses a variety of client applications to issue MDX queries against OrderAnalysis. Order data must be completely up-to-date.
The OrderAnalysis model has two user-defined hierarchies that are defined in a table named Order. New customers are added only once per day, so the overnight process keeps the Customer data sufficiently up-to-date. You must provide optimal performance while achieving the data currency goals whenever possible.
Databases
You deploy a database named DB1 to an SSAS instance as a project by using SQL Server Data Tools. Data analysts report that they cannot access near real time data from the SSAS SalesAnalysis model from DB1. You discover that the project has been deployed with the Direct Query Mode option set to OFF.
Most queries that use the SalesAnalysis data model use data from a table named FactInternetSales that is 20 gigabytes (GB) in size. Cached data must be available for the FactInternetSales table. All queries accessing the SalesAnalysis model must be executed in near real time.
A database named DB2 uses the InMemory query mode.
Users frequently run the following query:
You need to reconfigure the SSAS instance that hosts DB1.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Explanation:
Step 1: Set the default mode for the data model to DirectQuery.
You discover that the project has been deployed with the Direct Query Mode option set to OFF.
Step 2: Set the mode for the FactInternetSales table’s partition to DirectQueryOnly.
Initially, even DirectQuery models are always created in memory. The default query mode for the workspace database is also set to DirectQuery with In-Memory. This hybrid working mode lets you use the cache of imported data for improved performance during the model design process, while validating the model against DirectQuery requirements.
From Scenario: Most queries that use the SalesAnalysis data model use data from a table named FactInternetSales that is 20 gigabytes (GB) in size. Cached data must be available for the FactInternetSales table. All queries accessing the SalesAnalysis model must be executed in near real time.
Step 3: Run Process Full for the FactInternetSales partition.
When Process Full is executed against an object that has already been processed, Analysis Services drops all data in the object, and then processes the object. This kind of processing is required when a structural change has been made to an object, for example, when an attribute hierarchy is added, deleted, or renamed.
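A minimal sketch of what Step 3 might look like as an XMLA command, assuming the FactInternetSales partition is addressed through a measure group of the same name (the object IDs are placeholders):

<!-- Sketch: full processing of a single partition; object IDs are placeholders. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>DB1</DatabaseID>
    <CubeID>SalesAnalysis</CubeID>
    <MeasureGroupID>FactInternetSales</MeasureGroupID>
    <PartitionID>FactInternetSales</PartitionID>
  </Object>
  <Type>ProcessFull</Type>
</Process>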
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You deploy a tabular data model to an instance of Microsoft SQL Server Analysis Services (SSAS). The model uses an in-memory cache to store and query data. The data set is already the same size as the available RAM on the server. Data volumes are likely to continue to increase rapidly.
Your data model contains multiple calculated tables.
The data model must begin processing each day at 2:00 and processing should be complete by 4:00 the same day. You observe that the data processing operation often does not complete before 7:00. This is adversely affecting team members.
You need to improve the performance.
Solution: Change the storage mode for the data model to DirectQuery.
Does the solution meet the goal?
A. Yes
B. No
Answer: A
Explanation:
By default, tabular models use an in-memory cache to store and query data. When tabular models query data residing in-memory, even complex queries can be incredibly fast. However, there are some limitations to using cached data. Namely, large data sets can exceed available memory, and data freshness requirements can be difficult if not impossible to achieve on a regular processing schedule.
DirectQuery overcomes these limitations while also leveraging RDBMS features making query execution more efficient.
With DirectQuery:
* Data is up-to-date, and there is no extra management overhead of having to maintain a separate copy of the data (in the in-memory cache). Changes to the underlying source data can be immediately reflected in queries against the data model.
* Datasets can be larger than the memory capacity of an Analysis Services server.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You deploy a tabular data model to an instance of Microsoft SQL Server Analysis Services (SSAS). The model uses an in-memory cache to store and query data. The data set is already the same size as the available RAM on the server. Data volumes are likely to continue to increase rapidly.
Your data model contains multiple calculated tables.
The data model must begin processing each day at 2:00 and processing should be complete by 4:00 the same day. You observe that the data processing operation often does not complete before 7:00. This is adversely affecting team members.
You need to improve the performance.
Solution: Install solid-state disk drives to store the tabular data model.
Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
By default, tabular models use an in-memory cache to store and query data. When tabular models query data residing in-memory, even complex queries can be incredibly fast. However, there are some limitations to using cached data. Namely, large data sets can exceed available memory, and data freshness requirements can be difficult if not impossible to achieve on a regular processing schedule.
DirectQuery overcomes these limitations while also leveraging RDBMS features making query execution more efficient.
You need to configure the SalesAnalysis cube to correct the sales analysis by customer calculation.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Explanation:
Step 1: Open the cube editor, and open the Dimension Usage tab.
Step 2: Configure a relationship between the Customer dimension and the Sales measure group. Use Day as the granularity.
From scenario: The SalesAnalysis cube contains a fact table named CoffeeSale loaded from a table named FactSale in the data warehouse. The time granularity within the cube is 15 minutes. The cube is processed every night at 23:00. You determine that the fact table cannot be fully processed in the expected time. Users have reported slow query response times.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You have a Microsoft SQL Server Analysis Services (SSAS) multidimensional database that stores customer and order data for customers in the United States only.
The database contains the following objects:
You must create a KPI named Large Sales Target that uses the Traffic Light indicator to display status.
The KPI must contain:
You need to create the KPI.
Solution: You set the value of the Status expression to:
You need to resolve the issues that the users report.
Which processing options should you use? To answer, drag the appropriate processing option to the correct location or locations. Each processing option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Answer:
Explanation:
Box 1: Process Full
When Process Full is executed against an object that has already been processed, Analysis Services drops all data in the object, and then processes the object. This kind of processing is required when a structural change has been made to an object, for example, when an attribute hierarchy is added, deleted, or renamed.
Box 2: Process Default
Detects the process state of database objects, and performs processing necessary to deliver unprocessed or partially processed objects to a fully processed state. If you change a data binding, Process Default will do a Process Full on the affected object.
Box 3:
Not Process Update: Forces a re-read of data and an update of dimension attributes. Flexible aggregations and indexes on related partitions will be dropped.
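Process Full uses the same Process/Type pattern shown earlier; for completeness, here is a minimal sketch of a Process Default command against a cube (the object IDs are placeholders, since the answer images are not reproduced here):

<!-- Sketch: ProcessDefault brings unprocessed or partially processed objects
     to a fully processed state. Object IDs are placeholders.                -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesDW</DatabaseID>
    <CubeID>SalesAnalysis</CubeID>
  </Object>
  <Type>ProcessDefault</Type>
</Process>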
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You have a Microsoft SQL Server Analysis Services (SSAS) multidimensional database that stores customer and order data for customers in the United States only.
The database contains the following objects:
You must create a KPI named Large Sales Target that uses the Traffic Light indicator to display status.
The KPI must contain:
You need to create the KPI.
Solution: You set the value of the Status expression to:
Wide World Importers imports and sells clothing. The company has a multidimensional Microsoft SQL Server Analysis Services instance. The server has 80 gigabytes (GB) of available physical memory. The following installed services are running on the server:
* SQL Server Database Engine
* SQL Server Analysis Services (multidimensional)
The Database Engine instance has been configured with a hard memory cap of 50 GB, and this cap cannot be lowered. The SSAS instance contains the following cubes: SalesAnalysis and OrderAnalysis.
Reports that are generated based on data from the OrderAnalysis cube take more time to complete when they are generated in the afternoon each day. You examine the server and observe that it is under significant memory pressure.
Processing for all cubes must occur automatically in increments. You create one job to process the cubes and another job to process the dimensions. You must configure a processing task for each job that optimizes performance. As the cubes grow in size, the overnight processing of the cubes often does not complete during the allowed maintenance time window.
SalesAnalysis
The SalesAnalysis cube is currently being tested before being used in production. Users report that day name attribute values are sorted alphabetically. Day name attribute values must be sorted chronologically. Users report that they are unable to query the cube while any cube processing operations are in progress. You need to maximize data availability during cube processing and ensure that you process both dimensions and measures.
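One common way to meet the chronological-sort requirement, sketched here as an assumption since the scenario does not show the Date dimension's design, is to order the Day Name attribute by a related attribute that carries the day number:

<!-- Sketch: order the Day Name attribute by a related day-number attribute
     instead of alphabetically by name. Attribute IDs are placeholders.     -->
<Attribute>
  <ID>Day Name</ID>
  <Name>Day Name</Name>
  <OrderBy>AttributeKey</OrderBy>
  <OrderByAttributeID>Day Number Of Week</OrderByAttributeID>
</Attribute>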
OrderAnalysis
The OrderAnalysis cube is used for reporting and ad-hoc queries from Microsoft Excel. The data warehouse team adds a new table named Fact.Transaction to the cube. The Fact.Transaction table includes a column named Total Including Tax. You must add a new measure named Transactions Total Including Tax to the cube. The measure must be calculated as the sum of the Total Including Tax column across any selected relevant dimensions.
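A hedged sketch of how such a measure might be declared in the measure group's ASSL, assuming the requirement maps to a plain Sum aggregation over the Total Including Tax column; the table and column IDs are placeholder guesses at the bindings, not values taken from the project:

<!-- Sketch: a Sum measure bound to the Total Including Tax column.
     TableID/ColumnID bindings are placeholder guesses.             -->
<Measure>
  <ID>Transactions Total Including Tax</ID>
  <Name>Transactions Total Including Tax</Name>
  <AggregateFunction>Sum</AggregateFunction>
  <Source>
    <DataType>Currency</DataType>
    <Source xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ColumnBinding">
      <TableID>Fact_Transaction</TableID>
      <ColumnID>Total_Including_Tax</ColumnID>
    </Source>
  </Source>
</Measure>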
Finance
The Finance cube is used to analyze General Ledger entries for the company.
Requirements
You must minimize the time that it takes to process cubes while meeting the following requirements:
*The Sales cube requires overnight processing of dimensions, cubes, measure groups, and partitions.
*The OrderAnalysis cube requires overnight processing of dimensions only.
*The Finance cube requires overnight processing of dimensions only.
You need to configure the server to optimize the afternoon report generation based on the OrderAnalysis cube.
Which property should you configure?
A. LowMemoryLimit
B. VertiPaqPagingPolicy
C. TotalMemoryLimit
D. VirtualMemoryLimit
Answer: A
Explanation:
LowMemoryLimit: For multidimensional instances, a lower threshold at which the server first begins releasing memory allocated to infrequently used objects.
From scenario: Reports that are generated based on data from the OrderAnalysis cube take more time to complete when they are generated in the afternoon each day. You examine the server and observe that it is under significant memory pressure.
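For context, these memory thresholds are server properties that can be changed in SSMS (Server Properties, Advanced) or directly in msmdsrv.ini. A minimal sketch of the relevant section follows; the values shown are illustrative, and numbers at or below 100 are interpreted as a percentage of total physical memory:

<!-- Sketch: memory limit properties in msmdsrv.ini; values are illustrative. -->
<ConfigurationSettings>
  <Memory>
    <LowMemoryLimit>60</LowMemoryLimit>
    <TotalMemoryLimit>80</TotalMemoryLimit>
    <HardMemoryLimit>0</HardMemoryLimit>
  </Memory>
</ConfigurationSettings>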