Microsoft 70-765 Provisioning SQL Databases Online Training

Question #1

Topic 1, Implementing SQL in Azure

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.

You need to create the templates.

Solution: You use Visual Studio to create a XAML template that defines the deployment and configuration settings for the SQL Server environment.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

An Azure Resource Manager template consists of JSON, not XAML, and expressions that you can use to construct values for your deployment.

A good JSON editor can simplify the task of creating templates.

Note: In its simplest structure, an Azure Resource Manager template contains the following elements:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}

References: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

Question #2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.

You need to create the templates.

Solution: You create the desired SQL Server configuration in an Azure Resource Group, then export the Resource Group template and save it to the Templates Library.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

An Azure Resource Manager template consists of JSON and expressions that you can use to construct values for your deployment.

A good JSON editor, not a Resource Group template, can simplify the task of creating templates.

Note: In its simplest structure, an Azure Resource Manager template contains the following elements:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}

References: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

Question #3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.

You need to create the templates.

Solution: You use Visual Studio to create a JSON template that defines the deployment and configuration settings for the SQL Server environment.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: A

Explanation:

An Azure Resource Manager template consists of JSON, not XAML, and expressions that you can use to construct values for your deployment.

A good JSON editor can simplify the task of creating templates.

Note: In its simplest structure, an Azure Resource Manager template contains the following elements:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}

References: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

Question #4

You have a Microsoft SQL Server 2014 instance named SRV2014 that has a single tempdb database file. The tempdb database file is eight gigabytes (GB) in size.

You install a SQL Server 2016 instance named SRV2016 by using default settings. The new instance has eight logical processor cores.

You plan to migrate the databases from SRV2014 to SRV2016.

You need to configure the tempdb database on SRV2016. The solution must minimize the number of future tempdb autogrowth events.

What should you do?

  • A . Increase the size of the tempdb data file to 8 GB. In the tempdb database, set the value of the MAXDOP property to 8.
  • B . Increase the size of the tempdb data files to 1 GB.
  • C . Add seven additional tempdb data files. In the tempdb database, set the value of the MAXDOP property to 8.
  • D . Set the value for the autogrowth setting for the tempdb data file to 128 megabytes (MB). Add seven additional tempdb data files and set the autogrowth value to 128 MB.

Correct Answer: B

Explanation:

In an effort to simplify the tempdb configuration experience, SQL Server 2016 setup has been extended to configure various properties for tempdb in multi-processor environments. By default, setup creates as many tempdb data files as there are logical processors, up to eight. Because this instance already has eight data files, increasing each file to 1 GB presizes tempdb to the 8 GB the workload needs, which minimizes future autogrowth events.
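
A minimal Transact-SQL sketch of answer B (the logical file names tempdev and temp2 through temp8 are the SQL Server 2016 setup defaults; adjust them to match your instance):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 1GB);
-- Repeat for temp3 through temp8 so that all eight data files are presized to 1 GB.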

Question #7

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You have deployed several GS-series virtual machines (VMs) in Microsoft Azure. You plan to deploy Microsoft SQL Server in a development environment. Each VM has a dedicated disk for backups.

You need to back up a database to the local disk on a VM. The backup must be replicated to another region.

Which storage option should you use?

  • A . Premium P10 disk storage
  • B . Premium P20 disk storage
  • C . Premium P30 disk storage
  • D . Standard locally redundant disk storage
  • E . Standard geo-redundant disk storage
  • F . Standard zone redundant blob storage
  • G . Standard locally redundant blob storage
  • H . Standard geo-redundant blob storage

Correct Answer: E

Explanation:

Geo-redundant storage replicates data to a paired secondary region, which satisfies the requirement that the backup be replicated to another region.

Note: SQL Database automatically creates database backups and uses Azure read-access geo-redundant storage (RA-GRS) to provide geo-redundancy. These backups are created automatically and at no additional charge. You don’t need to do anything to make them happen. Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion.

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automated-backups

Question #8

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You have a virtual machine (VM) in Microsoft Azure, which has a 2 terabyte (TB) database. Microsoft SQL Server backups are performed by using Backup to URL.

You need to provision the storage account for the backups while minimizing costs.

Which storage option should you use?

  • A . Premium P10 disk storage
  • B . Premium P20 disk storage
  • C . Premium P30 disk storage
  • D . Standard locally redundant disk storage
  • E . Standard geo-redundant disk storage
  • F . Standard zone redundant blob storage
  • G . Standard locally redundant blob storage
  • H . Standard geo-redundant blob storage

Correct Answer: G

Explanation:

A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The URL is used to provide the location and name of the SQL Server backup file. The URL must point to an actual blob, not just a container. If the blob does not exist, it is created. If an existing blob is specified, BACKUP fails, unless the “WITH FORMAT” option is specified to overwrite the existing backup file in the blob.

Locally redundant storage (LRS) makes multiple synchronous copies of your data within a single datacenter.
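
A minimal backup-to-URL sketch, assuming a SQL Server credential named MyCredential for the storage account already exists (the storage account, container, and database names are placeholders):

BACKUP DATABASE MyDb
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDb.bak'
WITH CREDENTIAL = 'MyCredential', FORMAT;
-- FORMAT overwrites an existing blob of the same name; omit it to fail instead.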

Question #9

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You have deployed a GS-series virtual machine (VM) in Microsoft Azure. You plan to deploy Microsoft SQL Server.

You need to deploy a 30 megabyte (MB) database that requires 100 IOPS to be guaranteed while minimizing costs.

Which storage option should you use?

  • A . Premium P10 disk storage
  • B . Premium P20 disk storage
  • C . Premium P30 disk storage
  • D . Standard locally redundant disk storage
  • E . Standard geo-redundant disk storage
  • F . Standard zone redundant blob storage
  • G . Standard locally redundant blob storage
  • H . Standard geo-redundant blob storage

Correct Answer: A

Explanation:

Premium Storage Disks Limits

When you provision a disk against a Premium Storage account, how much input/output operations per second (IOPS) and throughput (bandwidth) it can get depends on the size of the disk. Currently, there are three types of Premium Storage disks: P10, P20, and P30.

Each one has specific limits for IOPS and throughput as specified in the following table:

Disk type: P10, P20, P30
Disk size: 128 GB, 512 GB, 1,024 GB
IOPS per disk: 500, 2,300, 5,000
Throughput per disk: 100 MB/s, 150 MB/s, 200 MB/s

References: https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage


Question #10

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You have deployed several GS-series virtual machines (VMs) in Microsoft Azure. You plan to deploy Microsoft SQL Server in a development environment.

You need to provide storage to the environment that minimizes costs.

Which storage option should you use?

  • A . Premium P10 disk storage
  • B . Premium P20 disk storage
  • C . Premium P30 disk storage
  • D . Standard locally redundant disk storage
  • E . Standard geo-redundant disk storage
  • F . Standard zone redundant blob storage
  • G . Standard locally redundant blob storage
  • H . Standard geo-redundant blob storage

Correct Answer: D

Question #11

HOTSPOT

You use Resource Manager to deploy a new Microsoft SQL Server instance in a Microsoft Azure virtual machine (VM) that uses Premium storage. The combined initial size of the SQL Server user database files is expected to be over 200 gigabytes (GB). You must maximize performance for the database files and the log file.

You add the following additional drive volumes to the VM:

You have the following requirements:

You need to deploy the SQL instance.

In the table below, identify the drive where you must store each SQL Server file type.

NOTE: Make only one selection in each column. Each correct selection is worth one point.

Correct Answer:

Explanation:

Enable read caching on the disk(s) hosting the data files and TempDB.

Do not enable caching on disk(s) hosting the log file. Host caching is not used for log files.


Question #12

DRAG DROP

You are building a new Always On Availability Group in Microsoft Azure. The corporate domain controllers (DCs) are attached to a virtual network named ProductionNetwork. The DCs are part of an availability set named ProductionServers1.

You create the first node of the availability group and add it to an availability set named ProductionServers2. The availability group node is a virtual machine (VM) that runs Microsoft SQL Server. You attach the node to ProductionNetwork.

The servers in the availability group must be directly accessible only by other company VMs in Azure.

You need to configure the second SQL Server VM for the availability group.

How should you configure the VM? To answer, drag the appropriate configuration settings to the correct target locations. Each configuration setting may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Correct Answer:

Explanation:

Box 1: ProductionNetwork

The virtual network is named ProductionNetwork.

Box 2: None / Not Assigned

As the servers in the availability group must be directly accessible only by other company VMs in Azure, there should be no Public IP address.

Box 3: ProductionServers2

You create the first node of the availability group and add it to an availability set named ProductionServers2. The availability group node is a virtual machine (VM) that runs Microsoft SQL Server.


Question #13

HOTSPOT

You plan to migrate a Microsoft SQL Server workload from an on-premises server to a Microsoft Azure virtual machine (VM). The current server contains 4 cores with an average CPU workload of 6 percent and a peak workload of 10 percent when using 2.4 GHz processors.

You gather the following metrics:

You need to design a SQL Server VM to support the migration while minimizing costs.

For each setting, which value should you use? To answer, select the appropriate storage option from each list in the answer area.

NOTE: Each correct selection is worth one point.

Correct Answer:

Explanation:

Data drive: Premium Storage

Transaction log drive: Standard Storage

TempDB drive: Premium Storage

Note: A standard disk is expected to handle 500 IOPS or 60 MB/s.

A P10 Premium disk is expected to handle 500 IOPS.

A P20 Premium disk is expected to handle 2300 IOPS.

A P30 Premium disk is expected to handle 5000 IOPS.

VM size: A3

Max data disk throughput is 8×500 IOPS

References: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes


Question #14

You plan to migrate a database to Microsoft Azure SQL Database. The database requires 500 gigabytes (GB) of storage.

The database must support 50 concurrent logins. You must minimize the cost associated with hosting the database.

You need to create the database.

Which pricing tier should you use?

  • A . Standard S3 pricing tier
  • B . Premium P2 tier
  • C . Standard S2 pricing tier
  • D . Premium P1 tier

Correct Answer: D

Explanation:

For a database size of 500 GB the Premium tier is required.

Both P1 and P2 are adequate. P1 is preferred as it is cheaper.


Question #15

Topic 2, Manage databases and instances

HOTSPOT

You need to ensure that a user named Admin2 can manage logins.

How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL segments in the answer area.

Correct Answer:

Explanation:

Step 1: CREATE LOGIN

First you need to create a login for SQL Azure; its syntax is as follows:

CREATE LOGIN username WITH PASSWORD = 'password';

Step 2: CREATE USER

Step 3: LOGIN

Users are created per database and are associated with logins. You must be connected to the database in which you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:

CREATE USER readonlyuser FROM LOGIN readonlylogin;

Step 4: loginmanager

Members of the loginmanager role can create new logins in the master database.
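
Putting the steps together, a minimal sketch for this scenario, run against the master database (the password is a placeholder):

CREATE LOGIN Admin2 WITH PASSWORD = 'P@ssw0rd123!';
CREATE USER Admin2 FROM LOGIN Admin2;
-- Membership in loginmanager lets Admin2 create and manage logins.
ALTER ROLE loginmanager ADD MEMBER Admin2;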

References:

https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-manage-logins


Question #16

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You manage a Microsoft SQL Server environment with several databases.

You need to ensure that queries use statistical data and do not initialize values for local variables.

Solution: You enable the QUERY_OPTIMIZER_HOTFIXES option for the databases.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

QUERY_OPTIMIZER_HOTFIXES = { ON | OFF | PRIMARY } enables or disables query optimization hotfixes regardless of the compatibility level of the database. This is equivalent to Trace Flag 4199.

References: https://msdn.microsoft.com/en-us/library/mt629158.aspx

Question #17

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You manage a Microsoft SQL Server environment with several databases.

You need to ensure that queries use statistical data and do not initialize values for local variables.

Solution: You enable the LEGACY_CARDINALITY_ESTIMATION option for the databases.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }

Enables you to set the query optimizer cardinality estimation model to that of SQL Server 2012 and earlier versions, independent of the compatibility level of the database. This is equivalent to Trace Flag 9481.

References: https://msdn.microsoft.com/en-us/library/mt629158.aspx

Question #18

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You manage a Microsoft SQL Server environment with several databases.

You need to ensure that queries use statistical data and do not initialize values for local variables.

Solution: You enable the PARAMETER_SNIFFING option for the databases.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: A

Explanation:

PARAMETER_SNIFFING = { ON | OFF | PRIMARY } enables or disables parameter sniffing. This is equivalent to Trace Flag 4136.

SQL Server uses a process called parameter sniffing when executing queries or stored procedures that use parameters. During compilation, the value passed into the parameter is evaluated and used to create an execution plan. That value is also stored with the execution plan in the plan cache. Future executions of the plan reuse the plan that was compiled with that reference value.
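
As a sketch, the option can be enabled per database with a single statement, run in the context of each user database:

ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = ON;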

References: https://msdn.microsoft.com/en-us/library/mt629158.aspx

Question #19

You manage a Microsoft SQL Server environment in a Microsoft Azure virtual machine.

You must enable Always Encrypted for columns in a database.

You need to configure the key store provider.

What should you do?

  • A . Manually specify the column master key.
  • B . Modify the connection string for applications.
  • C . Auto-generate a column master key.
  • D . Use theWindows certificate store.

Correct Answer: D

Explanation:

Always Encrypted supports multiple key stores for storing Always Encrypted column master keys. A column master key can be a certificate stored in the Windows Certificate Store.
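
A hedged sketch of registering a certificate from the Windows certificate store as a column master key (the key name and certificate thumbprint are placeholders):

CREATE COLUMN MASTER KEY MyCMK
WITH (
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = 'CurrentUser/My/BBF037EC4A133ADCA89FFAEC16CA5BFA8878FB94'
);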

References: https://msdn.microsoft.com/en-us/library/mt723359.aspx

Question #20

You plan to deploy 20 Microsoft Azure SQL Database instances to an elastic pool in Azure to support a batch processing application.

Two of the databases in the pool reach their peak workload threshold at the same time every day. This leads to inconsistent performance for batch completion.

You need to ensure that all batches perform consistently.

What should you do?

  • A . Create an In-Memory table.
  • B . Increase the storage limit in the pool.
  • C . Implement a readable secondary database.
  • D . Increase the total number of elastic Database Transaction Units (eDTUs) in the pool.

Correct Answer: D

Explanation:

In SQL Database, the relative measure of a database’s ability to handle resource demands is expressed in Database Transaction Units (DTUs) for single databases and elastic DTUs (eDTUs) for databases in an elastic pool.

A pool is given a set number of eDTUs, for a set price. Within the pool, individual databases are given the flexibility to auto-scale within set parameters. Under heavy load, a database can consume more eDTUs to meet demand.

Additional eDTUs can be added to an existing pool with no database downtime.

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool

Question #21

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You manage on-premises and Microsoft Azure SQL Database instances for a company. Your environment must support the Microsoft SQL Server 2012 ODBC driver.

You need to encrypt only specific columns in the database.

What should you implement?

  • A . transport-level encryption
  • B . cell-level encryption
  • C . Transparent Data Encryption
  • D . Always Encrypted
  • E . Encrypting File System
  • F . BitLocker
  • G . dynamic data masking

Correct Answer: D

Explanation:

To encrypt columns you can configure Always Encrypted.

SQL Server Management Studio (SSMS) provides a wizard that helps you easily configure Always Encrypted by setting up the column master key, column encryption key, and encrypted columns for you.

Always Encrypted allows client applications to encrypt sensitive data and never reveal the data or the encryption keys to SQL Server or Azure SQL Database. An Always Encrypted enabled driver, such as the ODBC Driver 13.1 for SQL Server, achieves this by transparently encrypting and decrypting sensitive data in the client application.

Note: The ODBC driver automatically determines which query parameters correspond to sensitive database columns (protected using Always Encrypted), and encrypts the values of those parameters before passing the data to SQL Server or Azure SQL Database. Similarly, the driver transparently decrypts data retrieved from encrypted database columns in query results.

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault#encrypt-columns-configure-always-encrypted

https://msdn.microsoft.com/en-us/library/mt637351(v=sql.110).aspx

Question #22

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

Your company has several Microsoft Azure SQL Database instances.

Data encryption should be allowed to be implemented by the client applications that access the data. Encryption keys should not be made available to the database engine.

You need to configure the database.

What should you implement?

  • A . transport-level encryption
  • B . cell-level encryption
  • C . Transparent Data Encryption
  • D . Always Encrypted
  • E . Encrypting File System
  • F . BitLocker
  • G . dynamic data masking

Correct Answer: A

Explanation:

Using encryption during transit with Azure File Shares

Azure File Storage supports HTTPS when using the REST API, but is more commonly used as an SMB file share attached to a VM.

HTTPS is a transport-level security protocol.

Question #23

Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.

You deploy Microsoft SQL Server to a virtual machine in Azure. You distribute the database files and filegroups across multiple Azure storage disks.

You must be able to manage the databases as individual entities by using SQL Server Management Studio. All data in the databases must be stored encrypted. Backups must be encrypted by using the same key as the live copy of the database.

You need to secure the data.

What should you implement?

  • A . transport-level encryption
  • B . cell-level encryption
  • C . Transparent Data Encryption
  • D . Always Encrypted
  • E . Encrypting File System
  • F . BitLocker
  • G . dynamic data masking

Correct Answer: C

Explanation:

Transparent data encryption (TDE) encrypts your databases, associated backups, and transaction log files at rest without requiring changes to your applications.

TDE encrypts the storage of an entire database by using a symmetric key called the database encryption key. In SQL Database the database encryption key is protected by a built-in server certificate. The built-in server certificate is unique for each SQL Database server.
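
For SQL Server in a VM, a minimal TDE sketch (the database, certificate, and password names are placeholders; back up the certificate after creating it):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@ssw0rd123!';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
GO
USE MyDb;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE MyDb SET ENCRYPTION ON;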

References: https://msdn.microsoft.com/en-us/library/dn948096.aspx

Question #24

DRAG DROP

You deploy a new Microsoft Azure SQL Database instance to support a variety of mobile applications and public websites. You plan to create a new security principal named User1.

The principal must have access to select all current and future objects in a database named Reporting. The activity and authentication of the database user must be limited to the Reporting database.

You need to create the new security principal.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Correct Answer:

Explanation:

Step 1, Step 2:

First you need to create a login for SQL Azure; its syntax is as follows:

CREATE LOGIN username WITH PASSWORD = 'password';

This command needs to run in the master database. Only afterwards can you run commands to create a user in the database.

Step 3:

Users are created per database and are associated with logins. You must be connected to the database in which you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:

CREATE USER readonlyuser FROM LOGIN readonlylogin;
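
A hedged end-to-end sketch for the User1 scenario (the password is a placeholder; run the first statement in master and the remaining two in the Reporting database):

CREATE LOGIN User1 WITH PASSWORD = 'P@ssw0rd123!';
CREATE USER User1 FROM LOGIN User1;
-- db_datareader grants SELECT on all current and future tables and views.
ALTER ROLE db_datareader ADD MEMBER User1;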

References: https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/


Question #25

DRAG DROP

A new Azure Active Directory security principal named ReportUser@contoso.onmicrosoft.com should have access to select all current and future objects in the Reporting database. You should not grant the principal any other permissions. You should use your Active Directory Domain Services (AD DS) account to authenticate to the Azure SQL database.

You need to create the new security principal.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Correct Answer:

Explanation:

Step 1:

To provision an Azure AD-based contained database user (other than the server administrator that owns the database), connect to the database (here the Reporting database) with an Azure AD identity (not with a SQL Server account) that has access to the database.

Step 2: CREATE USER … FROM EXTERNAL PROVIDER

To create an Azure AD-based contained database user (other than the server administrator that owns the database), connect to the database with an Azure AD identity, as a user with at least the ALTER ANY USER permission. Then use the following Transact-SQL syntax:

CREATE USER <Azure_AD_principal_name> FROM EXTERNAL PROVIDER;

Step 3:

Grant the proper reading permissions.
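
A minimal sketch of steps 2 and 3 combined, run in the Reporting database while connected with an Azure AD identity:

CREATE USER [ReportUser@contoso.onmicrosoft.com] FROM EXTERNAL PROVIDER;
-- db_datareader covers all current and future objects that support SELECT.
ALTER ROLE db_datareader ADD MEMBER [ReportUser@contoso.onmicrosoft.com];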

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-aad-authentication


Question #26

You are deploying a Microsoft SQL Server database that will support a mixed OLTP and OLAP workload. The target virtual machine has four CPUs.

You need to ensure that reports do not use all available system resources.

What should you do?

  • A . Enable Auto Close.
  • B . Increase the value for the Minimum System Memory setting.
  • C . Set MAXDOP to half the number of CPUs available.
  • D . Increase the value for the Minimum Memory per query setting.

Correct Answer: C

Explanation:

When an instance of SQL Server runs on a computer that has more than one microprocessor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution. You can use the max degree of parallelism option to limit the number of processors to use in parallel plan execution.
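
On a four-CPU machine, half of the CPUs means MAXDOP 2; a minimal sp_configure sketch:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;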

Question #27

Topic 3, Deploy and migrate applications

A company has an on-premises Microsoft SQL Server 2014 environment. The company has a main office in Seattle, and remote offices in Amsterdam and Tokyo. You plan to deploy a Microsoft Azure SQL Database instance to support a new application. You expect to have 100 users from each office.

In the past, users at remote sites reported issues when they used applications hosted at the Seattle office.

You need to optimize performance for users running reports while minimizing costs.

What should you do?

  • A . Implement an elastic pool.
  • B . Implement a standard database with readable secondaries in Asia and Europe, and then migrate the application.
  • C . Implement replication from an on-premises SQL Server database to the Azure SQL Database instance.
  • D . Deploy a database from the Premium service tier.

Correct Answer: B

Explanation:

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-geo-replication-transact-sql#add-secondary-database

Question #28

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a mission-critical application that stores data in a Microsoft SQL Server instance. The application runs several financial reports. The reports use a SQL Server-authenticated login named Reporting_User. All queries that write data to the database use Windows authentication.

Users report that the queries used to provide data for the financial reports take a long time to complete. The queries consume the majority of CPU and memory resources on the database server. As a result, read-write queries for the application also take a long time to complete.

You need to improve performance of the application while still allowing the report queries to finish.

Solution: You create a snapshot of the database. You configure all report queries to use the database snapshot.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

Use a Resource Governor instead.

References: https://msdn.microsoft.com/en-us/library/bb933866.aspx

Question #29

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a mission-critical application that stores data in a Microsoft SQL Server instance. The application runs several financial reports. The reports use a SQL Server-authenticated login named Reporting_User. All queries that write data to the database use Windows authentication.

Users report that the queries used to provide data for the financial reports take a long time to complete. The queries consume the majority of CPU and memory resources on the database server. As a result, read-write queries for the application also take a long time to complete.

You need to improve performance of the application while still allowing the report queries to finish.

Solution: You configure the Resource Governor to limit the amount of memory, CPU, and IOPS used for the pool of all queries that the Reporting_User login can run concurrently.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: A

Explanation:

SQL Server Resource Governor is a feature that you can use to manage SQL Server workload and system resource consumption. Resource Governor enables you to specify limits on the amount of CPU, physical I/O, and memory that incoming application requests can use.
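
A hedged Resource Governor sketch for this scenario (the pool and group names, and the 20 percent and 100 IOPS caps, are illustrative only):

USE master;
CREATE RESOURCE POOL ReportPool
WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20, MAX_IOPS_PER_VOLUME = 100);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- The classifier routes Reporting_User sessions into the limited group.
CREATE FUNCTION dbo.fnClassifyReporting() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'Reporting_User'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifyReporting);
ALTER RESOURCE GOVERNOR RECONFIGURE;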

References: https://msdn.microsoft.com/en-us/library/bb933866.aspx

Question #30

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a mission-critical application that stores data in a Microsoft SQL Server instance. The application runs several financial reports. The reports use a SQL Server-authenticated login named Reporting_User. All queries that write data to the database use Windows authentication.

Users report that the queries used to provide data for the financial reports take a long time to complete. The queries consume the majority of CPU and memory resources on the database server. As a result, read-write queries for the application also take a long time to complete.

You need to improve performance of the application while still allowing the report queries to finish.

Solution: You configure the Resource Governor to set the MAXDOP parameter to 0 for all queries against the database.

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

SQL Server will consider parallel execution plans for queries, index data definition language (DDL) operations, and static and keyset-driven cursor population.

You can override the max degree of parallelism value in queries by specifying the MAXDOP query hint in the query statement.

References: https://technet.microsoft.com/en-us/library/ms181007(v=sql.105).aspx

Question #31

Topic 4, Automobile Parts Case Study 1

Background

You manage the Microsoft SQL Server environment for a company that manufactures and sells automobile parts.

The environment includes the following servers: SRV1 and SRV2. SRV1 has 16 logical cores and hosts a SQL Server instance that supports a mission-critical application. The application has approximately 30,000 concurrent users and relies heavily on the use of temporary tables.

The environment also includes the following databases: DB1, DB2, and Reporting. The Reporting database is protected with Transparent Data Encryption (TDE). You plan to migrate this database to a new server. You detach the database and copy it to the new server.

You are performing tuning on a SQL Server database instance. The application which uses the database was written using an object-relational mapping (ORM) tool, which maps tables as objects within the application code. There are 30 stored procedures that are regularly used by the application.

HOTSPOT

You need to optimize SRV1.

What configuration changes should you implement? To answer, select the appropriate option from each list in the answer area.

Correct Answer:

Explanation:

From the scenario: SRV1 has 16 logical cores and hosts a SQL Server instance that supports a mission-critical application. The application has approximately 30,000 concurrent users and relies heavily on the use of temporary tables.

Box 1: Change the size of the tempdb log file.

The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size that is defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. You can avoid this overhead by increasing the sizes of the tempdb data and log file.

Box 2: Add additional tempdb files.

Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability.

However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.


Question #32

HOTSPOT

You need to resolve the identified issues.

Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.

Correct Answer:

Explanation:

From the exhibit we see:

Cost Threshold of Parallelism: 5

Optimize for Ad Hoc Workloads: false

Max Degree of Parallelism: 0 (This is the default setting, which enables the server to determine the maximum degree of parallelism. It is fine.)

Locks: 0

Query Wait: -1

Box 1: Optimize for Ad Hoc Workload

Change the Optimize for Ad Hoc Workloads setting from False to True (1).

The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single use ad hoc batches. When this option is set to 1, the Database Engine stores a small compiled plan stub in the plan cache when a batch is compiled for the first time, instead of the full compiled plan. This helps to relieve memory pressure by not allowing the plan cache to become filled with compiled plans that are not reused.
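
A minimal sketch of the change (it is an advanced option, so advanced options must be shown first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;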


Question #33

Topic 5, Contoso, Ltd Case Study 2

Background

You are the database administrator for Contoso, Ltd. The company has 200 offices around the world. The company has corporate executives that are located in offices in London, New York, Toronto, Sydney, and Tokyo.

Contoso, Ltd. has a Microsoft Azure SQL Database environment. You plan to deploy a new Azure SQL Database to support a variety of mobile applications and public websites.

The company is deploying a multi-tenant environment. The environment will host Azure SQL Database instances. The company plans to make the instances available to internal departments and partner companies. Contoso is in the final stages of setting up networking and communications for the environment.

Existing Contoso and Customer instances need to be migrated to Azure virtual machines (VMs) according to the following requirements:

The company plans to deploy a new order entry application and a new business intelligence and analysis application. Each application will be supported by a new database. Contoso creates a new Azure SQL database named Reporting. The database will be used to support the company’s financial reporting requirements. You associate the database with the Contoso Azure Active Directory domain.

Each location database for the data entry application may have an unpredictable amount of activity. Data must be replicated to secondary databases in Azure datacenters in different regions.

To support the application, you need to create a database named contosodb1 in the existing environment.

Objects

Database

The contosodb1 database must support the following requirements:

Application

For the business intelligence application, corporate executives must be able to view all data in near real-time with low network latency.

Contoso has the following security, networking, and communications requirements:

HOTSPOT

You need to configure the data entry and business intelligence databases.

In the table below, identify the option that you must use for each database. NOTE: Make only one selection in each column.

Correct Answer:

Explanation:

Data Entry: Geo-replicated database only

From Contoso scenario: Each location database for the data entry application may have an unpredictable amount of activity. Data must be replicated to secondary databases in Azure datacenters in different regions.

Business intelligence: Elastic database pools only

From Contoso scenario: For the business intelligence application, corporate executives must be able to view all data in near real-time with low network latency.

SQL DB elastic pools provide a simple, cost-effective solution to manage the performance goals for multiple databases that have widely varying and unpredictable usage patterns.

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool


Question #34

HOTSPOT

You need to create the contosodb1 database.

How should you complete the Azure PowerShell command? To answer, select the appropriate Azure PowerShell segments in the answer area.

Correct Answer:

Explanation:

Box 1: New-AzureRmSqlDatabase

New-AzureRmSqlDatabase creates a database or an elastic database.

New-AzureRmSqlDatabase is a cmdlet in the Azure Resource Manager (AzureRM) module. Azure Resource Manager enables you to work with the resources in your solution as a group.


Question #35

Topic 6, SQL Server Reporting

Background

You manage a Microsoft SQL Server environment that includes the following databases: DB1, DB2, Reporting.

The environment also includes SQL Reporting Services (SSRS) and SQL Server Analysis Services (SSAS). All SSRS and SSAS servers use named instances. You configure a firewall rule for SSAS.

Databases

Database Name:

DB1

Notes:

This database was migrated from SQL Server 2012 to SQL Server 2016. Thousands of records are inserted into DB1 or updated each second. Inserts are made by many different external applications that your company’s developers do not control. You observe that transaction log write latency is a bottleneck in performance. Because of the transient nature of all the data in this database, the business can tolerate some data loss in the event of a server shutdown.

Database Name:

DB2

Notes:

This database was migrated from SQL Server 2012 to SQL Server 2016. Thousands of records are updated or inserted per second. You observe that the WRITELOG wait type is the highest aggregated wait type. Most writes must have no tolerance for data loss in the event of a server shutdown. The business has identified certain write queries where data loss is tolerable in the event of a server shutdown.

Database Name:

Reporting

Notes:

You create a SQL Server-authenticated login named BIAppUser on the SQL Server instance to support users of the Reporting database. The BIAppUser login is not a member of the sysadmin role.

You plan to configure performance-monitoring alerts for this instance by using SQL Agent Alerts.

DRAG DROP

You create a login named BIAppUser. The login must be able to access the Reporting database.

You need to grant access to the BIAppUser login in the database.

How should you complete the Transact-SQL statements? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Correct Answer:

Explanation:

Box 1: Reporting

The user is to be created in the Reporting database.

Box 2: CREATE USER

Box 3: FOR LOGIN [BIAppUser]

Users are created per database and are associated with logins. You must be connected to the database in which you want to create the user. Here is some sample Transact-SQL that creates a user:

CREATE USER readonlyuser FROM LOGIN readonlylogin;

References: https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/


Question #36

HOTSPOT

You need to open the firewall ports for use with SQL Server environment.

In table below, identify the firewall port that you must use for each service. NOTE: Make only one selection in each column.

Correct Answer:

Explanation:

Report Server: 80

By default, the report server listens for HTTP requests on port 80. For the named SSAS instance, clients connect through the SQL Server Browser service, which listens on TCP port 2382.


Question #37

HOTSPOT

You need to maximize performance of writes to each database without requiring changes to existing database tables.

In the table below, identify the database setting that you must configure for each database.

NOTE: Make only one selection in each column. Each correct selection is worth one point.

Correct Answer:

Explanation:

DB1: DELAYED_DURABILITY=FORCED

From scenario: Thousands of records are inserted into DB1 or updated each second. Inserts are made by many different external applications that your company’s developers do not control. You observe that transaction log write latency is a bottleneck in performance. Because of the transient nature of all the data in this database, the business can tolerate some data loss in the event of a server shutdown.

With the DELAYED_DURABILITY=FORCED setting, every transaction that commits on the database is delayed durable.

With the DELAYED_DURABILITY= ALLOWED setting, each transaction’s durability is determined at the transaction level.

Note: Delayed transaction durability reduces both latency and contention within the system because:

* The transaction commit processing does not wait for log IO to finish and return control to the client.

* Concurrent transactions are less likely to contend for log IO; instead, the log buffer can be flushed to disk in larger chunks, reducing contention, and increasing throughput.
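
For DB1, where some data loss is tolerable, a one-line sketch:

ALTER DATABASE DB1 SET DELAYED_DURABILITY = FORCED;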

DB2: ALLOW_SNAPSHOT_ISOLATION ON and READ_COMMITTED_SNAPSHOT ON

Snapshot isolation enhances concurrency for OLTP applications.

Snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option before it is used in transactions.

The following statements activate snapshot isolation and replace the default READ COMMITTED behavior with SNAPSHOT:

ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

Setting the READ_COMMITTED_SNAPSHOT ON option allows access to versioned rows under the default READ COMMITTED isolation level.

From scenario: The DB2 database was migrated from SQL Server 2012 to SQL Server 2016. Thousands of records are updated or inserted per second. You observe that the WRITELOG wait type is the highest aggregated wait type. Most writes must have no tolerance for data loss in the event of a server shutdown. The business has identified certain write queries where data loss is tolerable in the event of a server shutdown.

References:

https://msdn.microsoft.com/en-us/library/dn449490.aspx

https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx


Question #38

HOTSPOT

You need to set up the service accounts that the database engine and SQL Server Agent services will use.

How should you design the solution? To answer, select the appropriate configuration options in the answer area.

Correct Answer:

Explanation:

Box 1: Domain Account

The service startup account defines the Microsoft Windows account in which SQL Server Agent runs and its network permissions. SQL Server Agent runs as a specified user account.

You select an account for the SQL Server Agent service by using SQL Server Configuration Manager, where you can choose from the following options:

* Built-in account. You can choose from a list of the following built-in Windows service accounts: Local System account.

* This account. Lets you specify the Windows domain account in which the SQL Server Agent service runs.

Box 2: Domain users

Microsoft recommends choosing a Windows user account that is not a member of the Windows Administrators group.

Box 3: Managed Service Accounts

When resources external to the SQL Server computer are needed, Microsoft recommends using a Managed Service Account (MSA), configured with the minimum privileges necessary.

Note: A Managed Service Account (MSA) can run services on a computer in a secure and easy to maintain manner, while maintaining the capability to connect to network resources as a specific user principal.

References: https://msdn.microsoft.com/en-us/library/ms191543.aspx


Question #39

Topic 7, Mix Questions Set

You administer a Microsoft SQL Server 2014 server. One of the databases on the server supports a highly active OLTP application.

Users report abnormally long wait times when they submit data into the application.

You need to identify which queries are taking longer than 1 second to run over an extended period of time.

What should you do?

  • A . Use SQL Profiler to trace all queries that are processing on the server. Filter queries that have a Duration value of more than 1,000.
  • B . Use sp_configure to set a value for blocked process threshold. Create an extended event session.
  • C . Use the Job Activity Monitor to review all processes that are actively running. Review the Job History to find out the duration of each step.
  • D . Run the sp_who command from a query window.
  • E . Run the DBCC TRACEON 1222 command from a query window and review the SQL Server event log.

Correct Answer: A

Question #40

You administer a Microsoft SQL Server 2014 database.

You need to ensure that the size of the transaction log file does not exceed 2 GB.

What should you do?

  • A . Execute sp_configure 'max log size', 2G.
  • B . Use the ALTER DATABASE…SET LOGFILE command along with the maxsize parameter.
  • C . In SQL Server Management Studio, right-click the instance and select Database Settings. Set the maximum size of the file for the transaction log.
  • D . In SQL Server Management Studio, right-click the database, select Properties, and then click Files. Open the Transaction log Autogrowth window and set the maximum size of the file.

Correct Answer: B

Explanation:

You can use the ALTER DATABASE (Transact-SQL) statement to manage the growth of a transaction log file.

To control the maximum size of a log file in KB, MB, GB, and TB units, or to set growth to UNLIMITED, use the MAXSIZE option.

However, there is no SET LOGFILE subcommand; the limit is applied by modifying the log file.
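
As a hedged sketch, the documented way to cap the log at 2 GB uses MODIFY FILE with MAXSIZE (the database and logical log file names are placeholders):

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, MAXSIZE = 2GB);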

References: https://technet.microsoft.com/en-us/library/ms365418(v=sql.110).aspx#ControlGrowth

Question #41

You administer a Microsoft SQL Server 2014 server. The MSSQLSERVER service uses a domain account named CONTOSO\SQLService.

You plan to configure Instant File Initialization.

You need to ensure that Data File Autogrow operations use Instant File Initialization.

What should you do? Choose all that apply.

  • A . Restart the SQL Server Agent Service.
  • B . Disable snapshot isolation.
  • C . Restart the SQL Server Service.
  • D . Add the CONTOSO\SQLService account to the Perform Volume Maintenance Tasks local security policy.
  • E . Add the CONTOSO\SQLService account to the Server Operators fixed server role.
  • F . Enable snapshot isolation.

Correct Answer: C, D

Explanation:

To enable Instant File Initialization, grant the SQL Server service account (CONTOSO\SQLService) the Perform Volume Maintenance Tasks user right in the Local Security Policy, and then restart the SQL Server service so the change takes effect.

References:

http://msdn.microsoft.com/en-us/library/ms175935.aspx

Question #42

You administer a Microsoft SQL Server 2014 failover cluster that contains two nodes named Node A and Node B. A single instance of SQL Server is installed on the cluster.

An additional node named Node C has been added to the existing cluster.

You need to ensure that the SQL Server instance can use all nodes of the cluster.

What should you do?

  • A . Run the New SQL Server stand-alone installation Wizard on Node C.
  • B . Run the Add Node to SQL Server Failover Cluster Wizard on Node C.
  • C . Use Node B to install SQL Server on Node C.
  • D . Use Node A to install SQL Server on Node C.

Correct Answer: B

Explanation:

To add a node to an existing SQL Server failover cluster, you must run SQL Server Setup on the node that is to be added to the SQL Server failover cluster instance. Do not run Setup on the active node.

The Installation Wizard will launch the SQL Server Installation Center. To add a node to an existing failover cluster instance, click Installation in the left-hand pane. Then, select Add node to a SQL Server failover cluster.

References: http://technet.microsoft.com/en-us/library/ms191545.aspx

Question #43

You administer a Microsoft SQL Server 2014 database.

The database contains a Product table created by using the following definition:

You need to ensure that the minimum amount of disk space is used to store the data in the Product table.

What should you do?

  • A . Convert all indexes to Column Store indexes.
  • B . Implement Unicode Compression.
  • C . Implement row-level compression.
  • D . Implement page-level compression.

Correct Answer: D

Question #44

You administer a Microsoft SQL Server 2014 instance. After a routine shutdown, the drive that contains tempdb fails.

You need to be able to start the SQL Server.

What should you do?

  • A . Modify tempdb location in startup parameters.
  • B . Start SQL Server in minimal configuration mode.
  • C . Start SQL Server in single-user mode.
  • D . Configure SQL Server to bypass Windows application logging.

Correct Answer: B

Explanation:

If you have configuration problems that prevent the server from starting, you can start an instance of Microsoft SQL Server by using the minimal configuration startup option.

When you start an instance of SQL Server in minimal configuration mode, note the following:

Only a single user can connect, and the CHECKPOINT process is not executed.

Remote access and read-ahead are disabled.

Startup stored procedures do not run.

tempdb is configured at the smallest possible size.

References: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/start-sql-server-with-minimal-configuration

Question #45

You administer a single server that contains a Microsoft SQL Server 2014 default instance. You plan to install a new application that requires the deployment of a database on the server. The application login requires sysadmin permissions.

You need to ensure that the application login is unable to access other production databases.

What should you do?

  • A . Use the SQL Server default instance and configure an affinity mask.
  • B . Install a new named SQL Server instance on the server.
  • C . Use the SQL Server default instance and enable Contained Databases.
  • D . Install a new default SQL Server instance on the server.

Correct Answer: B

Explanation:

A login with sysadmin rights has full control of every database on the instance where it is defined, so the only way to keep it away from the existing production databases is to host the new application's database in a separate named instance.

References: https://docs.microsoft.com/en-us/sql/sql-server/install/work-with-multiple-versions-and-instances-of-sql-server

Question #46

You administer a Microsoft SQL Server 2014 Enterprise Edition server that uses 64 cores.

You discover performance issues when large amounts of data are written to tables under heavy system load.

You need to limit the number of cores that handle I/O.

What should you configure?

  • A . Processor affinity
  • B . Lightweight pooling
  • C . Max worker threads
  • D . I/O affinity

Correct Answer: D

Explanation:

The affinity I/O mask server configuration option binds SQL Server disk I/O to a specified subset of CPUs.

To carry out multitasking, Microsoft Windows 2000 and Windows Server 2003 sometimes move process threads among different processors. Although efficient from an operating system point of view, this activity can reduce Microsoft SQL Server performance under heavy system loads, as each processor cache is repeatedly reloaded with data. Assigning processors to specific threads can improve performance under these conditions by eliminating processor reloads; such an association between a thread and a processor is called processor affinity.
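
A hedged sketch of restricting I/O to specific cores (the mask value 3, covering CPUs 0 and 1, is illustrative; on servers with more than 32 processors the companion affinity64 I/O mask option covers the remaining CPUs):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Bitmask 0x3 binds disk I/O to CPUs 0 and 1; avoid overlapping it with the
-- affinity mask used for query processing.
EXEC sp_configure 'affinity I/O mask', 3;
RECONFIGURE;
-- The instance must be restarted before affinity I/O mask changes take effect.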

References:

http://msdn.microsoft.com/en-us/library/ms189629.aspx

Question #47

You administer a Microsoft SQL Server 2014 instance that contains a financial database hosted on a storage area network (SAN).

The financial database has the following characteristics:

The database is continually modified by users during business hours from Monday through Friday between 09:00 hours and 17:00 hours. Five percent of the existing data is modified each day.

The Finance department loads large CSV files into a number of tables each business day at 11:15 hours and 15:15 hours by using the BCP or BULK INSERT commands. Each data load adds 3 GB of data to the database.

These data load operations must occur in the minimum amount of time.

A full database backup is performed every Sunday at 10:00 hours. Backup operations will be performed every two hours (11:00, 13:00, 15:00, and 17:00) during business hours.

You need to ensure that your backup will continue if any invalid checksum is encountered.

Which backup option should you use?

  • A . STANDBY
  • B . Differential
  • C . FULL
  • D . CHECKSUM
  • E . BULK_LOGGED
  • F . CONTINUE_AFTER_ERROR
  • G . SIMPLE
  • H . DBO_ONLY
  • I . COPY_ONLY
  • J . SKIP
  • K . RESTART
  • L . Transaction log
  • M . NO_CHECKSUM
  • N . NORECOVERY

Correct Answer: F

Explanation:

The CONTINUE_AFTER_ERROR option, of the Transact-SQL BACKUP command, instructs BACKUP to continue despite encountering errors such as invalid checksums or torn pages.
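
A minimal sketch (the FinancialDB name and backup path are illustrative); pairing CHECKSUM with CONTINUE_AFTER_ERROR validates page checksums during the backup but keeps going when an invalid one is found:

BACKUP DATABASE FinancialDB
TO DISK = N'F:\Backups\FinancialDB.bak'
WITH CHECKSUM, CONTINUE_AFTER_ERROR;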

References:

https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql

Question #48

You administer a Microsoft SQL Server 2014 instance that contains a financial database hosted on a storage area network (SAN).

The financial database has the following characteristics:

The database is continually modified by users during business hours from Monday through Friday between 09:00 hours and 17:00 hours. Five percent of the existing data is modified each day.

The Finance department loads large CSV files into a number of tables each business day at 11:15 hours and 15:15 hours by using the BCP or BULK INSERT commands. Each data load adds 3 GB of data to the database.

These data load operations must occur in the minimum amount of time.

A full database backup is performed every Sunday at 10:00 hours. Backup operations will be performed every two hours (11:00, 13:00, 15:00, and 17:00) during business hours.

On Wednesday at 10:00 hours, the development team requests you to refresh the database on a development server by using the most recent version.

You need to perform a full database backup that will be restored on the development server.

Which backup option should you use?

  • A . NORECOVERY
  • B . FULL
  • C . NO_CHECKSUM
  • D . CHECKSUM
  • E . Differential
  • F . BULK_LOGGED
  • G . STANDBY
  • H . RESTART
  • I . SKIP
  • J . Transaction log
  • K . DBO_ONLY
  • L . COPY_ONLY
  • M . SIMPLE
  • N . CONTINUE_AFTER_ERROR

Correct Answer: L

Explanation:

COPY_ONLY specifies that the backup is a copy-only backup, which does not affect the normal sequence of backups. A copy-only backup is created independently of your regularly scheduled, conventional backups. A copy-only backup does not affect your overall backup and restore procedures for the database.
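
A minimal sketch (names and path are illustrative); the copy-only backup can be restored on the development server without disturbing Sunday's full backup chain:

BACKUP DATABASE FinancialDB
TO DISK = N'F:\Backups\FinancialDB_DevRefresh.bak'
WITH COPY_ONLY;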

References:

https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql

Question #49

You administer a Microsoft SQL Server 2014 instance that contains a financial database hosted on a storage area network (SAN).

The financial database has the following characteristics:

The database is continually modified by users during business hours from Monday through Friday between 09:00 hours and 17:00 hours. Five percent of the existing data is modified each day.

The Finance department loads large CSV files into a number of tables each business day at 11:15 hours and 15:15 hours by using the BCP or BULK INSERT commands. Each data load adds 3 GB of data to the database.

These data load operations must occur in the minimum amount of time.

A full database backup is performed every Sunday at 10:00 hours. Backup operations will be performed every two hours (11:00, 13:00, 15:00, and 17:00) during business hours.

You need to ensure that the minimum amount of data is lost.

Which recovery model should the database use?

  • A . NORECOVERY
  • B . FULL
  • C . NO_CHECKSUM
  • D . CHECKSUM
  • E . Differential
  • F . BULK_LOGGED
  • G . STANDBY
  • H . RESTART
  • I . SKIP
  • J . Transaction log
  • K . DBO_ONLY
  • L . COPY_ONLY
  • M . SIMPLE
  • N . CONTINUE_AFTER_ERROR

Correct Answer: B

Explanation:

The full recovery model requires log backups. With it, no work is lost due to a lost or damaged data file, and the database can be recovered to any specific point in time, provided that backups are complete up to that point.
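
A sketch of switching the database to the full recovery model (FinancialDB and the backup path are illustrative); note that the log chain only begins with the next full backup taken after the change:

ALTER DATABASE FinancialDB SET RECOVERY FULL;
-- A full backup after the switch anchors the log chain for point-in-time recovery.
BACKUP DATABASE FinancialDB TO DISK = N'F:\Backups\FinancialDB_full.bak';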

Question #50

You administer a Microsoft SQL Server 2014 instance that contains a financial database hosted on a storage area network (SAN).

The financial database has the following characteristics:

The database is continually modified by users during business hours from Monday through Friday between 09:00 hours and 17:00 hours. Five percent of the existing data is modified each day.

The Finance department loads large CSV files into a number of tables each business day at 11:15 hours and 15:15 hours by using the BCP or BULK INSERT commands. Each data load adds 3 GB of data to the database.

These data load operations must occur in the minimum amount of time.

A full database backup is performed every Sunday at 10:00 hours. Backup operations will be performed every two hours (11:00, 13:00, 15:00, and 17:00) during business hours.

You need to ensure that the backup size is as small as possible.

Which backup should you perform every two hours?

  • A . NORECOVERY
  • B . FULL
  • C . NO_CHECKSUM
  • D . CHECKSUM
  • E . Differential
  • F . BULK_LOGGED
  • G . STANDBY
  • H . RESTART
  • I . SKIP
  • J . Transaction log
  • K . DBO_ONLY
  • L . COPY_ONLY
  • M . SIMPLE
  • N . CONTINUE_AFTER_ERROR

Correct Answer: J

Explanation:

A transaction log backup contains only the log records generated since the previous log backup, so with just five percent of the data changing per day it is far smaller than a full or differential backup. Minimally, you must have created at least one full backup before you can create any log backups. After that, the transaction log can be backed up at any time unless the log is already being backed up.
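
A sketch of the two-hourly job (the FinancialDB name and path are illustrative):

BACKUP LOG FinancialDB
TO DISK = N'F:\Backups\FinancialDB_log.trn';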

References: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/transaction-log-backups-sql-server

Question #51

You administer a Microsoft SQL Server 2014 instance named SQL2012 that hosts an OLTP database of 1 terabyte in size.

The database is modified by users only from Monday through Friday from 09:00 hours to 17:00 hours. Users modify more than 30 percent of the data in the database during the week.

Backups are performed as shown in the following schedule:

The Finance department plans to execute a batch process every Saturday at 09:00 hours. This batch process will take a maximum of 8 hours to complete.

The batch process will update three tables that are 10 GB in size. The batch process will update these tables multiple times.

When the batch process completes, the Finance department runs a report to find out whether the batch process has completed correctly.

You need to ensure that if the Finance department disapproves the batch process, the batch operation can be rolled back in the minimum amount of time.

What should you do on Saturday?

  • A . Perform a differential backup at 08:59 hours.
  • B . Record the LSN of the transaction log at 08:59 hours. Perform a transaction log backup at 17:01 hours.
  • C . Create a database snapshot at 08:59 hours.
  • D . Record the LSN of the transaction log at 08:59 hours. Perform a transaction log backup at 08:59 hours.
  • E . Create a marked transaction in the transaction log at 08:59 hours. Perform a transaction log backup at 17:01 hours.
  • F . Create a marked transaction in the transaction log at 08:59 hours. Perform a transaction log backup at 08:59 hours.

Correct Answer: C

Explanation:
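
Reverting to a database snapshot taken just before the batch starts is the fastest way to undo it, because only the pages modified since the snapshot was created are copied back from the snapshot's sparse file. A minimal sketch, assuming a single data file with the logical name FinancialDB_Data and illustrative database and path names:

-- 08:59 hours: capture the pre-batch state.
CREATE DATABASE FinancialDB_0859
ON (NAME = FinancialDB_Data, FILENAME = N'F:\Snapshots\FinancialDB_0859.ss')
AS SNAPSHOT OF FinancialDB;

-- If the Finance department rejects the results, revert in place
-- (any other snapshots of the database must be dropped first).
RESTORE DATABASE FinancialDB
FROM DATABASE_SNAPSHOT = 'FinancialDB_0859';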

References: https://docs.microsoft.com/en-us/sql/relational-databases/databases/database-snapshots-sql-server

Question #52

You administer a Microsoft SQL Server 2014 instance.

The instance contains a database that supports a retail sales application. The application generates hundreds of transactions per second and is online 24 hours per day and 7 days per week.

You plan to define a backup strategy for the database. You need to ensure that the following requirements are met:

No more than 5 minutes' worth of transactions are lost. Data can be recovered by using the minimum amount of administrative effort.

What should you do? Choose all that apply.

  • A . Configure the database to use the SIMPLE recovery model.
  • B . Create a DIFFERENTIAL database backup every 4 hours.
  • C . Create a LOG backup every 5 minutes.
  • D . Configure the database to use the FULL recovery model.
  • E . Create a FULL database backup every 24 hours.
  • F . Create a DIFFERENTIAL database backup every 24 hours.

Correct Answer: B,C,D,E

Explanation:

The full recovery model uses log backups to prevent data loss in the broadest range of failure scenarios; backing up and restoring the transaction log (log backups) is required. The advantage of log backups is that they let you restore a database to any point in time that is contained within a log backup (point-in-time recovery). You can use a series of log backups to roll a database forward to any point in time that is contained in one of the log backups. To minimize restore time, supplement each full backup with a series of differential backups of the same data.
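
A sketch of the resulting schedule (the RetailDB name and backup paths are illustrative; in practice each statement would run from its own SQL Server Agent job):

ALTER DATABASE RetailDB SET RECOVERY FULL;                           -- D
BACKUP DATABASE RetailDB TO DISK = N'R:\Backups\RetailDB_full.bak';  -- E: every 24 hours
BACKUP DATABASE RetailDB TO DISK = N'R:\Backups\RetailDB_diff.bak'
    WITH DIFFERENTIAL;                                               -- B: every 4 hours
BACKUP LOG RetailDB TO DISK = N'R:\Backups\RetailDB_log.trn';        -- C: every 5 minutes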

References: https://technet.microsoft.com/en-us/library/ms190217(v=sql.105).aspx
