Certification Provider: Microsoft
Exam Name: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
Exam Code: DP-420
Official Exam Time: 120 mins
Number of questions in the Official Exam: 40-60 Q&As
Latest update time in our database: May 31, 2023
DP-420 Official Exam Topics:
  • Topic 1: Choose a partitioning strategy based on a specific workload
  • Topic 2: Calculate and evaluate data distribution based on partition key selection / Evaluate the cost of the global distribution of data
  • Topic 3: Create a connection to a database / Enable offline development by using the Azure Cosmos DB emulator
  • Topic 4: Enable SDK logging / Implement data access by using Azure Cosmos DB for NoSQL SDKs
  • Topic 5: Implement a query operation by using a continuation token / Implement a user-defined function
  • Topic 6: Evaluate the impact of consistency model choices on availability and associated RU cost / Evaluate the impact of consistency model choices on performance and latency
  • Topic 7: Choose between Azure Synapse Link and Spark Connector / Integrate events with other applications by using Azure Functions and Azure Event Hubs
  • Topic 8: Aggregate data by using Change Feed and Azure Functions, including reporting / Implement Azure Cosmos DB integrated cache
  • Topic 9: Manage the number of change feed instances by using the change feed estimator / Implement aggregation persistence by using a change feed
  • Topic 10: Implement and query Azure Cosmos DB logs / Implement backup and restore for an Azure Cosmos DB solution
  • Topic 11: Manage data plane access to Azure Cosmos DB by using keys / Manage data plane access to Azure Cosmos DB by using Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra
  • Topic 12: Choose a data movement strategy / Move data by using client SDK bulk operations
  • Topic 13: Provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates (ARM templates)

What should you include in the query?

You have an Azure Cosmos DB Core (SQL) API account.

You configure the diagnostic settings to send all log information to a Log Analytics workspace.

You need to identify when the provisioned request units per second (RU/s) for resources within the account were modified.

You write the following query.

AzureDiagnostics

| where Category == "ControlPlaneRequests"

What should you include in the query?
A . | where OperationName startswith "AccountUpdateStart"
B . | where OperationName startswith "SqlContainersDelete"
C . | where OperationName startswith "MongoCollectionsThroughputUpdate"
D . | where OperationName startswith "SqlContainersThroughputUpdate"

Answer: A

Explanation:

The following are the operation names in diagnostic logs for different operations:

RegionAddStart, RegionAddComplete

RegionRemoveStart, RegionRemoveComplete

AccountDeleteStart, AccountDeleteComplete

RegionFailoverStart, RegionFailoverComplete

AccountCreateStart, AccountCreateComplete

*AccountUpdateStart*, AccountUpdateComplete

VirtualNetworkDeleteStart, VirtualNetworkDeleteComplete

DiagnosticLogUpdateStart, DiagnosticLogUpdateComplete

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs
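Combined with the correct option, the completed query might look like the following sketch. The projected columns are illustrative; they follow the general AzureDiagnostics schema rather than a verified column list:

```kusto
AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where OperationName startswith "AccountUpdateStart"
| project TimeGenerated, OperationName, Resource
```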

Which action can address the application updates issue without affecting the functionality of the application?

You are troubleshooting the current issues caused by the application updates.

Which action can address the application updates issue without affecting the functionality of the application?
A . Enable time to live for the con-product container.
B . Set the default consistency level of account1 to strong.
C . Set the default consistency level of account1 to bounded staleness.
D . Add a custom indexing policy to the con-product container.

Answer: C

Explanation:

Bounded staleness is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee. Bounded staleness works well for applications featuring group collaboration and sharing, stock tickers, publish-subscribe/queueing, etc.

Scenario: Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
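As a sketch, the default consistency level can be changed with the Azure CLI. The resource-group name below is a placeholder, and the staleness bounds (100,000 operations or 300 seconds) are illustrative values:

```shell
# Set the default consistency level of account1 to bounded staleness.
# rg1 is a hypothetical resource group; the staleness bounds cap how far
# reads may lag behind writes.
az cosmosdb update \
  --name account1 \
  --resource-group rg1 \
  --default-consistency-level BoundedStaleness \
  --max-staleness-prefix 100000 \
  --max-interval 300
```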

Which two actions should you perform?

You need to provide a solution for the Azure Functions notifications following updates to con-product. The solution must meet the business requirements and the product catalog requirements.

Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A . Configure the trigger for each function to use a different leaseCollectionPrefix
B . Configure the trigger for each function to use the same leaseCollectionName
C . Configure the trigger for each function to use a different leaseCollectionName
D . Configure the trigger for each function to use the same leaseCollectionPrefix

Answer: A,B

Explanation:

leaseCollectionPrefix: when set, the value is added as a prefix to the leases created in the Lease collection for this Function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes.

Scenario: Use Azure Functions to send notifications about product updates to different recipients.

Trigger the execution of two Azure functions following every update to any document in the con-product container.

Reference: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
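As a sketch, the trigger binding for one of the two functions (in its function.json) could look like the following; the database, lease container, and connection names are placeholders:

```json
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "connectionStringSetting": "CosmosDBConnection",
  "databaseName": "database1",
  "collectionName": "con-product",
  "leaseCollectionName": "leases",
  "leaseCollectionPrefix": "functionA"
}
```

The second function would use an identical binding except for "leaseCollectionPrefix": "functionB", so both functions track the change feed independently while sharing one lease container.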

What should you do?

You configure multi-region writes for account1.

You need to ensure that App1 supports the new configuration for account1. The solution must meet the business requirements and the product catalog requirements.

What should you do?
A . Set the default consistency level of account1 to bounded staleness.
B . Create a private endpoint connection.
C . Modify the connection policy of App1.
D . Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor containers.

Answer: D

Explanation:

App1 queries the con-product and con-productVendor containers.

Note: Request unit is a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB.

Scenario:

Develop an app named App1 that will run from all locations and query the data in account1. Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.

Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

What should you configure for each container?


Administrative effort must be minimized to implement the solution.

What should you configure for each container? To answer, drag the appropriate configurations to the correct containers. Each configuration may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

Answer:

Explanation:

Box 1: Last Write Wins (LWW) (default) mode

Last Write Wins (LWW): This resolution policy, by default, uses a system-defined timestamp property. It is based on the time-synchronization clock protocol.

Box 2: Merge Procedures (custom) mode

Custom: This resolution policy is designed for application-defined semantics for reconciliation of conflicts. When you set this policy on your Azure Cosmos container, you also need to register a merge stored procedure. This procedure is automatically invoked when conflicts are detected under a database transaction on the server. The system provides exactly once guarantee for the execution of a merge procedure as part of the commitment protocol.
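As an illustrative fragment of the container definition (ARM/REST shape), the custom mode registers the merge stored procedure by its path; the database, container, and procedure names here are hypothetical:

```json
{
  "conflictResolutionPolicy": {
    "mode": "Custom",
    "conflictResolutionProcedure": "dbs/database1/colls/container1/sprocs/resolveConflicts"
  }
}
```

The default mode would instead be "mode": "LastWriterWins" with a "conflictResolutionPath" such as "/_ts".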

What should you select?

You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements.

What should you select?
A . the timestamp
B . the humidity
C . the temperature
D . the device ID

Answer: D

Explanation:

The partition key is what will determine how data is routed in the various partitions by Cosmos DB and needs to make sense in the context of your specific scenario. The IoT Device ID is generally the "natural" partition key for IoT applications.

Scenario: The iotdb database will contain two containers named con-iot1 and con-iot2.

Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.

Reference: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db
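The routing idea can be illustrated with a toy hash-partitioning sketch in Python. The hash function here is purely illustrative (Cosmos DB uses its own internal hashing), but it shows why all telemetry for one device ID lands in the same logical partition:

```python
import hashlib

def partition_for(partition_key: str, physical_partitions: int = 4) -> int:
    """Toy hash routing: map a partition key value to a partition index."""
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return int(digest, 16) % physical_partitions

# Every document for one device routes identically, so per-device reads
# and writes stay single-partition and their RU cost stays predictable.
readings = [{"deviceId": "dev-42", "temp": t} for t in (20.1, 20.5, 21.0)]
partitions = {partition_for(r["deviceId"]) for r in readings}
print(len(partitions))  # 1
```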

Which three permissions should you enable in the access policy?

You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored in Azure Key Vault.

You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.

Which three permissions should you enable in the access policy? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A . Wrap Key
B . Get
C . List
D . Update
E . Sign
F . Verify
G . Unwrap Key

Answer: A,B,G

Explanation:

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk
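The three permissions map directly onto a Key Vault access-policy command; in this sketch the vault name is a placeholder and the object ID of the Azure Cosmos DB principal is left unfilled:

```shell
# Grant the Azure Cosmos DB principal the minimum key permissions
# required for customer-managed keys: get, wrapKey, unwrapKey.
az keyvault set-policy \
  --name myvault \
  --object-id <cosmos-db-principal-object-id> \
  --key-permissions get wrapKey unwrapKey
```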

Which three configuration items should you include in the solution?

Topic 2, Misc. Questions

You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a compact binary format.

Which three configuration items should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A . "connector.class":
"com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
B . "key.converter": "org.apache.kafka.connect.json.JsonConverter"
C . "key.converter": "io.confluent.connect.avro.AvroConverter"
D . "connect.cosmos.containers.topicmap": "iot#telemetry"
E . "connect.cosmos.containers.topicmap": "iot"
F . "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"

Answer: C,D,F

Explanation:

C: Avro is binary format, while JSON is text.

F: Kafka Connect for Azure Cosmos DB is a connector to read from and write data to Azure Cosmos DB. The Azure Cosmos DB sink connector allows you to export data from Apache Kafka topics to an Azure Cosmos DB database. The connector polls data from Kafka to write to containers in the database based on the topics subscription.

D: Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for the sink connector.

Extract:

"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",

"key.converter": "io.confluent.connect.avro.AvroConverter",

"connect.cosmos.containers.topicmap": "hotels#kafka"

Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink

https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
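Assembled, a sink-connector configuration matching the selected options might look like the following sketch. The endpoint, key, and database values are placeholders, and the property names follow the Kafka Connect for Azure Cosmos DB documentation:

```json
{
  "name": "cosmosdb-sink-iot",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "tasks.max": "1",
    "topics": "iot",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "connect.cosmos.connection.endpoint": "https://<account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<key>",
    "connect.cosmos.databasename": "<database>",
    "connect.cosmos.containers.topicmap": "iot#telemetry"
  }
}
```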

Which three actions should you perform in sequence?

DRAG DROP

You have an app that stores data in an Azure Cosmos DB Core (SQL) API account. The app performs queries that return large result sets.

You need to return a complete result set to the app by using pagination. Each page of results must return 80 items.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:

Step 1: Configure the MaxItemCount in QueryRequestOptions

You can specify the maximum number of items returned by a query by setting the MaxItemCount. The MaxItemCount is specified per request and tells the query engine to return that number of items or fewer.

Step 2: Run the query and provide a continuation token

In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query’s progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token.

If the query returns a continuation token, then there are additional query results.

Step 3: Append the results to a variable
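The steps above can be simulated in plain Python rather than the Cosmos DB SDK. The hypothetical run_query below stands in for the service: it returns one page of at most MaxItemCount items plus a continuation token, which is None once the result set is exhausted:

```python
PAGE_SIZE = 80  # plays the role of MaxItemCount in QueryRequestOptions

def run_query(dataset, continuation=None, max_item_count=PAGE_SIZE):
    """Return one page of results and the next continuation token."""
    start = continuation or 0
    page = dataset[start:start + max_item_count]
    next_start = start + max_item_count
    token = next_start if next_start < len(dataset) else None
    return page, token

def fetch_all(dataset):
    """Loop: run the query, append the page, repeat while a token exists."""
    results = []
    token = None
    while True:
        page, token = run_query(dataset, token)
        results.extend(page)   # append the results to a variable
        if token is None:      # no continuation token: result set complete
            return results

items = [f"item-{i}" for i in range(250)]
assert len(fetch_all(items)) == 250  # 80 + 80 + 80 + 10
```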

Which connectivity mode should you identify?

Topic 1, Litware, Inc.

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Litware, Inc. is a United States-based grocery retailer. Litware has a main office and a primary datacenter in Seattle. The company has 50 retail stores across the United States and an emerging online presence. Each store connects directly to the internet.

Existing environment. Cloud and Data Service Environments.

Litware has an Azure subscription that contains the resources shown in the following table.

Each container in productdb is configured for manual throughput.

The con-product container stores the company’s product catalog data. Each document in con-product includes a con-productvendor value. Most queries targeting the data in con-product are in the following format.

SELECT * FROM con-product p WHERE p.con-productVendor = 'name'

Most queries targeting the data in the con-productVendor container are in the following format.

SELECT * FROM con-productVendor pv

ORDER BY pv.creditRating, pv.yearFounded
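Note that a multi-property ORDER BY like the one above only succeeds if the container's indexing policy defines a matching composite index, roughly of this shape (an illustrative fragment):

```json
{
  "compositeIndexes": [
    [
      { "path": "/creditRating", "order": "ascending" },
      { "path": "/yearFounded", "order": "ascending" }
    ]
  ]
}
```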

Existing environment. Current Problems.

Litware identifies the following issues:

Updates to product categories in the con-productVendor container do not propagate automatically to documents in the con-product container.

Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.

Requirements. Planned Changes

Litware plans to implement a new Azure Cosmos DB Core (SQL) API account named account2 that will contain a database named iotdb. The iotdb database will contain two containers named con-iot1 and con-iot2.

Litware plans to make the following changes:

Store the telemetry data in account2.

Configure account1 to support multiple read-write regions.

Implement referential integrity for the con-product container.

Use Azure Functions to send notifications about product updates to different recipients.

Develop an app named App1 that will run from all locations and query the data in account1.

Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.

Requirements. Business Requirements

Litware identifies the following business requirements:

Whenever there are multiple solutions for a requirement, select the solution that provides the best performance, as long as there are no additional costs associated.

Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.

Minimize the number of firewall changes in the retail stores.

Requirements. Product Catalog Requirements

Litware identifies the following requirements for the product catalog:

Implement a custom conflict resolution policy for the product catalog data.

Minimize the frequency of errors during updates of the con-product container.

Once multi-region writes are configured, maximize the performance of App1 queries against the data in account1.

Trigger the execution of two Azure functions following every update to any document in the con-product container.

You need to identify which connectivity mode to use when implementing App2. The solution must support the planned changes and meet the business requirements.

Which connectivity mode should you identify?
A . Direct mode over HTTPS
B . Gateway mode (using HTTPS)
C . Direct mode over TCP

Answer: C

Explanation:

Scenario: Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must be limited to a single DNS endpoint when accessing account2.

By using Azure Private Link, you can connect to an Azure Cosmos account via a private endpoint. The private endpoint is a set of private IP addresses in a subnet within your virtual network.

When you’re using Private Link with an Azure Cosmos account through a direct mode connection, you can use only the TCP protocol. The HTTP protocol is not currently supported.

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-private-endpoints