How should the AI practitioner prevent responses based on confidential data?

An AI practitioner trained a custom model on Amazon Bedrock by using a training dataset that contains confidential data. The AI practitioner wants to ensure that the custom model does not generate inference responses based on confidential data.

How should the AI practitioner prevent responses based on confidential data?
A. Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.
B. Mask the confidential data in the inference responses by using dynamic data masking.
C. Encrypt the confidential data in the inference responses by using Amazon SageMaker.
D. Encrypt the confidential data in the custom model by using AWS Key Management Service (AWS KMS).

Answer: A

Explanation:

A: Delete the custom model. Remove the confidential data from the training dataset. Retrain the custom model.

If the training dataset contains confidential data, the model may inadvertently learn that data and reproduce it in inference responses. The only way to ensure the model cannot generate responses based on the confidential data is to remove the confidential data from the training dataset and retrain the custom model on the cleaned dataset. Masking or encrypting data at inference time (options B, C, and D) does not help, because the confidential information has already been learned into the model's weights; only retraining on a clean dataset removes its influence.
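The remediation step of removing confidential data before retraining can be sketched in code. The snippet below is a minimal illustration, not an official AWS workflow: the regex patterns and the `prompt`/`completion` record shape are assumptions (Bedrock fine-tuning datasets commonly use that JSONL shape), and a real project would use a dedicated PII scanner such as Amazon Comprehend PII detection rather than hand-written patterns.

```python
import re

# Hypothetical patterns for confidential data. In practice, use a proper
# PII/secrets detection service instead of ad hoc regexes.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-style numbers
    re.compile(r"\b\d{16}\b"),             # 16-digit card-style numbers
]

def is_confidential(text: str) -> bool:
    """Return True if the text matches any confidential-data pattern."""
    return any(p.search(text) for p in CONFIDENTIAL_PATTERNS)

def filter_training_records(records):
    """Drop records whose prompt or completion contains confidential data.

    Each record is assumed to follow the common fine-tuning JSONL shape:
    {"prompt": "...", "completion": "..."}.
    """
    return [
        r for r in records
        if not is_confidential(r.get("prompt", ""))
        and not is_confidential(r.get("completion", ""))
    ]
```

After producing the cleaned dataset, the practitioner would delete the existing custom model (for example, with the boto3 Bedrock client's `delete_custom_model` call) and start a new customization job against the cleaned data, so that no model trained on the confidential records remains in use.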

