How should an organization use the Einstein Trust Layer to audit, track, and view masked data?
A. Utilize the audit trail that captures and stores all LLM-submitted prompts in Data Cloud.
B. In Setup, use Prompt Builder to send a prompt to the LLM requesting the masked data.
C. Access the audit trail in Setup and export all user-generated prompts.
Answer: A
Explanation:
The Einstein Trust Layer is designed to ensure transparency, compliance, and security for organizations leveraging Salesforce’s AI and generative AI capabilities. Specifically, for auditing, tracking, and viewing masked data, organizations can utilize:
Audit Trail in Data Cloud: The audit trail captures and stores all prompts submitted to large language models (LLMs), so interactions involving sensitive or masked data are logged. This lets organizations monitor and audit AI-generated outputs and confirm that data handling complies with internal and regulatory guidelines. Data Cloud provides the infrastructure for storing and accessing this audit data.
Why not B? Prompt Builder in Setup is for creating and managing prompt templates, not for auditing or tracking data. It does not interact with the audit trail functionality, and sending a prompt to the LLM would not retrieve masked data.
Why not C? Although some audit-related settings are accessible in Setup, user-generated prompts are tracked in Data Cloud, which provides broader control, auditing, and analysis capabilities. Setup is not the primary tool for exporting or managing these audit logs.
More information on auditing AI interactions can be found in the Salesforce Einstein Trust Layer documentation, which outlines how organizations can manage and track generative AI interactions securely.