A medical record filing system for a government medical fund is using an Amazon S3 bucket to archive documents related to patients. Every patient visit to a physician creates a new file, which can add up to millions of files each month. Collection of these files from each physician is handled via a batch process that runs every night using AWS Data Pipeline. This is sensitive data, so the data and any associated metadata must be encrypted at rest.

Auditors review some files on a quarterly basis to see whether the records are maintained according to regulations. Auditors must be able to locate any file in the S3 bucket for a given date, patient, or physician. Auditors currently spend a significant amount of time locating such files.

What is the most cost- and time-efficient collection methodology in this situation?
A. Use Amazon Kinesis to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
B. Use Amazon API Gateway to get the data feeds directly from physicians, batch them using a Spark application on Amazon Elastic MapReduce (EMR), and then store them in Amazon S3 with folders separated per physician.
C. Use Amazon S3 event notifications to populate an Amazon DynamoDB table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.
D. Use Amazon S3 event notifications to populate an Amazon Redshift table with metadata about every file loaded to Amazon S3, and partition them based on the month and year of the file.

Answer: C
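Answer C keeps the existing nightly Data Pipeline collection unchanged and adds a metadata index: each file landing in S3 fires an event notification that records the file's attributes in DynamoDB, so auditors can look up files by date, patient, or physician instead of scanning the bucket. The sketch below shows only the pure metadata-extraction and partition-key logic such a notification handler might use; the S3 key layout ("physician-id/patient-id/YYYY-MM-DD/file-id") is a hypothetical assumption, and the actual Lambda/DynamoDB wiring is omitted.

```python
# Minimal sketch of the metadata-indexing approach in answer C.
# Assumes a HYPOTHETICAL S3 key layout: "physician-id/patient-id/YYYY-MM-DD/file-id".
# In practice an S3 event notification would invoke a handler that writes
# one DynamoDB item per uploaded file using attributes like these.

def extract_metadata(s3_key: str) -> dict:
    """Derive DynamoDB item attributes from an S3 object key."""
    physician_id, patient_id, visit_date, file_id = s3_key.split("/")
    year, month, _day = visit_date.split("-")
    return {
        # Partitioning on month and year lets quarterly audits query
        # a single partition instead of scanning the whole table.
        "partition_key": f"{year}-{month}",
        "physician_id": physician_id,
        "patient_id": patient_id,
        "visit_date": visit_date,
        "s3_key": s3_key,
    }


# Example: a file uploaded on 2023-04-17 lands in the "2023-04" partition.
item = extract_metadata("dr-123/pt-456/2023-04-17/visit-001.pdf")
print(item["partition_key"])  # -> 2023-04
```

Auditors then issue DynamoDB queries on the month-year partition key (optionally filtered by patient or physician attributes), which is both cheaper and faster than listing millions of S3 objects.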
