A company has developed several AWS Glue jobs that validate and transform its data from Amazon S3 and load it into Amazon RDS for MySQL in daily batches. The ETL jobs read the S3 data using a DynamicFrame. The ETL developers are having difficulty processing only the incremental data on each run, because the AWS Glue jobs reprocess all of the S3 input data every time they run.

Which approach would allow the developers to solve the issue with minimal coding effort?
A. Have the ETL jobs read the data from Amazon S3 using a DataFrame.
B. Enable job bookmarks on the AWS Glue jobs.
C. Create custom logic in the ETL jobs to track the processed S3 objects.
D. Have the ETL jobs delete the processed objects or data from Amazon S3 after each run.

Answer: B
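
Explanation: Job bookmarks are AWS Glue's built-in mechanism for persisting state between job runs, so each run processes only data it has not seen before. For file-based sources such as Amazon S3, Glue records which objects have already been processed. Options C and D could achieve incremental processing but require custom tracking code or destructive deletes, and option A (switching to a DataFrame) does not by itself provide incremental processing, so B is the answer with minimal coding effort.

Below is a minimal sketch of a bookmark-aware Glue ETL script. It runs only inside the AWS Glue runtime (the `awsglue` libraries are not available locally), and the bucket path and context names are illustrative placeholders, not values from the question. The `transformation_ctx` argument is what ties a read to the bookmark state, and `job.init`/`job.commit` load and persist that state.

```python
# Sketch of a bookmark-aware AWS Glue ETL script (runs only in the Glue runtime).
# Bucket path and transformation_ctx name are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # loads the bookmark state for this run

# transformation_ctx is required so the bookmark can track processed S3 objects.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/input/"]},
    format="json",
    transformation_ctx="s3_source",
)

# ... validate/transform, then write to Amazon RDS for MySQL (e.g. via JDBC) ...

job.commit()  # persists the bookmark so the next run skips these objects
```

The job itself must also have bookmarks enabled, for example by setting the job argument `--job-bookmark-option` to `job-bookmark-enable` (or choosing "Enable" for job bookmarks in the Glue console) when creating or updating the job.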
