An enterprise customer is migrating to Redshift and is considering using dense storage nodes in its Redshift cluster. The customer wants to migrate 50 TB of data. The customer's query patterns involve performing many joins with thousands of rows. The customer needs to know how many nodes are needed in its target Redshift cluster. The customer has a limited budget and needs to avoid performing tests unless absolutely needed.

Which approach should this customer use?
A. Start with many small nodes
B. Start with fewer large nodes
C. Have two separate clusters with a mix of small and large nodes
D. Insist on performing multiple tests to determine the optimal configuration

Answer: A

Explanation:

https://d1.awsstatic.com/whitepapers/Size-Cloud-Data-Warehouse-on-AWS.pdf

Using the compression ratio of 3 from the whitepaper: 50 TB / 3 ≈ 16.66 TB of compressed data. Adding a 25% buffer for growth and working space gives 16.66 TB × 1.25 ≈ 20.83 TB, or roughly 21 TB of required storage. With ds2.xlarge nodes (2 TB each): 21 TB / 2 = 10.5, so about 11 nodes. With ds2.8xlarge nodes (16 TB each): 21 TB / 16 ≈ 1.31, so about 2 nodes. Eleven ds2.xlarge nodes provide 22 TB, a close fit to the 21 TB requirement, whereas two ds2.8xlarge nodes provide 32 TB, leaving a significant amount of paid-for capacity unused; starting with many small nodes therefore better suits the customer's limited budget.
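As a quick sanity check, the arithmetic above can be captured in a short Python sketch. The compression ratio, growth buffer, and per-node capacities are the assumptions stated in this explanation, not values queried from AWS:

import math

# Assumptions taken from the explanation above:
RAW_DATA_TB = 50          # data to migrate
COMPRESSION_RATIO = 3     # typical Redshift compression per the whitepaper
GROWTH_BUFFER = 1.25      # 25% headroom for growth and working space

# Dense storage node capacities (TB per node)
NODE_CAPACITY_TB = {
    "ds2.xlarge": 2,
    "ds2.8xlarge": 16,
}

def required_storage_tb(raw_tb):
    """Compressed footprint plus growth headroom."""
    return raw_tb / COMPRESSION_RATIO * GROWTH_BUFFER

def nodes_needed(raw_tb, node_type):
    """Round up to whole nodes of the given type."""
    return math.ceil(required_storage_tb(raw_tb) / NODE_CAPACITY_TB[node_type])

need = required_storage_tb(RAW_DATA_TB)
print(f"Required storage: {need:.2f} TB")                 # ~20.83 TB
for node, cap in NODE_CAPACITY_TB.items():
    n = nodes_needed(RAW_DATA_TB, node)
    print(f"{node}: {n} nodes ({n * cap} TB total)")      # 11 nodes / 2 nodes

Running this reproduces the figures above: about 11 ds2.xlarge nodes (22 TB) versus 2 ds2.8xlarge nodes (32 TB) for the same ~21 TB requirement.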
