
The company’s IT strategy is to adopt innovative and emerging technologies such as software-defined storage solutions. The IT team has decided to run its business-critical workloads on an all-flash Virtual SAN (vSAN) cluster, as it provides excellent performance.


The IT team has purchased servers that are compatible with vSAN. However, all the solid-state drives (SSD) in the servers are shown incorrectly as hard-disk drives (HDD) instead.

In addition, some of the solid-state drives (SSD) will be used for other purposes instead of vSAN and should not be part of the vSAN cluster.

These are the requirements for the vSAN cluster:

• In each server, use the 3 GB SSD as the cache tier and the 11 GB SSD as the capacity tier

• As a result, the vSAN cluster will use a total of six SSDs (three SSDs for caching and three SSDs for capacity)

• Ensure all the disks that will be used for vSAN are shown correctly as SSDs

• Provide storage savings by using deduplication and compression.

Next, the IT team wants to improve the performance and availability of the business-critical workloads on the vSAN-datastore.

Ensure the following configurations will be applied on existing and new workloads located on vSAN-datastore:

Number of disk stripes per object: 2

Primary level of failures to tolerate: 2

Failure tolerance method: RAID-1 (Mirroring)

Force provisioning: Yes

The new configurations should be applied by default.

You may create a new storage policy, but do not edit the default vSAN storage policy, as it may be used by other vSAN clusters in the future. Name the policy "New vSAN Default". Note: All tasks should be executed in the PROD-A host cluster.

Answer: VMware vSphere ESXi can use locally attached SSDs (solid-state drives) and flash devices in multiple ways. Since SSDs offer much higher throughput and much lower latency than traditional magnetic hard disks, the benefits are clear. While offering lower throughput and higher latency, flash devices such as USB or SATADOM can also be appropriate for some use cases. The potential drawback to using SSD and flash device storage is that endurance can be significantly less than that of traditional magnetic disks, and it can vary based on the workload type as well as factors such as drive capacity, the underlying flash technology, etc.

This KB outlines the minimum SSD and flash device recommendations based on different technologies and use case scenarios.

SSD and Flash Device Use Cases

A non-exhaustive survey of various usage models in vSphere environments is listed below.

Host swap cache

This usage model has been supported since vSphere 5.1 for SATA and SCSI connected SSDs. USB and low-end SATA or SCSI flash devices are not supported.

The workload is heavily influenced by the degree of host memory overcommitment.

Regular datastore

A (local) SSD is used instead of a hard disk drive.

This usage model has been supported since vSphere 7.0 for SATA and SCSI connected SSDs.

There is currently no support for USB connected SSDs or for low-end flash devices, regardless of connection type.

vSphere Flash Read Cache (aka Virtual Flash)

This usage model has been supported since vSphere 5.5 for SATA and SCSI connected SSDs.

There is no support for USB connected SSDs or for low-end flash devices.

vSAN

This usage model has been supported since vSphere 5.5 for SATA and SCSI SSDs. For more information, see the vSAN Hardware Quick Reference Guide.

vSphere ESXi Boot Disk

A USB flash drive, SATADOM, or local SSD can be chosen as the installation destination for ESXi, the vSphere hypervisor, which then boots from the flash device.

This usage model has been supported since vSphere 3.5 for USB flash devices and vSphere 4.0 for SCSI/SATA connected devices.

Installation to SATA and SCSI connected SSDs, SATADOMs, and flash devices creates a full install image which includes a logging partition (see below), whereas installation to a USB device creates a boot disk image without a logging partition.

vSphere ESXi Coredump device

The default size for the coredump partition is 2.5 GiB, which is about 2.7 GB, and the installer creates a coredump partition on the boot device for vSphere 5.5 and above. After installation, the partition can be resized if necessary using partedUtil. For more information, see the vSphere documentation.
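The "2.5 GiB which is about 2.7 GB" equivalence above is just the difference between binary and decimal units. A minimal sketch of that conversion (the partition size is the only value taken from the text):

```python
# Convert the 2.5 GiB coredump partition size to decimal gigabytes.
BYTES_PER_GIB = 1024 ** 3  # binary gibibyte
BYTES_PER_GB = 1000 ** 3   # decimal gigabyte

coredump_gib = 2.5
coredump_gb = coredump_gib * BYTES_PER_GIB / BYTES_PER_GB
print(round(coredump_gb, 2))  # → 2.68, i.e. about 2.7 GB
```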

Any SATADOM or SATA/SCSI SSD may be configured with a coredump partition.

This usage model has been supported from vSphere 3.5 for boot USB flash devices and since vSphere 4.0 for any SATA or SCSI connected SSD that is local.

This usage model also applies to Auto Deploy hosts, which have no boot disk.

vSphere ESXi Logging device

A SATADOM or local SATA/SCSI SSD is chosen as the location for the vSphere logging partition (/scratch partition). This partition may be, but need not be, on the boot disk; this also applies to Auto Deploy hosts, which lack a boot disk.

This usage model has been supported since vSphere 7.0 for any SATA or SCSI connected SSD that is local. SATADOMs that meet the requirement set forth in Table 1 are also supported.

This usage model may be supported in a future release of vSphere for USB flash devices that meet the requirements set forth in Table 1.

SSD Endurance Criteria

The flash industry often uses Terabytes Written (TBW) as a benchmark for SSD endurance. TBW is the number of terabytes that can be written to the device over its useful life. Most devices have distinct TBW ratings for sequential and random I/O workloads, with the latter being much lower due to the Write Amplification Factor (WAF), defined below. Other measures of endurance commonly used are DWPD (Drive Writes Per Day) and P/E (Program/Erase) cycles.

Conversion formulas are provided here:

Converting DWPD (Drive Writes Per Day) to TBW (Terabytes Written):

TBW = DWPD * Warranty (in Years) * 365 * Capacity (in GB) / 1,000 (GB per TB)
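The DWPD formula above can be sketched as a small helper; the drive figures in the example (1 DWPD, 5-year warranty, 400 GB) are hypothetical and chosen only to illustrate the arithmetic:

```python
def dwpd_to_tbw(dwpd: float, warranty_years: float, capacity_gb: float) -> float:
    """TBW = DWPD * warranty (in years) * 365 * capacity (in GB) / 1,000 (GB per TB)."""
    return dwpd * warranty_years * 365 * capacity_gb / 1000

# Hypothetical drive: 1 DWPD, 5-year warranty, 400 GB capacity.
print(dwpd_to_tbw(1, 5, 400))  # → 730.0 TBW
```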

Converting Flash P/E Cycles per Cell to TBW (Terabytes Written):

TBW = Capacity (in GB) * (P/E Cycles per Cell) / (1,000 (GB per TB) * WAF)
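Likewise, the P/E-cycle formula can be sketched the same way; the example figures (400 GB, 3,000 P/E cycles, WAF of 5) are hypothetical, picked from the single-digit sequential-workload WAF range described below:

```python
def pe_to_tbw(capacity_gb: float, pe_cycles: float, waf: float) -> float:
    """TBW = capacity (in GB) * (P/E cycles per cell) / (1,000 (GB per TB) * WAF)."""
    return capacity_gb * pe_cycles / (1000 * waf)

# Hypothetical drive: 400 GB, 3,000 P/E cycles per cell, WAF of 5.
print(pe_to_tbw(400, 3000, 5))  # → 240.0 TBW
```

Note how a random workload with a WAF near 100 would cut the same drive's TBW rating by a factor of twenty.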

WAF is a measure of the induced writes caused by inherent properties of flash technology. Due to the difference between the storage block size (512 bytes), the flash cell size (typically 4 KiB or 8 KiB), and the minimum flash erase size of many cells, one write can force a number of induced writes due to copies, garbage collection, etc. For sequential workloads, typical WAFs fall in the range of single digits, while for random workloads WAFs can approach or even exceed 100.

Table 1 contains workload characterizations for the various workloads, except the Datastore and vSphere Flash Read Cache workloads, which depend on the characteristics of the virtual machine workloads being run and thus cannot be characterized here. A WAF from the table can be used with the above P/E-to-TBW formula.
