

VSAN in the Home Lab – Part 2 (Benchmarks)

***EDIT***

The OCD in me couldn’t go through this blog post with an ‘old’ version of VSAN.  VMware just released VSAN 6.2 with vSphere 6.0 Update 2.  My vCenter and ESXi hosts have now been updated.  For a nice blog on what’s new, you can read this – (http://www.yellow-bricks.com/2016/02/10/whats-new-for-virtual-san-6-2/)

***********

In Part 1 of this series I went through enabling and configuring VSAN.  Now that we have an operational Datastore, we need to figure out whether it’s actually usable.  Don’t get me wrong, I’m sure it’s functional, but don’t we want to see how this bad boy performs?  Just to refresh your memory, we are running 1x HDD and 1x SSD on each host in a three-host configuration.  Obviously, if performance is your main goal, you should make the disk group 100% SSD based.

For the past couple of years I’ve used a few 3rd-party tools to diagnose and benchmark performance.  For this blog post I’ll be taking a look at Iometer, the VSAN performance stats (integrated with VSAN), and CrystalDiskMark.  I’m all ears if anyone has suggestions for additional (free) programs or specific tweaks to the current tools for better real-world scenarios.  I do understand that we could tweak and try different tests for the next year, so hopefully this handful of tests provides a simple overview.

One thing to note: the performance numbers in my lab will likely differ from the numbers in your lab/environment, due to differences in hardware, disk type, storage configuration, etc.  So, what I’m trying to say is, don’t shoot the messenger.

Test Scenario:

  • Windows 8.1 Virtual Machine
    • patched
  • 2vCPU
  • 8GB Memory
  • C:\ on VSAN Datastore
  • E:\ on VSAN Datastore
  • F:\ on FreeNAS Datastore

I’ve decided to run my tests side by side with my FreeNAS box, which is my primary storage in my HomeLab.
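Before benchmarking, it’s worth double-checking that each virtual disk really lives on the datastore you think it does.  Here’s a minimal pyVmomi sketch that prints the backing datastore for every disk on the test VM; the vCenter address, credentials, and VM name below are placeholders for your own lab values:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical lab details -- substitute your own vCenter, credentials,
# and VM name.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "Win81-Bench")  # placeholder name

# Each VirtualDisk's backing fileName starts with "[datastoreName] ...", so
# this confirms which datastore sits behind C:\, E:\, and F:\.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        print(dev.deviceInfo.label, "->", dev.backing.fileName)

Disconnect(si)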

FreeNAS Host:

  • HP ProLiant DL160 G6
  • 24GB Memory
  • 16vCPU w/ Hyperthreading
  • 1Gbps Network
  • 4x Western Digital Black 3.5″ 7200 RPM 1TB SATA drives (RAID 10)
  • FreeNAS is my current storage location and houses around 20 running Virtual Machines.  I don’t have any particularly demanding VMs/jobs running at the moment.

Let’s start with Iometer, a proven diagnostic/benchmarking tool since the early 2000s.  For the past couple of years I’ve been using this Atlantis article (https://community.atlantiscomputing.com/blog/Atlantis/August-2013/How-to-use-Iometer-to-Simulate-a-Desktop-Workload.aspx) to configure Iometer parameters that mirror the read/write patterns of a VDI/XenApp environment.
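If you just want a rough floor before firing up Iometer proper, a quick-and-dirty check is easy to script.  Below is a minimal Python sketch of a 4K random-write test; it’s nowhere near a tuned Iometer profile (and the OS write cache will flatter the numbers, since there’s only a single fsync at the end), but it will flag a badly misbehaving datastore.  The path and sizes are arbitrary choices:

import os
import random
import time

# A minimal 4K random-write sanity check -- not a substitute for Iometer.
PATH = r"E:\iotest.bin"          # a file on the drive under test
FILE_SIZE = 256 * 1024 * 1024    # 256 MB working set
BLOCK = 4096                     # 4K blocks
OPS = 5000

# Pre-allocate the working set so the random offsets have somewhere to land.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

fd = os.open(PATH, os.O_RDWR | getattr(os, "O_BINARY", 0))
buf = os.urandom(BLOCK)
start = time.time()
for _ in range(OPS):
    os.lseek(fd, random.randrange(0, FILE_SIZE - BLOCK, BLOCK), os.SEEK_SET)
    os.write(fd, buf)
os.fsync(fd)   # single flush at the end, so results are optimistic
elapsed = time.time() - start
os.close(fd)
print("%.0f IOPS (4K random write)" % (OPS / elapsed))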

Iometer Results: 

[Screenshots: VSAN-Iometer-VSAN2, VSAN-Iometer-FreeNAS]

CrystalDiskMark Results:

  • Seq – long, sequential operations. For SQL Server, this is somewhat akin to doing backups or doing table scans of perfectly defragmented data, like a data warehouse.
  • 512K – random large operations one at a time.  This doesn’t really match up to how SQL Server works.
  • 4K – random tiny operations one at a time.  This is somewhat akin to a lightly loaded OLTP server.
  • 4K QD32T1 – (Queue Depth 32 and 1 thread) random tiny operations, but many done at a time.  This is somewhat akin to an active OLTP server.
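To make the queue-depth distinction concrete, here’s a rough Python illustration of the 4K QD32 idea: 32 reader threads hammering the same test file from the earlier sketch.  OS read caching will inflate the result, so treat this as a demonstration of the concept rather than a CrystalDiskMark replacement:

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = r"E:\iotest.bin"          # same placeholder file as before
FILE_SIZE = 256 * 1024 * 1024
BLOCK = 4096
OPS_PER_WORKER = 2000
WORKERS = 32                     # the "queue depth" we're approximating

# Create the test file if the earlier sketch didn't leave one behind.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.truncate(FILE_SIZE)

def worker(_):
    # One handle per thread so seek/read pairs don't interleave.
    with open(PATH, "rb") as f:
        for _ in range(OPS_PER_WORKER):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK, BLOCK))
            f.read(BLOCK)

start = time.time()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(worker, range(WORKERS)))
elapsed = time.time() - start
print("%.0f IOPS (4K random read, ~QD32)" % (WORKERS * OPS_PER_WORKER / elapsed))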

 

[Screenshots: VSAN-CrystalDiskMark-VSAN, VSAN-CrystalDiskMark-FreeNAS2]

VSAN Storage Performance Test (Stress Test – 5 Min Duration – Default VM Storage Policy):

[Screenshot: VSAN-Integrated Performance-StressTest]

VSAN Storage Performance Test (Low Stress Test – 5 Min Duration – Default VM Storage Policy):

[Screenshot: VSAN-Integrated Performance-LowStressTest]

Summary:

VSAN 6.2 is a great product, and it’s getting better and more feature-rich with every release.  This release alone added more health/performance/diagnostic information than ever before.  Deduplication and Compression, which were disabled in my tests, shipped as well.  I’m looking forward to the progression of this software and have no doubt VMware will continue improving it.  As for using VSAN as my preferred HomeLab storage solution?  I think I’ll wait a bit longer, but it’s something I’ll keep my eye on.


VSAN in the Home Lab – Part 1

Storage is one of the building blocks of a solid foundation.  Even in a Home Lab you need something you can depend on.  Without a solid-performing storage solution the experience will suffer, and no matter how well an application is written, it won’t function properly.

In this blog post I’ll walk through the process of enabling, configuring, and health-checking VMware VSAN.  In Part 2 of this series I’ll give a bit more insight into performance with IOPS/throughput/latency metrics.

A lot of times the hardware is just as important as the software running on it.  Unfortunately I don’t have the luxury of best-in-class hosts, but these should suffice.

  • Host 1:
    • HP ProLiant DL160 G6
      • 64GB of Memory
      • 16vCPU including HT
      • 1Gbps Network
      • Kingston SSD 240GB
      • Seagate 150GB 7200RPM
  • Host 2
    • HP ProLiant DL160 G6
      • 48GB Memory
      • 16vCPU including HT
      • 1Gbps Network
      • Kingston SSD 240GB
      • Western Digital 600GB 10k SAS
  • Host 3
    • HP ProLiant SE316M1 (basically a G6 with 2.5″ bays instead of 3.5″)
      • 24GB Memory
      • 16vCPU including HT
      • 1Gbps Network
      • Samsung 850 SSD – 250GB
      • Western Digital 600GB 10k SAS

Well, let’s get started, shall we?

  1. Let’s enable VSAN on the cluster (if you’d rather script it, see the sketch after this list)
    1. [Screenshot: ENABLE-vsan]
    2. I chose the manual approach, as I don’t want all disks claimed automatically.
  2. Enable VSAN on the VMkernel adapters
    1. [Screenshot: ENABLE-vsan-vmkernel]
  3. If you are using re-purposed disks, they might already have partitions on them.  If so, delete the partitions.
    1. [Screenshot: ENABLE-vsan-erase]
  4. Claim your disks.  Each disk group is composed of either an HDD/SSD pairing or an all-SSD tier.
    1. [Screenshot: ENABLE-vsan-capacity]
    2. This automatically added each host’s disks to one disk group called ‘Group 1’.
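For anyone who would rather script steps 1 and 2 than click through the Web Client, here’s a rough pyVmomi sketch.  The vCenter address, credentials, cluster name, and vmk device are all placeholders, and in real use you’d want to wait on each returned task:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "HomeLab")  # placeholder name

# Step 1: enable VSAN on the cluster with manual disk claiming.
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=False)))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Step 2: tag a VMkernel adapter for VSAN traffic on every host
# ("vmk1" is a placeholder -- use whichever vmk carries your storage network).
for host in cluster.host:
    host.configManager.virtualNicManager.SelectVnicForNicType("vsan", "vmk1")

Disconnect(si)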

I’d like to say everything worked fine and dandy off the bat, but that wasn’t the case.  I kept getting the error below.  I tried disabling/enabling VSAN, re-configuring the VM Storage Policy, adjusting fault domains, checking VSAN health, etc.  In the end I had to remove the disk group from each host and then recreate it (step 4 above); a scripted version of that dance is sketched below the screenshots.  After doing this, my overall storage capacity increased from 700GB to 2.49TB.  Even better!  Everything seems to be functional now and the health checks aren’t barking at me anymore.

[Screenshot: ENABLE-vsan-error]

[Screenshot: ENABLE-vsan-Storagepol]
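If you hit the same wall, the remove-and-recreate dance can be scripted per host as well.  A hedged pyVmomi sketch with the same placeholder connection details as before; note that removing a disk mapping destroys the VSAN data on those disks, so wait on each task and use with care:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "HomeLab")  # placeholder name

for host in cluster.host:
    vsan = host.configManager.vsanSystem

    # Tear down any existing disk group(s) on this host.
    existing = vsan.config.storageInfo.diskMapping
    if existing:
        vsan.RemoveDiskMapping_Task(existing)  # wait for completion in real use

    # Re-claim: one SSD as the cache tier, spinning disks as capacity.
    eligible = [r.disk for r in vsan.QueryDisksForVsan()
                if r.state == "eligible"]
    ssds = [d for d in eligible if d.ssd]
    hdds = [d for d in eligible if not d.ssd]
    if ssds and hdds:
        mapping = vim.vsan.host.DiskMapping(ssd=ssds[0], nonSsd=hdds)
        vsan.InitializeDisks_Task([mapping])

Disconnect(si)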

You should now have a working Datastore that you can add VMs and data to.  You should know that there is a default VM Storage Policy applied to the VSAN datastore.  Should you find the need to tweak settings, you can go here (screenshot).  Visit this site for more information on the values: (https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-C8E919D0-9D80-4AE1-826B-D180632775F3.html)

From an overall perspective you can dive into the health of the VSAN components.  Simply go to Cluster/Monitor/VSAN ([Screenshot: ENABLE-vsan-health]).  Here you’ll find a few different tabs that paint an overall health picture.  From here you get a glimpse of how healthy your environment looks, and you can run a few performance tests to assess what kind of IOPS/latency you can expect.

 

[Screenshot: ENABLE-vsan-health2]

In Part 2 of this blog post series I’ll run some additional performance tools to get a better understanding of how VSAN performs with this hardware.

Stay tuned!
