During the summer I decided I wanted to test out Storage Spaces Direct. TP5 was out and I was quite eager to try it. The cluster has since been upgraded to RTM with cluster rolling upgrade. Remember to run Update-ClusterFunctionalLevel afterwards.
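A minimal sketch of that last step, assuming all four nodes are already on the RTM build:
#Raising the functional level is one-way; do it only when every node is upgraded
Update-ClusterFunctionalLevel
#Windows Server 2016 should report functional level 9
(Get-Cluster).ClusterFunctionalLevel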
So I looked around on eBay for some servers and the other items I needed. I ended up with the list below, everything in quantities of four:
- HP DL380 G6, 16-bay, 128 GB memory, 2x 4-core Intel CPUs, HP P420
- HP H220
- Mellanox ConnectX-3 MCX312A-XCBT
- Intel 750 NVMe PCIe
- 2x Kingston SSDNow V310 for caching (replacing with Samsung SM863)
- 6x WD Red NAS 1 TB 2.5″
- Dell Force10 S4810P (already had)
The price for all this, without the switches and cables, was about 90 000 NOK, roughly 10 000 USD.
I cabled the HP H220 to control the second 8-bay HDD cage, which gives HBA mode for the SSDs and HDDs. Two HP drives on the P420 hold the OS in RAID 0.
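To sanity-check the cabling, something along these lines should show the cache and capacity drives as poolable (the friendly names are whatever your hardware reports):
#Disks behind the H220 should report CanPool True;
#the two OS drives on the P420 will report CanPool False
Get-PhysicalDisk | Sort-Object MediaType | FT FriendlyName, BusType, MediaType, CanPool, Size -AutoSize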
I followed these guides:
For the S2D setup:
https://technet.microsoft.com/windows-server-docs/storage/storage-spaces/hyper-converged-solution-using-storage-spaces-direct
For the RDMA setup on the S4810P:
http://bit.ly/2fty6vw
I set up management over the four 1 GbE NICs on the servers. I will just post the PowerShell commands directly.
New-NetLbfoTeam -Name Team1 -TeamMembers NIC1,NIC2,NIC3,NIC4 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch "VSwitch" -MinimumBandwidthMode Weight -NetAdapterName "Team1" -AllowManagementOS 0
Set-VMSwitch "VSwitch" -DefaultFlowMinimumBandwidthWeight 50
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId "VLANID"
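After this, the Management vNIC still needs an address; a minimal sketch with hypothetical IP values:
#Substitute your own management subnet, gateway and DNS
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.0.0.11 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 10.0.0.2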
For the RDMA configuration and QoS on the servers, use these PowerShell commands:
#Install DCB on the hosts
Install-WindowsFeature Data-Center-Bridging
#Mellanox/Windows RoCE drivers don't support DCBX (yet?), disable it
Set-NetQosDcbxSetting -Willing $False
#Make sure RDMA is enabled on the NICs (should be by default)
Enable-NetAdapterRdma -Name 10GNIC1
Enable-NetAdapterRdma -Name 10GNIC2
#SMB Direct traffic to port 445 is tagged with priority 4
New-NetQosPolicy "SMBDIRECT" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 4
#Anything else goes into the "default" bucket with priority tag 1
New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 1
#Enable PFC (lossless) on the priority of the SMB Direct traffic
Enable-NetQosFlowControl -Priority 4
#Disable PFC on the other traffic (TCP/IP, we don't need that to be lossless)
Disable-NetQosFlowControl -Priority 0,1,2,3,5,6,7
#Enable QoS on the RDMA interfaces
Enable-NetAdapterQos -InterfaceAlias 10GNIC1
Enable-NetAdapterQos -InterfaceAlias 10GNIC2
#Set the minimum bandwidth for SMB Direct traffic to 90% (ETS, optional).
#No need to do this for the other priorities; everything not configured
#explicitly goes into default with the remaining bandwidth.
New-NetQosTrafficClass "SMBDirect" -Priority 4 -BandwidthPercentage 90 -Algorithm ETS
#Configure the VMSwitch (SET) and set up the VMNetworkAdapters
New-VMSwitch -Name SETswitch -NetAdapterName "10GNIC1","10GNIC2" -EnableEmbeddedTeaming $true
Add-VMNetworkAdapter -SwitchName SETswitch -Name Cluster -ManagementOS
Add-VMNetworkAdapter -SwitchName SETswitch -Name LiveMigration -ManagementOS
Add-VMNetworkAdapter -SwitchName SETswitch -Name Storage -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "Cluster" -VlanId 5 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "LiveMigration" -VlanId 6 -Access -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "Storage" -VlanId 99 -Access -ManagementOS
Enable-NetAdapterRdma "vEthernet (Cluster)","vEthernet (LiveMigration)","vEthernet (Storage)"
Get-SmbClientNetworkInterface
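The Get-SmbClientNetworkInterface call at the end shows whether SMB sees the vNICs as RDMA-capable. Two more checks along these lines are worth running before clustering (the multichannel one only shows data while SMB traffic flows between nodes):
#Enabled should be True on the storage vNICs
Get-NetAdapterRdma | FT Name, Enabled -AutoSize
#Live connections; ClientRdmaCapable should be True once nodes talk SMB
Get-SmbMultichannelConnection | FT ServerName, ClientIpAddress, ClientRdmaCapable -AutoSize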
Then validate the nodes and create the cluster:
Test-Cluster -Node "SRV01","SRV02","SRV03","SRV04" -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
New-Cluster -Name HyperV-S2D -Node "SRV01","SRV02","SRV03","SRV04" -NoStorage -StaticAddress x.x.x.x
Enable-ClusterS2D
Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -AutoSize
Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -AutoSize
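A witness is also recommended for quorum; a file share witness sketch, with a hypothetical share path (a cloud witness is another option on 2016):
#\\fileserver\s2d-witness is a placeholder; point it at a share outside the cluster
Set-ClusterQuorum -FileShareWitness \\fileserver\s2d-witness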
Check the Get-StorageTier output to see if the HDDs have been set as the capacity tier. If not, run this command:
Set-StorageTier -FriendlyName Capacity -MediaType HDD
Time to create the volume:
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MRV -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 1800GB, 11000GB
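If New-Volume complains that the tier sizes don't fit, the supported ranges can be queried first; a quick sketch, assuming the default tier names from above:
#Shows the minimum and maximum size each tier can be created with
Get-StorageTierSupportedSize -FriendlyName Performance | FT TierSizeMin, TierSizeMax -AutoSize
Get-StorageTierSupportedSize -FriendlyName Capacity | FT TierSizeMin, TierSizeMax -AutoSize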
Enable the CSV cache:
(Get-Cluster).BlockCacheSize = 1024
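The value is in MB, so 1024 gives each node a 1 GB CSV read cache. A quick check that it took effect:
#Should return 1024
(Get-Cluster).BlockCacheSize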
This is how it will look in Failover Cluster Manager.
6 thoughts on “Our ebay development S2D cluster”
What type of performance are you getting from this hardware?
Pretty good. I have not run a VMFleet test on it, but I have run 8 VMs with diskspd and gotten about 550k IOPS. So, not too disappointed. The old SAN maxed out at 2000 IOPS 😉
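For reference, a diskspd invocation for this kind of test looks roughly like this, per VM (illustrative parameters, not the exact ones used here):
#4K random, 30% writes, 8 threads, 32 outstanding I/Os, 60 s, caching off
diskspd.exe -b4K -t8 -o32 -r -w30 -d60 -Sh -L -c10G C:\test.dat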
Nice!
Thanks for the info! I plan on building a small one myself and wanted to see what I can expect.