Enable DCB on Dell N4000 and Dell Force10 S6000-on


For our DataON S2D-3212 setup I had planned to use our Dell N4000 switches for DCB/RDMA, as we have them in both of our datacenters. During the first install I ran into problems when enabling DCB with no-drop on the N4000: the switches were running firmware version 6.3.2.3, and we were losing connectivity to some servers when no-drop was enabled. So we ended up buying some new Dell Force10 S6000-on switches, as the NICs in our servers are Mellanox ConnectX-4 40 Gbit cards.

Now I have just set up our second DataON S2D-3212 cluster in our other datacenter, behind the same type of N4000 switches. These were running version 6.3.0.16, and when I enabled no-drop on the interfaces everything worked perfectly. So I was happy, and I decided to upgrade to the latest version, as Dell had just released 6.3.2.4 with a fix for MAC addresses being dropped over the stack. But no go, it had the same issue as 6.3.2.3. So I set the backup firmware as the boot firmware on reload, rebooted the switches, and all was OK again.
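For reference, rolling back like that on the N4000 just means booting the backup image, assuming the previous firmware is still stored in the backup slot. A minimal sketch (check the stored image versions with show version first):

show version
boot system backup
reload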

So the working config for a Dell N4000 on firmware version 6.3.0.16 is:

classofservice traffic-class-group 0 1
classofservice traffic-class-group 1 1
classofservice traffic-class-group 2 1
classofservice traffic-class-group 3 0
classofservice traffic-class-group 4 1
classofservice traffic-class-group 5 1
classofservice traffic-class-group 6 1
traffic-class-group max-bandwidth 50 50 0
traffic-class-group min-bandwidth 50 25 25
traffic-class-group weight 50 50 0

datacenter-bridging
priority-flow-control mode on
priority-flow-control priority 3 no-drop
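Note that the datacenter-bridging and priority-flow-control commands are entered in interface configuration mode, so they need to be applied on every port facing the S2D servers. A minimal sketch, with Te1/0/1-2 as hypothetical server ports:

conf t
interface range Te1/0/1-2
datacenter-bridging
priority-flow-control mode on
priority-flow-control priority 3 no-drop
exit
exit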


Config for Dell Force10 S6000-on

As I set up our switches with VLT, I'll show how that is done as well.

First the DCB config. Note that the switch needs a reboot after the DCB enable. The DCB config is identical for the S6000, S6000-on, S4810P and other models running FTOS.

conf t

protocol lldp
advertise management-tlv system-capabilities system-description system-name
advertise interface-port-desc

dcb enable

dcb-map RDMA-dcb-map-profile
priority-group 0 bandwidth 50 pfc on
priority-group 1 bandwidth 50 pfc off
priority-pgid 1 1 1 0 1 1 1 1
interface fortyGigE 1/5
description somenamehere
no ip address
mtu 9216
portmode hybrid
switchport
dcb-map RDMA-dcb-map-profile
no shutdown
exit
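Since the dcb enable only takes effect after a reboot, save the config and reload once the dcb-map is applied:

copy running-config startup-config
reload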

VLT setup

conf t
protocol spanning-tree rstp
no disable
hello-time 1
max-age 6
forward-delay 4
bridge-priority 0

vlt domain 100
peer-link port-channel 100
back-up destination 192.168.10.2
primary-priority 1
system-mac mac-address de:ad:11:be:ef:01
unit-id 0
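Once the peer switch has its matching domain configured, the VLT state can be verified from either side; the peer link and backup destination should both show as up:

show vlt brief
show vlt detail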

As we were using 40 Gig breakout cables, we needed to set the 40 Gig ports into quad mode.

conf t
stack-unit 1 quad-port-profile 0,8,16,24,32,36,40,44,48,52,56,60,64,68,72,76,80,84,88,92,100,108,116,124
!
stack-unit 1 provision S6000

stack-unit 1 port 27 portmode quad
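The port mode change typically requires a save and reload before it takes effect, and afterwards port 1/27 shows up as the four 10 Gig interfaces TenGigabitEthernet 1/27/1 through 1/27/4 used further down:

copy running-config startup-config
reload
show interfaces status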


Link ports between the switches

interface fortyGigE 1/31
description SW2_40G
no ip address
no shutdown

interface fortyGigE 1/32
description SW2_40G
no ip address
no shutdown

interface Port-channel 100
description Link to SW2_40G
mtu 9216
portmode hybrid
switchport
no ip address
channel-member fortyGigE 1/31
channel-member fortyGigE 1/32
rate-interval 30
no shutdown
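A quick check that both members actually joined the peer-link channel:

show interfaces port-channel brief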


Now for any port channel to other switches, such as a stack and so on.

conf t
interface TenGigabitEthernet 1/27/1
description
mtu 9216
no ip address

port-channel-protocol LACP
port-channel 50 mode active
no shutdown

interface TenGigabitEthernet 1/27/2
description
mtu 9216
no ip address

port-channel-protocol LACP
port-channel 50 mode active
no shutdown

interface TenGigabitEthernet 1/27/3
description
mtu 9216
no ip address

port-channel-protocol LACP
port-channel 50 mode active
no shutdown

interface TenGigabitEthernet 1/27/4
description
mtu 9216
no ip address

port-channel-protocol LACP
port-channel 50 mode active
no shutdown


interface Port-channel 50
no ip address
mtu 9216
portmode hybrid
switchport
vlt-peer-lag port-channel 50
no shutdown
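To confirm the LACP bundle came up on both VLT peers, show lacp takes the port-channel number:

show lacp 50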

DO NOT tag Port-channel 100 on any VLAN; that will cause connectivity problems. The VLT takes care of it: if traffic comes in on one switch and needs to go out a port on the second switch, it is carried over the peer link between the switches automatically.
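So when adding VLANs, tag the server-facing ports and the VLT port channels, but leave Port-channel 100 out of it. A sketch using a hypothetical VLAN 200 for the storage traffic:

conf t
interface vlan 200
tagged fortyGigE 1/5
tagged Port-channel 50
no shutdown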


Now duplicate the setup on the other switch. Make sure the system-mac address in the VLT setup is identical on both sides, and set the back-up destinations to different IP addresses on the same subnet, each pointing at the other switch.
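For example, the vlt domain block on the second switch could look like this; the back-up destination here is an assumption and should be the first switch's management IP:

vlt domain 100
peer-link port-channel 100
back-up destination 192.168.10.1
primary-priority 2
system-mac mac-address de:ad:11:be:ef:01
unit-id 1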


3 thoughts on “Enable DCB on Dell N4000 and Dell Force10 S6000-on”

  1. Hey Jan-Tore!

I’ve just come across your blog, having followed you on Twitter for a while. I’ve been playing with S2D since December and have had lots of issues causing a BSOD. I have a case open with Microsoft, so they’re looking at lsass.exe being the cause. While they’re doing that, I’ve been doing a lot of troubleshooting myself. I’ve just noticed I’m getting a few “Packets Received Discarded” on the Mellanox traffic counters. It’ll start with about 811 and then hold that count. I’m not sure if that’s normal? Is it as one of the nodes is booting up? Anyway, I’m okay with the client side of things; the Dell N4000 series switches are completely new to me. We’re a school so finances are tight, but we’ve been lucky enough to be able to buy two stacked N4032s. There’s not a whole lot on the Internet about configuring a two-node RDMA setup using these other than your post. I’m wondering if I’m missing something that would cause a few discarded packets. Is this a result of the firmware issue you found?

    Keep up the good work, people like me really appreciate the work you guys put into sharing
    experiences.

    Thank you

    1. Hello Phil

Are you running the N4032F with SFP+? When it comes to the RDMA, make sure you are running the correct N4000 firmware, which is 6.3.0.16, otherwise the no-drop will not work. Follow my config for the N4000; all the config is set on the interfaces for the S2D servers.

Also use the standard MS config for RDMA on Windows. What Mellanox cards are you using, ConnectX-3 EN? And are the servers brand new, or is this just for testing?

      Regards
      Jan-Tore

