Huge write-up coming.
Unless you’re an end-to-end Cisco (Nexus) shop, there are a few design considerations you need to take into account when it comes to delivering FCoE to your Dell blade servers.
Things to Consider:
-How is the FC from your storage array being encapsulated into Ethernet?
Some storage arrays allow for the direct export of FCoE. Others have only FC connectivity options, in which case you will need a device to encapsulate FC into FCoE: an FCF (Fibre Channel Forwarder). Example FCF devices include the Brocade VDX 6740, Brocade VDX 6730, and Cisco Nexus 5000.
-Are you running some vendor-proprietary fabric that allows for multi-hop FCoE, like FabricPath or VCS?
If so, great! If not, you’re going to have a fun time attempting to forward FCoE beyond the first switch. (Here is a blog explaining those options.)
-Are your servers connected to a true Fibre Channel Forwarder (FCF) access switch, or are they connected to a FIP (FCoE Initialization Protocol) snooping access bridge (switch)?
FIP Snooping Bridges (FSB) vs Fibre Channel Forwarders (FCF): an FSB must connect to an FCF in order for FCoE to function. An FCF is, in effect, an FSB that also provides FC services such as the name server, as well as FC/FCoE encapsulation.
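To make that relationship concrete, the dependency chain looks like this (FCoE never terminates on the FSB; it is just a policed pass-through):

[Server CNA] --FCoE--> [FSB, e.g. Dell MXL] --FCoE--> [FCF, e.g. Brocade VDX] --FC/FCoE--> [Storage]

The FCF is where FLOGI is answered, zoning is enforced, and FC services live; the FSB only snoops FIP traffic so it can install ACLs that protect the path between the ENode and the FCF.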
-If you are using FIP snooping access switches, how are these switches multi-homed?
-FIP Snooping Bridges carrying FCoE cannot be multi-homed to more than one FCF by any means: no vLAG, no mLAG or any other type of split-chassis LACP, no spanning tree, no dual-homing, period.
-FIP Snooping Bridges can, in some cases, connect to a single FCF using multiple links bundled in a standard LACP LAG.
-How are your servers multi-homed?
Servers cannot be connected to a pair of FCFs using vLAG or mLAG. Servers also cannot be connected to a stack or pair of FSBs using vLAG or mLAG; path redundancy has to come from the OS instead (see the multipath sketch below).
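Because the server cannot LAG across the two switches, fabric A and fabric B stay independent and storage redundancy comes from MPIO. As a rough sketch, on Linux with dm-multipath and a VNX-class array (the device stanza is illustrative; check your array vendor’s recommended settings):

# /etc/multipath.conf (sketch)
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor                "DGC"              # EMC VNX/CLARiiON arrays report vendor DGC
        product               ".*"
        path_grouping_policy  group_by_prio
        prio                  alua
        path_selector         "round-robin 0"    # round-robin I/O across the active paths
        failback              immediate
    }
}

When one fabric dies, multipath simply stops using those paths; nothing at layer 2 has to converge.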
What We Have Done:
We have 3 different designs we have implemented. All of them have their benefits and drawbacks. This is our attempt to explain them and show you how to configure them.
Build 1. Brocade VDX switches configured in a logical chassis cluster (VCS) providing FC to FCoE encapsulation as well as access to a multi-homed server using round-robin load balancing, not LACP.
Pros: In a perfect world this is how everything would work. Redundancy without any extra links and minimal configuration. Completely converged.
Cons: Dell and Brocade have not come together to build a VDX switch for the M1000e chassis yet.
Notes: You could use 10G pass-through modules in the back of the chassis to connect directly to VDX switches, but at 32 ports per chassis (16 blades, two ports each) that’s at least 96 fibers for a 3-chassis rack and 128 for a 4-chassis rack.
Build 2. Brocade VDX switches configured in a logical chassis cluster (VCS) providing FC to FCoE encapsulation. VDX switches connected to Dell MXL switches using vLAGs as well as a dedicated FCoE link per switch. Each server is then multi-homed to a pair of MXL switches using round-robin load balancing.
Pros: Redundant. Converged-ish.
Cons: Complicated. More vendors. There are 4 places in this network where a failure could result in exactly half of your storage paths being lost. May* require use of Uplink Failure Detection (UFD) on the FSBs to properly fail FCoE after the failure of an FCF (see the UFD sketch after the notes below).
Notes: FCoE links between the VDX and MXL cannot be multi-homed like the data path. FCoE links can be bundled into a LACP LAG to provide additional bandwidth, but there are specific rules regarding which port groups on the switches you can and cannot use.
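On the UFD point: if the MXL’s FCoE uplink to the VDX dies, the server-facing ports are still up, so the CNA happily keeps its now black-holed session. UFD shuts the downstream ports when the upstream dies, forcing MPIO on the server over to the other fabric. A sketch in FTOS terms, using the interface numbers from the configs below (group ID arbitrary):

uplink-state-group 1
 description Fail server ports if the FCoE uplink to the FCF dies
 upstream TenGigabitEthernet 0/52
 downstream TenGigabitEthernet 0/1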
Build 3. EMC VNX directly injecting FCoE into Brocade VDX switches configured in a logical chassis cluster (VCS). VDX switches connected to Dell MXL switches using a single link for data and FCoE. Each server is then multi-homed to a pair of MXL switches using round-robin load balancing. This same idea could be applied if the storage were FC-only, as the VDXs will do the encapsulation.
Pros: Converged.
Cons: More vendors. There are 4 places in this network where a failure could result in exactly half of your storage paths being lost. May* require use of Uplink Failure Detection (UFD) on the FSBs to properly fail FCoE after the failure of an FCF. Data path redundancy is lost.
Notes: This is an older design using VDX 6730s, which are now end-of-life. The 6730s do not allow FCoE to traverse the TRILL fabric, so each path from the storage array to the server is completely isolated to one side of the network.
Configuration:
All of these configurations assume the Brocade VCS fabric is already built and is using all default FCoE settings, maps, VLAN, etc.
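Before configuring interfaces it is worth confirming those defaults are actually in play. On the VDX this should dump the FCoE section of the running config (standard NOS section filtering; the expected values are shown in the “default settings” block near the end of this post):

show running-config fcoe

Look for VLAN 1002, priority 3, and FC-MAP 0E:FC:00; everything below assumes them.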
Build 1
Brocade VDX interfaces connecting to storage array exporting FCoE.
interface TenGigabitEthernet 1/0/1
 mtu 9216
 no fabric isl enable
 no fabric trunk enable
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fcoeport default
 no shutdown
Brocade VDX interfaces connecting to storage array exporting FC.
interface FibreChannel 1/0/1
 no isl-r_rdy
 trunk-enable
 fec-enable
 no shutdown
Brocade VDX interfaces connecting to server.
interface TenGigabitEthernet 1/0/2
 mtu 9216
 no fabric isl enable
 no fabric trunk enable
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fcoeport default
 no shutdown
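With the storage-facing and server-facing ports up, the sanity check is whether the VDX (the FCF here) actually sees FCoE fabric logins from both ends. Command name as I recall it from the NOS command reference; confirm on your code level:

show fcoe login

Each CNA port and each FCoE storage port should show up as a logged-in ENode. If they do not, recheck DCBX (LLDP) on the link before anything else.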
Build 2.
Brocade VDX interfaces connecting to storage array exporting FCoE.
interface TenGigabitEthernet 1/0/1
 mtu 9216
 no fabric isl enable
 no fabric trunk enable
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fcoeport default
 no shutdown
Brocade VDX interfaces connecting to storage array exporting FC.
interface FibreChannel 1/0/1
 no isl-r_rdy
 trunk-enable
 fec-enable
 no shutdown
Brocade VDX vLAG interface connecting to Dell MXL LAG to provide data-path.
interface Port-channel 1
 vlag ignore-split
 mtu 9216
 switchport
 switchport mode trunk
 switchport trunk allowed vlan all
 switchport trunk tag native-vlan
 spanning-tree shutdown
 no shutdown
Dell MXL LAG connecting to Brocade VDX vLAG to provide data-path.
interface Port-channel 1
 no ip address
 mtu 12000
 portmode hybrid
 switchport
 no shutdown
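Quick verification that the data-path bundle formed on both sides (the usual show commands; confirm on your code levels):

! Brocade VDX
show port-channel 1
! Dell MXL
show interfaces port-channel brief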
Brocade VDX interface connecting to Dell MXL interface to provide FCoE.
interface TenGigabitEthernet 1/0/1
 mtu 9216
 no fabric isl enable
 no fabric trunk enable
 switchport
 switchport mode trunk
 switchport trunk allowed vlan none
 switchport trunk tag native-vlan
 spanning-tree shutdown
 fcoeport default
 no shutdown
Dell MXL interface connecting to Brocade VDX interface to provide FCoE.
interface TenGigabitEthernet 0/52
 no ip address
 mtu 12000
 portmode hybrid
 switchport
 fip-snooping port-mode fcf
 !
 protocol lldp
  no advertise dcbx-tlv ets-reco
  dcbx port-role auto-upstream
 no shutdown
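Once this link is up, the MXL should have snooped the VDX’s FCF advertisements. On the MXL (output abbreviated and illustrative):

show fip-snooping fcf

FCF MAC            FCF Interface  VLAN  FC-MAP    FKA_ADV_PERIOD  No. of Enodes
-----------------  -------------  ----  --------  --------------  -------------
00:27:f8:xx:xx:xx  Te 0/52        1002  0e:fc:00  8000            2

No FCF listed here means no FCoE for any server behind this switch.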
Dell MXL VLAN configuration.
interface Vlan 1002
 no ip address
 mtu 2500
 tagged TenGigabitEthernet 0/1-32,41-52
 fip-snooping enable
 no shutdown
Dell MXL feature configuration.
dcb-map FLEXIO_DCB_MAP_PFC_OFF
 no pfc mode on
!
feature fip-snooping
fip-snooping enable
!
protocol lldp
Dell MXL interface connecting to Server.
interface TenGigabitEthernet 0/1
 no ip address
 mtu 12000
 portmode hybrid
 switchport
 spanning-tree pvst edge-port bpduguard
 !
 protocol lldp
  dcbx port-role auto-downstream
 no shutdown
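After a server boots, its fabric login should be visible on the MXL (both are standard FTOS FIP snooping show commands):

show fip-snooping enode
show fip-snooping sessions

The sessions output maps each ENode MAC to the FCF it logged into, the FCoE VLAN, and the fabric-assigned FPMA MAC, which makes it the first place to look when a host sees no LUNs.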
Build 3.
Brocade VDX to Dell MXL
Dell MXL to Brocade VDX
Dell MXL to server
Brocade FCoE to FCF Deployment Guide – http://community.brocade.com/dtscp75322/attachments/dtscp75322/ethernet/1203/1/FCoE%20Multipathing%20and%20LAG_Oct2013.pdf
Brocade Storage Connectivity – http://www.brocade.com/downloads/documents/html_product_manuals/brocade-vcs-storage-dp/GUID-F0C36164-140C-452C-80D9-983A37101E07.html
Brocade VDX (6730)
fcoe - default settings
fcoe
 fabric-map default
  vlan 1002
  priority 3
  virtual-fabric 128
  fcmap 0E:FC:00
  max-enodes 64
  enodes-config local
  advertisement interval 8000
  keep-alive timeout
 !
 map default
  fabric-map default
  cee-map default
lldp - default settings
protocol lldp
 advertise dcbx-fcoe-app-tlv
 advertise dcbx-fcoe-logical-link-tlv
 advertise dcbx-tlv
Red Hat FCoE Interface Configuration – https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fcoe-config.html
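For reference, the host-side procedure in that guide boils down to roughly this on RHEL 6 with a software FCoE initiator (eth3 is a placeholder interface name; hardware-offload CNAs like the ones in these builds handle most of it in firmware instead):

yum install fcoe-utils lldpad
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth3    # one config file per FCoE interface
service lldpad start
service fcoe start
fcoeadm -i                                  # verify interface state and fabric login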
You can always change the native VLAN to some other number (on both sides of the trunk) and VTP updates should flow if you disable VLAN 1 like we do.
Hi there!
Do I get it right that you were able to connect the EMC VNX as an FC target to the VDX 6740 and Dell servers as FCoE initiators in your Build #1? When one maps the FCoE logical port to a physical interface, the VDX switch transitions to Access Gateway mode, thus locking the FlexPort-configured FC ports to N_Port mode only. As far as I know, the EMC VNX does not allow changing the FC port mode of its built-in SP Fibre Channel interfaces. How did you manage to connect the N_Port of the VNX to the N_Port of the VDX and “discover” the VNX SAN at the FCoE initiators?
Hello,
Cool, I was actually looking for something like this… and yes, the new code releases now allow zoning…
I was afraid that the FC target would not work if directly attached to the VDXs…