multipath.conf + ScaleIO + XtremIO

# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## Use user friendly names, instead of using WWIDs as names.
defaults {
user_friendly_names yes
find_multipaths yes
}

#Hide ScaleIO devices.
blacklist {
devnode "^scini[a-z]*"
}

#Multipath XtremIO
devices {
device {
vendor "XtremIO"
product "XtremApp"
path_grouping_policy multibus
path_checker tur
failback immediate
path_selector "queue-length 0"
rr_min_io_rq 1
fast_io_fail_tmo 15
}
}
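
After dropping this into /etc/multipath.conf, reload multipathd and confirm each XtremIO volume shows up as a single multipath device with all paths active.  On a systemd-based host that would look something like:

systemctl reload multipathd
multipath -ll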

Juniper NTP and Protecting the Routing Engine

By default your Juniper device will respond to NTP requests.  This is bad for two reasons.  First, your router can now be used in an NTP reflection attack.  Second, during that reflection attack your routing engine will run out of resources and stop processing truly important things like BGP, OSPF, VRRP, and (insert protocol of choice).

Enabling NTP is easy.

set system ntp server 192.168.1.50
set system ntp server 192.168.1.51

Ta da!  But now your router is also an NTP server, available to be used, or more likely abused, by anyone.

Protecting the routing engine is slightly more complex than enabling NTP as there are a few variables to consider.

If you are using the command ‘set system ntp source-address 192.168.1.1’, this source address must be allowed by the firewall filter so the router can query itself when the ‘show ntp…’ commands are used.  If you are not specifying a source address, the router’s loopback address must be allowed by the firewall filter instead so the router can query itself.

Using a specific source address.

Note that the prefix-list used in the firewall includes the router’s specified ntp source address.

 

NTP

set system ntp server 192.168.1.50
set system ntp server 192.168.1.51
set system ntp source-address 192.168.1.1

Prefix list of valid NTP servers

set policy-options prefix-list ntp-servers 192.168.1.50/32
set policy-options prefix-list ntp-servers 192.168.1.51/32
set policy-options prefix-list ntp-servers 192.168.1.1/32

Loopback interface

set interfaces lo0 unit 0 family inet filter input protect-re
set interfaces lo0 unit 0 family inet address 1.1.1.1/32

Firewall filter

set firewall family inet filter protect-re term allow-ntp from source-prefix-list ntp-servers
set firewall family inet filter protect-re term allow-ntp from protocol udp
set firewall family inet filter protect-re term allow-ntp from port ntp
set firewall family inet filter protect-re term allow-ntp then accept
set firewall family inet filter protect-re term block-ntp from protocol udp
set firewall family inet filter protect-re term block-ntp from port ntp
set firewall family inet filter protect-re term block-ntp then count blocked-ntp
set firewall family inet filter protect-re term block-ntp then discard
set firewall family inet filter protect-re term allow-all then accept
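
Once this is committed you can confirm the router still reaches its NTP servers (this self-query is exactly why the source address is in the prefix-list) and watch the counter for dropped NTP traffic:

show ntp associations
show firewall filter protect-re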

Not using a specific source address.

Note that the prefix-list used in the firewall includes the router’s loopback address.

NTP

set system ntp server 192.168.1.50
set system ntp server 192.168.1.51

Prefix list of valid NTP servers

set policy-options prefix-list ntp-servers 192.168.1.50/32
set policy-options prefix-list ntp-servers 192.168.1.51/32
set policy-options prefix-list ntp-servers 1.1.1.1/32

The loopback interface and firewall filter remain the same.  More information  can be found in Juniper’s knowledge base.

Update:  Logging the dropped packets will also cause excessive Routing Engine processing, which is why the block-ntp term above only counts and discards them rather than logging them.

Juniper SRX Flow vs Packet Mode

Out of the box the Juniper SRX forwards IP traffic based on flows between security zones.  It can be configured to forward traffic based on packets instead, with none of the fancy security features.  In packet mode an SRX acts just like a router or layer 3 switch, which is useful for labs and learning.

Run the following command to get an idea of how your SRX is forwarding traffic.
> show security flow status

By default, inet (IPv4) is the only family configured to forward traffic, and it does so in flow mode.

To disable flow mode entirely, simply delete all of the configuration under the security hierarchy.
# delete security
# commit
# run request system reboot

To enable forwarding for other traffic types (in packet mode), use the following commands:

IPv6
# set security forwarding-options family inet6 mode packet-based

MPLS
# set security forwarding-options family mpls mode packet-based

ISO
# set security forwarding-options family iso mode packet-based

You must now commit the configuration and reboot the device.
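
For example, from configuration mode:

# commit
# run request system reboot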

There is another method that allows you to use both flow and packet mode for the same family, which requires a firewall filter.  I will go over that in another post.
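
As a rough preview, that method hinges on a firewall filter whose matching terms use the packet-mode action.  The sketch below is only an outline with a made-up prefix and interface, not a tested configuration, so verify it against Juniper’s documentation on selective stateless packet-based services.

# set firewall family inet filter bypass-flow term lab from source-address 10.10.10.0/24
# set firewall family inet filter bypass-flow term lab then packet-mode
# set firewall family inet filter bypass-flow term lab then accept
# set firewall family inet filter bypass-flow term everything-else then accept
# set interfaces ge-0/0/0 unit 0 family inet filter input bypass-flow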

LLDP Cisco 3750 + Brocade VDX + Dell MXL + Juniper MX

Here is a brief overview of the  LLDP configuration needed for each device to give you similar information across all your devices.

Cisco 3750
lldp run
lldp tlv-select system-name
lldp tlv-select system-description
lldp tlv-select system-capabilities
lldp tlv-select port-description
lldp tlv-select management-address

Brocade VDX
protocol lldp
advertise optional-tlv management-address
advertise optional-tlv port-description
advertise optional-tlv system-capabilities
advertise optional-tlv system-description
advertise optional-tlv system-name

Dell MXL
protocol lldp
advertise management-tlv management-address system-capabilities system-description system-name
advertise interface-port-desc

Juniper MX
set protocols lldp port-id-subtype interface-name
set protocols lldp interface all
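
Once LLDP is enabled everywhere, the neighbor tables should show roughly the same information on each platform.  The usual verification commands (exact options vary slightly by software release) are:

Cisco 3750, Brocade VDX, and Dell MXL
show lldp neighbors detail

Juniper MX
show lldp neighbors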

OpenStack Kilo (OpenVSwitch) Networking in a nutshell

 

OVS… it’s simple, really!

It’s taken me almost a week to figure out how they expect the OVS networking to work, and no one explains it simply.  So here’s a 30-second explanation that will actually make sense.

You have three OpenVSwitch bridges: br-int, br-ex and br-tun.

The VMs all get ports on br-int, br-ex is used for actual network traffic, and br-tun holds the tunnel interfaces that carry instance traffic between hosts.

OpenVSwitch creates flow rules and virtual patch cables between br-ex and br-int to provide connectivity.

Add your physical interfaces to br-ex, and create a management port with type internal so Linux can add IPs to it.  In the example below we use OVS bonding with load balancing to combine two NICs for redundancy.

 

ovs-neutron

Commands to build this configuration:

ovs-vsctl add-br br-ex
ovs-vsctl add-br br-int
ovs-vsctl add-br br-tun
ovs-vsctl add-bond br-ex bond0 em1 em2 -- set port bond0 bond_mode=balance-slb
ovs-vsctl add-port br-ex mgmt tag=15 -- set interface mgmt type=internal
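
Because the mgmt port is created with type internal, Linux sees it as an ordinary interface and you can address it directly.  A minimal sketch, assuming 10.0.15.10/24 is just an example address for the VLAN 15 management subnet:

ip link set mgmt up
ip addr add 10.0.15.10/24 dev mgmt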

What it should look like:

[root@s2138 ~]# ovs-vsctl show
0646ec2b-3bd3-4bdb-b805-2339a03ad286
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port mgmt
            tag: 15
            Interface mgmt
                type: internal
        Port "bond0"
            Interface "em1"
            Interface "em2"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal

Linux FCOE + Dell Force10 MXL + Brocade VDX Switches + EMC VNX

Huge write-up coming.

 

Unless you’re a Cisco (Nexus) shop end-to-end, there are a few design considerations you need to take into account when it comes to delivering FCoE to your Dell blade servers.

Things to Consider:

-How is the FC from your storage array being encapsulated into Ethernet?

Some storage arrays allow for the direct export of FCoE.  Some storage arrays have only FC connectivity options.  In that case you will need a device to encapsulate FC into FCoE, an FCF (Fibre Channel Forwarder).  Some example FCF devices are the Brocade VDX 6740, Brocade VDX 6730, and Cisco Nexus 5000.

-Are you running some vendor proprietary fabric that allows for multi-hop FCoE like FabricPath or VCS?

If so, great!  If not, you’re going to have a fun time attempting to forward FCoE beyond the first switch.  (Here is a blog explaining those options.)

-Are your servers connected to a true Fibre Channel Forwarder (FCF) access switch, or are they connected to a FIP (FCoE Initialization Protocol) snooping access bridge (switch)?

FIP Snooping Bridges (FSB) vs Fibre Channel Forwarders (FCF): An FSB must connect to an FCF in order for FCoE to function.  An FCF is an FSB that also provides FC services, such as the name server, as well as FC/FCoE encapsulation.

-If you are using FIP Snooping access switches, how are these switches multi-homed?

-FIP Snooping Bridges carrying FCoE cannot be multi-homed to more than one FCF by any means.  No vLAG, mLAG or any other type of split chassis LACP, no spanning-tree, no dual-homing, period.

-FIP Snooping Bridges can, in some cases, connect to a single FCF using multiple links bundled in a standard LACP LAG.

-How are your servers multi-homed?

Servers cannot be connected to a pair of FCFs using vLAG or mLAG.  Servers also cannot be connected to a stack or pair of FSBs using vLAG or mLAG.

What we Have Done:

We have 3 different designs we have implemented.  All of them have their benefits and drawbacks.  This is our attempt to explain them and show you how to configure them.

Build 1.  Brocade VDX switches configured in a logical chassis cluster (VCS) providing FC to FCoE encapsulation as well as access to a multi-homed server using round-robin load balancing, not LACP.

Pros: In a perfect world this is how everything would work.  Redundancy without any extra links and minimal configuration.  Completely converged.

Cons: Dell and Brocade have not come together to build a VDX switch for the M1000e chassis yet.

Notes: You could use 10G pass-through modules in the back of the chassis to connect directly to VDX switches, but that’s at least 96 fibers for a 3-chassis rack and 128 for a 4-chassis rack.

FC-VCS-Server

Build 2. Brocade VDX switches configured in a logical chassis cluster (VCS) providing FC to FCoE encapsulation.  VDX switches connected to Dell MXL switches using vLAGs as well as a dedicated FCoE link per switch.  Each server is then multi-homed to a pair of MXL switches using round-robin load balancing.

Pros: Redundant.  Converged-ish.

Cons: Complicated.  More vendors.  There are 4 places in this network where a failure could result in exactly half of your storage paths being lost.  May* require use of Uplink Failure Detection on the FSBs to properly fail FCoE after the failure of an FCF.

Notes: FCoE links between the VDX and MXL cannot be multi-homed like the data path.  FCoE links can be bundled into a LACP LAG to provide additional bandwidth, but there are specific rules regarding which port groups on the switches you can and cannot use.

FC-VCS-FCOE

Build 3. EMC VNX directly injecting FCoE into Brocade VDX switches configured in a logical chassis cluster (VCS).  VDX switches connected to Dell MXL switches using a single link for data and FCoE.  Each server is then multi-homed to a pair of MXL switches using round-robin load balancing.  This same idea could be applied if the storage was FC only as the VDXs will do the encapsulation.

Pros: Converged.

Cons: More vendors.  There are 4 places in this network where a failure could result in exactly half of your storage paths being lost.  May* require use of Uplink Failure Detection on the FSBs to properly fail FCoE after the failure of an FCF.  Data path redundancy is lost.

Notes:  This is an older design using VDX6730s which are now end-of-life.  The 6730s do not allow FCoE to traverse the TRILL fabric thus each path from the storage array to the server is completely isolated to either side of the network.

FCOE-VDX-MXL-Server

 

Configuration:

All of these configurations assume the Brocade VCS fabric is already built and using all default FCoE settings, maps, vlan, etc.

Build 1

  Brocade VDX interfaces connecting to storage array exporting FCoE.

interface TenGigabitEthernet 1/0/1
mtu 9216
no fabric isl enable
no fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan all
switchport trunk tag native-vlan
spanning-tree shutdown
fcoeport default
no shutdown

Brocade VDX interfaces connecting to storage array exporting FC.

interface FibreChannel 1/0/1
no isl-r_rdy
trunk-enable
fec-enable
no shutdown

Brocade VDX interfaces connecting to server.

interface TenGigabitEthernet 1/0/2
mtu 9216
no fabric isl enable
no fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan all
switchport trunk tag native-vlan
spanning-tree shutdown
fcoeport default
no shutdown
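
With Build 1 cabled up, it is worth confirming that both the storage array and the server CNAs have completed FIP login on the VDX fabric before moving on.  On Network OS this is roughly the following (command name quoted from memory, so verify against the NOS command reference):

show fcoe login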

Build 2.

  Brocade VDX interfaces connecting to storage array exporting FCoE.

interface TenGigabitEthernet 1/0/1
mtu 9216
no fabric isl enable
no fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan all
switchport trunk tag native-vlan
spanning-tree shutdown
fcoeport default
no shutdown

Brocade VDX interfaces connecting to storage array exporting FC.

interface FibreChannel 1/0/1
no isl-r_rdy
trunk-enable
fec-enable
no shutdown

Brocade VDX vLAG interface connecting to Dell MXL LAG to provide data-path.

interface Port-channel 1
vlag ignore-split
mtu 9216
switchport
switchport mode trunk
switchport trunk allowed vlan all
switchport trunk tag native-vlan
spanning-tree shutdown
no shutdown

Dell MXL LAG connecting to Brocade VDX vLAG to provide data-path.

no ip address
mtu 12000
portmode hybrid
switchport
no shutdown

Brocade VDX interface connecting to Dell MXL interface to provide FCOE

interface TenGigabitEthernet 1/0/1
mtu 9216
no fabric isl enable
no fabric trunk enable
switchport
switchport mode trunk
switchport trunk allowed vlan none
switchport trunk tag native-vlan
spanning-tree shutdown
fcoeport default
no shutdown

Dell MXL interface connecting to Brocade VDX interface to provide FCoE

interface TenGigabitEthernet 0/52
no ip address
mtu 12000
portmode hybrid
switchport
fip-snooping port-mode fcf
!
protocol lldp
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
no shutdown

Dell MXL VLAN configuration.

interface Vlan 1002
no ip address
mtu 2500
tagged TenGigabitEthernet 0/1-32,41-52
fip-snooping enable
no shutdown

Dell MXL feature configuration.

dcb-map FLEXIO_DCB_MAP_PFC_OFF
no pfc mode on
!
feature fip-snooping
fip-snooping enable
!
protocol lldp

Dell MXL interface connecting to Server.

interface TenGigabitEthernet 0/1
no ip address
mtu 12000
portmode hybrid
switchport
spanning-tree pvst edge-port bpduguard
!
protocol lldp
dcbx port-role auto-downstream
no shutdown
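
On the MXLs, FIP snooping gives a quick way to confirm the discovered FCF and the ENode sessions being built through the bridge.  Something along these lines (exact output and options vary by FTOS release):

show fip-snooping fcf
show fip-snooping sessions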

 Build 3.

Brocade VDX to Dell MXL

Dell MXL to Brocade VDX

Dell MXL to server

 

Brocade FCOE to FCF Deployment Guide – http://community.brocade.com/dtscp75322/attachments/dtscp75322/ethernet/1203/1/FCoE%20Multipathing%20and%20LAG_Oct2013.pdf

Brocade Storage connectivity

– http://www.brocade.com/downloads/documents/html_product_manuals/brocade-vcs-storage-dp/GUID-F0C36164-140C-452C-80D9-983A37101E07.html




Brocade VDX (6730)
fcoe - default settings
fcoe
 fabric-map default
 vlan 1002
 priority 3
 virtual-fabric 128
 fcmap 0E:FC:00
 max-enodes 64
 enodes-config local
 advertisement interval 8000
 keep-alive timeout
 !
 map default
 fabric-map default
 cee-map default
lldp
protocol lldp
 advertise dcbx-fcoe-app-tlv
 advertise dcbx-fcoe-logical-link-tlv
 advertise dcbx-tlv

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fcoe-config.html

The number of home devices using Wi-Fi that you don’t even think about.

I never really thought about it, but almost all electronics now have built-in wireless and are connected to your network.  Blu-ray players, game consoles, phones, tablets, TVs, printers!

This is what my home network looks like right now.

home_network

 Bandwidth Usage

With on-demand streaming it’s rare to even watch cable TV.  I had no idea how much bandwidth I’ve been using; my ISP must hate me!

traffic_nov trffic_dec

Juniper SA (Junos Pulse) Multi User Authentication

We’ve had a Juniper SA700 for around 5 years now and it has proven to be an absolutely brilliant bit of hardware; however, we’ve always had this issue where it would bump our connection if we connected with the same user multiple times.

To get around this you can enable “Multiple User Sessions” which translates to multiple sessions per user.

On your main window click “Authentication > Signing In” and check the box for “Enable Multiple User Sessions”

juniper_signing_in

Once selected, hit Save Changes.  Now navigate to “Users > User Realms > [Users, or other Realm name] > Authentication Policy > Limits”.

Change this value to some sane number; you don’t want your system being tied up with dead connections.  We’ve opted for 5.

juniper_limits

 

Now click save and enjoy multiple connections!

Dell / Force10 MXL Firmware Bugs

We recently overhauled our whole network, as our 1Gbps network running on unsupported eBay gear wasn’t cutting it anymore.  I’ll go into more detail regarding the upgrade later, but for now I am going to focus on the access switches that we chose: the Dell / Force10 MXL (DF10MXL)!  We chose the Force10 MXL because it offers both 1 and 10Gbps server-side connectivity, 10 and 40Gbps uplinks, “FCoE”, and it’s well priced!  However, with the exception of the couple of issues outlined below, they have been pretty decent switches.

Issue Number 1 – MAC addresses and memory leaks.

We started on firmware version 9.5.0.1.  It did not take us long to realize our VMware environment was a little too much for these switches.  With VMs, and consequently MAC addresses, being moved all over our network due to vMotion, we started to have random IP address reachability issues, and occasionally a switch would reboot.  We quickly learned that issuing the command “clear mac-address-table dynamic all” on the switches servicing the IP address in question resolved the issue and the IP address was again reachable.  After a little time on Google and browsing through Force10 documentation, we found the following in the release notes for firmware version 9.6.0.0, the next release after 9.5.0.1.

Microcode (Resolved) (Resolved in version 9.6.0.0)
PR# 140496
Severity: Sev 2
Synopsis: System may experience memory leak when it learns new MAC addresses continuously.
Release Notes: When MAC addresses are learned continuously, the system may fail to release allocated memory if internal software processes are busy processing newly learned MAC addresses and may experience a reboot due to memory exhaustion.
Workaround: None

We had found our issue… or so we thought.  At the time we did not have access to firmware version 9.6.0.0, so we looked in the archive for the latest release without this issue.  This led us to 9.5(0.0P2).  After a whole day of downgrading switches, 40 in total, our environment calmed down and our issues disappeared.  Yay!

Issue Number 2 – Running hot.

Five weeks later we started to notice some of our switches running extremely hot: 60-100 degrees Celsius, or 140-212 degrees Fahrenheit.  We were seeing a lot of syslog messages from these switches with reboot warnings but no actual reboots.  It didn’t take long for the reboots to start.  The four or five switches that were running in excess of 70 degrees Celsius started to reboot at random intervals.  After beating our way around Dell support we were able to get some answers: firmware version 9.5(0.0P2) contains a bug that does not correctly report temperature / requested fan speed to the M1000e chassis.  The chassis were only running at 30% fan speed regardless of how hot the switches were getting.  As a temporary solution Dell pointed us to the RACADM Command Line Reference Guide found here.  Using this guide we were able to manually set the minimum fan speed on our chassis to cool the switches.  Here is a post explaining exactly how to do that.  We settled on 65% fan speed, which kept the switches cool and the noise level down.
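
For reference, the change we made on each M1000e CMC looked roughly like the following.  The group and object names are quoted from memory of the RACADM thermal settings, so double-check them against the reference guide linked above before using them:

racadm config -g cfgThermal -o cfgThermalMFSPercent 65
racadm getconfig -g cfgThermal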

Issue Number 3 – Stack Formation.

FTOS 9.6.0.0 will not form a 4-switch stack.  No documentation is available as to why.  When the 4th switch joins the stack, the 3rd and 4th switches kernel panic and reboot.

So… Force10 FTOS in a Nutshell.

  • 9.5(0.0P2) contains a bug that does not report temperature and/or requested fan speed correctly to the chassis and as a result it runs too hot and reboots.
  • 9.5.0.1 doesn’t run hot but has MAC address mobility issues, which can apparently be worked around by enabling MAC Masquerading.  This is done with one simple command, “mac-address-table station-move refresh-arp”.  I am hesitant to take this route as we could still experience the memory leak issue noted above.
  • 9.6.0.0 is available and should resolve both of our issues, but I am beginning to wonder what other ‘features’ we may find in the latest release.
  • Update on 9.6.0.0: if you have more than 3 switches in a stack, the 4th switch will continuously reboot as it tries to join the stacked cluster.

More to come.  For now the fans are hard set to 65% and here are some fun graphs to look at showing the temperatures before and after setting the fan speeds.

Operating Temperature Drop on Force10 MXL with 65% minimum fan speed.

MXL-Temps

Power Impact of 65% minimum fan speed.

PDU Power Monitoring

 

Update – 4/1/2015

  • Today is April 1st of 2015.  Dell just recently released 9.7.0.0, and we tested it in a lab for a few weeks before throwing it into production.  FTOS / Dell OS 9.7.0.0 appears to resolve all of our issues.  The switches no longer run hot, the switches form a stack like they should, and we have not had any reboots.  I’ll follow up in a few weeks to let you know if we happen to have any issues.

Update – 6/9/2015

  • 9.7.0.0 is rock solid.  Use it.