OpenStack Kilo (OpenVSwitch) Networking in a nutshell

 

OVS… it's simple, really!

It took me almost a week to figure out how they expect the OVS networking to work, and nobody explains it simply. So here's a 30-second explanation that will actually make sense.

You have three OpenVSwitch bridges: br-int, br-ex, and br-tun.

The VMs all get ports on br-int, br-ex is used for actual network traffic, and br-tun is used for the tunnel interfaces between instances.

OpenVSwitch creates flow rules with virtual patch cables between br-ex and br-int to provide connectivity.

Add your physical interfaces to br-ex, and create a management port of type internal so Linux can assign IPs to it (see the IP example after the output below). In the example below we bond two NICs (balance-slb) for load sharing and redundancy.

 

[Diagram: ovs-neutron bridge layout]

Commands to build this configuration:

ovs-vsctl add-br br-ex
ovs-vsctl add-br br-int
ovs-vsctl add-br br-tun
ovs-vsctl add-bond br-ex bond0 em1 em2 -- set port bond0 bond_mode=balance-slb
ovs-vsctl add-port br-ex mgmt tag=15 -- set interface mgmt type=internal

What it should look like:

[root@s2138 ~]# ovs-vsctl show
0646ec2b-3bd3-4bdb-b805-2339a03ad286
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port mgmt
            tag: 15
            Interface mgmt
                type: internal
        Port "bond0"
            Interface "em1"
            Interface "em2"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
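Because mgmt is an OVS internal port, Linux sees it as a regular interface and you can put an address on it directly. A minimal sketch (the address is a placeholder for your VLAN 15 management subnet):

ip addr add 192.168.15.10/24 dev mgmt
ip link set mgmt up

If you want it persistent across reboots, the openvswitch package also ships network-script helpers that understand DEVICETYPE=ovs and TYPE=OVSIntPort in an ifcfg-mgmt file.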

Ceph Scrubbing Impact on Client IO and Performance

Ceph's default IO priority and class for behind-the-scenes disk operations treat them as required rather than best-effort. Those of us who actually use our storage for services that need performance will quickly find that deep scrub grinds even the most powerful systems to a halt.

Below are the settings to run the scrub at the lowest possible priority. This REQUIRES CFQ as the scheduler for the spindle disks; without CFQ you cannot prioritize IO. Since only one service uses these disks, CFQ performance will be comparable to deadline and noop.
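To check or switch the scheduler on the fly, use sysfs (sdb here is a placeholder for each OSD spindle; the change does not persist across reboots unless you also set it via elevator=, udev, or rc.local):

cat /sys/block/sdb/queue/scheduler
echo cfq > /sys/block/sdb/queue/scheduler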

Inject the new settings for the existing OSD:
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'
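If you want to confirm the values actually landed, you can query a running OSD over its admin socket on the node that hosts it (osd.0 here is just an example):

ceph daemon osd.0 config show | grep osd_disk_thread_ioprio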

Edit the ceph.conf on your storage nodes to set the priority automatically when the OSDs start:
#Reduce impact of scrub.
osd_disk_thread_ioprio_class = "idle"
osd_disk_thread_ioprio_priority = 7

You can go a step further and apply Red Hat's tuned optimizations for your system characteristics:
tuned-adm profile latency-performance
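You can confirm which profile is active with:

tuned-adm active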
This information is referenced from multiple sources.

Reference documentation.
http://dachary.org/?p=3268

Disable scrubbing in realtime to determine its impact on your running cluster.
http://dachary.org/?p=3157

A detailed analysis of the scrubbing io impact.
http://blog.simon.leinen.ch/2015/02/ceph-deep-scrubbing-impact.html

OSD Configuration Reference
http://ceph.com/docs/master/rados/configuration/osd-config-ref/

Red Hat system tuning.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned_adm.html

Perforce P4D init.d script (CentOS)

Basic init script to control p4 / p4d for Perforce.

Uses /var/p4 as the working directory and p4service as the user.
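If the user and directories don't exist yet, something along these lines will set them up (the names match the variables used in the script below; adjust to taste):

useradd -r -m -d /var/p4 p4service
mkdir -p /var/p4/root
chown -R p4service:p4service /var/p4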

#!/bin/sh
#
# p4d        Startup/shutdown script for the Perforce server
#
# chkconfig: 345 85 15
# description: Perforce p4d server daemon

# Source function library; this is where 'daemon' comes from.
. /etc/init.d/functions

prog="Perforce Server"

p4d_bin=/usr/local/bin/p4d
p4_bin=/usr/local/bin/p4
p4user=p4service
p4authserver=p4authserver:1667
p4root=/var/p4/root
p4journal=/var/p4/journal
p4port=1818
p4log=/var/p4/log
p4loglevel=3

start () {
    echo -n $"Starting $prog: "

    # If you wish to use a Perforce auth server, add this to the command line below:
    # -a $p4authserver

    # Start the daemon as the p4user.
    /bin/su $p4user -c "$p4d_bin -r $p4root -J $p4journal -p $p4port -L $p4log -v server=$p4loglevel -d" &>/dev/null
    RETVAL=$?
    echo
}

stop () {
    echo -n $"Stopping $prog: "
    $p4_bin -p $p4port admin stop
    RETVAL=$?
    echo
}

restart () {
    stop
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart}"
        exit 3
esac

exit $RETVAL
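To install it, drop the script into /etc/init.d and register it with chkconfig (assuming you saved it locally as p4d):

cp p4d /etc/init.d/p4d
chmod 755 /etc/init.d/p4d
chkconfig --add p4d
chkconfig p4d on
service p4d start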

Dell M1000e: Manually Configure the Minimum Fan Speed

You can manually configure the minimum fan speed of the M1000e so that the chassis maintains a lower operating temperature.

SSH to the CMC on its IP address, port 22. The user and password will be root and calvin unless they have been changed.

Then run:

racadm config -g cfgThermal -o cfgThermalMFSPercent  75

This will set the minimum fan speed to 75%. You can set it anywhere from 0-100%; in practice 0% still runs the fans at roughly 35%, but you won't be able to tell the difference.
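To read the current value back, you can query the same group (getconfig is the read counterpart to config, assuming your CMC firmware supports it):

racadm getconfig -g cfgThermal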

You can view the requested fan speed by the servers in the chassis by running:

racadm getfanreqinfo

Example:

[Server Module Fan Request Table]
<Slot#>  <Server Name>  <Blade Type>    <Power State>  <Presence>  <Fan Request%>
1        s2086.corp     PowerEdge M610  ON             Present     48
2        s2087.corp     PowerEdge M610  ON             Present     48
3        s2088.corp     PowerEdge M610  ON             Present     48

[Switch Module Fan Request Table]
<IO>      <Name>        <Type>     <Presence>   <Fan Request%>
Switch-1  MXL 10/40GbE  10 GbE KR  Present      30
Switch-2  MXL 10/40GbE  10 GbE KR  Present      30
Switch-3  N/A           None       Not Present  N/A
Switch-4  N/A           None       Not Present  N/A
Switch-5  N/A           None       Not Present  N/A
Switch-6  N/A           None       Not Present  N/A

[Minimum Fan Speed %]
65

Rescan Linux partition table on an active disk with CentOS 6

If you try to rescan the partition table of an active disk it will fail, and you would normally need a reboot for the kernel to discover the new partitions.

You can get around this with partx, which scans for individual new partitions and injects them into the running kernel.

 

# partx -v -a /dev/sda

 

root@linux # partx -l /dev/sda
# 1:      2048-  1026047 (  1024000 sectors,    524 MB)
# 2:   1026048-1048575999 (1047549952 sectors, 536345 MB)
# 3: 1048576000-1572859889 (524283890 sectors, 268433 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
root@linux# partx -v -a /dev/sda
device /dev/sda: start 0 size 1572864000
gpt: 0 slices
dos: 4 slices
# 1:      2048-  1026047 (  1024000 sectors,    524 MB)
# 2:   1026048-1048575999 (1047549952 sectors, 536345 MB)
# 3: 1048576000-1572859889 (524283890 sectors, 268433 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
added partition 3
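A quick way to confirm the kernel now sees the new partition (sda3 in this example) is to check /proc/partitions:

grep sda /proc/partitions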

 

Installing OpenVSwitch 2.3.1 LTS on CentOS 6

yum install kernel-headers kernel-devel gcc make python-devel openssl-devel graphviz kernel-debug-devel automake rpm-build redhat-rpm-config libtool git

cd /root/

wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.64.tar.gz

tar xvf autoconf-2.64.tar.gz

cd autoconf-2.64/

./configure

make

make install

 

cd /root/

wget http://openvswitch.org/releases/openvswitch-2.3.1.tar.gz -O /root/openvswitch-2.3.1.tar.gz

 

mkdir -p /root/rpmbuild/SOURCES

cp /root/openvswitch-2.3.1.tar.gz /root/rpmbuild/SOURCES/

tar xvf /root/openvswitch-2.3.1.tar.gz
cd /root/openvswitch-2.3.1/

rpmbuild -bb rhel/openvswitch.spec
rpmbuild -bb rhel/openvswitch-kmod-rhel6.spec

rpm -ivh /root/rpmbuild/RPMS/*/*.rpm
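Once the RPMs are installed, start the service and verify it is running (assuming the init script installed by the package is named openvswitch, the default on CentOS 6):

service openvswitch start
chkconfig openvswitch on
ovs-vsctl show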

 

You can also use our public repo here for CloudStack:

http://mirror.beyondhosting.net/Cloudstack/
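If you would rather pull packages with yum than download them by hand, a repo file along these lines should work as a starting point (the baseurl is the mirror above; gpgcheck is disabled here because I have not verified whether the packages are signed):

[beyondhosting-cloudstack]
name=Beyond Hosting CloudStack mirror
baseurl=http://mirror.beyondhosting.net/Cloudstack/
enabled=1
gpgcheck=0

Save it as /etc/yum.repos.d/beyondhosting-cloudstack.repo.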