Enabling the Neutron Port Security Extension on an existing installation.

Neutron port security offers a lot of great features, but it can get in the way of a fully virtualized datacenter.

Thankfully, with the port security extension you can control which ports get MAC/ARP filtering and which don't.
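For example, once the extension is loaded you can toggle filtering on a single port. The UUID below is a placeholder, and you may need to strip the port's security groups first with --no-security-groups:

neutron port-update <port-uuid> --port-security-enabled=False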

The problem:

If you enable port security in ML2 after you install OpenStack, you will need to update the database for your existing networks, or you will hit all sorts of provisioning errors and problems creating ports.

The Solution:

Navigate to your neutron database and look at the "networksecuritybindings" table.

For this example I will show you what it looks like in phpMyAdmin.

[Screenshot: the networksecuritybindings table in phpMyAdmin]

As you can see, the table maps each network UUID to a 1/0 flag for the port security default.

Simply insert a row for your network with a default value to fix it:

INSERT INTO `neutron`.`networksecuritybindings` (`network_id`, `port_security_enabled`) VALUES ('4d2da18c-3563-485b-8781-bf5edded6ffb', '1');
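If you have many networks, you can first list the ones that are missing a binding. This is a sketch against the stock neutron schema (table and column names can vary between releases):

SELECT n.id FROM `neutron`.`networks` n
LEFT JOIN `neutron`.`networksecuritybindings` b ON b.network_id = n.id
WHERE b.network_id IS NULL;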

multipath.conf + ScaleIO + XtremIO

# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## Use user friendly names, instead of using WWIDs as names.
defaults {
    user_friendly_names yes
    find_multipaths yes
}

# Hide ScaleIO devices.
blacklist {
    devnode "^scini[a-z]*"
}

# Multipath settings for XtremIO.
devices {
    device {
        vendor "XtremIO"
        product "XtremApp"
        path_grouping_policy multibus
        path_checker tur
        failback immediate
        path_selector "queue-length 0"
        rr_min_io_rq 1
        fast_io_fail_tmo 15
    }
}
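After editing /etc/multipath.conf, restart the daemon and verify the result; multipath -ll should show your XtremIO LUNs with all paths in a single group, and the scini devices should no longer appear:

systemctl restart multipathd
multipath -ll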

Ceph — Basic management of OSD location and weight in the CRUSH map

It's amazing how crappy hard disks are! No really! We operate a 100-disk Ceph pool for our object-based backups, and it's almost a weekly task to replace a failing drive. I've only seen one go entirely unresponsive; normally we get read errors and faults that stop the OSD service and show up in dmesg.

To change the weight of a drive:

ceph osd crush reweight osd.90 1.82

To replace a drive:

# Remove the old disk
ceph osd out osd.31       # mark it out so data rebalances off of it
ceph osd crush rm osd.31  # remove it from the CRUSH map
ceph osd rm osd.31        # remove the OSD entry itself
ceph auth del osd.31      # delete its cephx key
# Provision the new disk
ceph-deploy osd prepare --overwrite-conf hostname01:/dev/diskname

To move a host into a different root bucket:

ceph osd crush move hostname01 root=BUCKETNAME
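After any of these changes it's worth sanity-checking the CRUSH tree and watching the cluster rebalance (osd.90, hostname01, and BUCKETNAME above are examples from my environment):

ceph osd tree
ceph -w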

OpenStack Kilo (Open vSwitch) Networking in a nutshell

OVS… it's simple, really!

It's taken me almost a week to figure out how they expect the OVS networking to work, and no one explains it simply. So here's a 30-second explanation that will actually make sense.

You have three Open vSwitch bridges: br-int, br-ex, and br-tun.

The VMs all get ports on br-int, br-ex carries the actual physical network traffic, and br-tun holds the tunnel interfaces that carry traffic between instances on different nodes.

Open vSwitch creates flow rules and virtual patch cables between br-ex and br-int to provide connectivity.

Add your physical interfaces to br-ex, and create a management port of type internal so Linux can put IPs on it. In the example below we bond two NICs with load balancing for redundancy.

[Diagram: Neutron OVS bridge layout, with bond0 on br-ex patched to br-int and br-tun]

Commands to build this configuration:

ovs-vsctl add-br br-ex
ovs-vsctl add-br br-int
ovs-vsctl add-br br-tun
ovs-vsctl add-bond br-ex bond0 em1 em2 -- set port bond0 bond_mode=balance-slb
ovs-vsctl add-port br-ex mgmt tag=15 -- set interface mgmt type=internal

What it should look like:

[root@s2138 ~]# ovs-vsctl show
0646ec2b-3bd3-4bdb-b805-2339a03ad286
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port mgmt
            tag: 15
            Interface mgmt
                type: internal
        Port "bond0"
            Interface "em1"
            Interface "em2"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
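To put an address on the mgmt port, treat it like any other Linux interface. The address below is a placeholder, and on CentOS you would make it persistent with an ifcfg-mgmt file:

ip link set mgmt up
ip addr add 192.0.2.10/24 dev mgmt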

Ceph scrubbing impact on client IO and performance.

Ceph's default IO priority and class for behind-the-scenes disk operations treat them as required rather than best-effort. Those of us who actually utilize our storage for performance-sensitive services will quickly find that deep scrub grinds even the most powerful systems to a halt.

Below are the settings to run the scrub at the lowest possible priority. This REQUIRES CFQ as the scheduler for the spindle disks; without CFQ you cannot prioritize IO. Since only one service utilizes these disks, CFQ performance will be comparable to deadline and noop.

Inject the new settings into the existing OSDs:
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'

Edit the ceph.conf on your storage nodes to set the priority automatically at startup:
# Reduce the impact of scrubbing (these are OSD options, typically placed under [osd]).
osd_disk_thread_ioprio_class = "idle"
osd_disk_thread_ioprio_priority = 7
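You can verify both pieces took effect; sdb is a placeholder for an OSD data disk, and the ceph daemon command has to run on the node hosting osd.0:

# Confirm CFQ is the active scheduler for the disk
cat /sys/block/sdb/queue/scheduler
# Confirm the OSD picked up the new values
ceph daemon osd.0 config show | grep ioprio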

You can go a step further and apply Red Hat's tuned optimizations for these system characteristics:

tuned-adm profile latency-performance
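Confirm the profile is active with:

tuned-adm active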
This information was pulled together from multiple sources:

Reference documentation.
http://dachary.org/?p=3268

Disable scrubbing in realtime to determine its impact on your running cluster.
http://dachary.org/?p=3157

A detailed analysis of the scrubbing io impact.
http://blog.simon.leinen.ch/2015/02/ceph-deep-scrubbing-impact.html

OSD Configuration Reference
http://ceph.com/docs/master/rados/configuration/osd-config-ref/

Red Hat system tuning.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned_adm.html

Perforce P4D init.d script (CentOS)

Basic init script to control p4 / p4d for Perforce.

Uses /var/p4 as the working directory and p4service as the user.

#!/bin/sh
#
# p4d          Startup/shutdown script for Perforce
#
# chkconfig: 345 85 15
# description: Perforce Server
#

# Source function library. This is where 'daemon' and friends come from.
. /etc/init.d/functions

prog="Perforce Server"

p4d_bin=/usr/local/bin/p4d
p4_bin=/usr/local/bin/p4
p4user=p4service
p4authserver=p4authserver:1667
p4root=/var/p4/root
p4journal=/var/p4/journal
p4port=1818
p4log=/var/p4/log
p4loglevel=3
RETVAL=0

start () {
    echo -n $"Starting $prog: "
    # If you wish to use a Perforce auth server, add this to the command line below:
    #   -a $p4authserver
    # Start the daemon as the p4user.
    /bin/su $p4user -c "$p4d_bin -r $p4root -J $p4journal -p $p4port -L $p4log -v server=$p4loglevel -d" &>/dev/null
    RETVAL=$?
    echo
}

stop () {
    echo -n $"Stopping $prog: "
    $p4_bin -p $p4port admin stop
    RETVAL=$?
    echo
}

restart() {
    stop
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart}"
        exit 3
esac

exit $RETVAL
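A minimal way to install and enable the script, assuming you save it as /etc/init.d/p4d (the name is up to you):

chmod 755 /etc/init.d/p4d
chkconfig --add p4d
chkconfig p4d on
service p4d start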