Free up disk space from deleted files under running processes.

A lot of the time a large log file will grow and need to be removed, but most of the time the space cannot actually be reclaimed by deleting or clearing the file: the disk blocks are not freed until the service holding it releases its file descriptor.

Identify the file:

List files that have been deleted but whose descriptors have not yet been released (`+L1` selects open files with a link count below 1; `-a` ANDs the selection options):

root@osc-1015 #> lsof -a +L1
COMMAND      PID USER   FD   TYPE DEVICE   SIZE/OFF NLINK   NODE NAME
systemd-j   1059 root  txt    REG  253,4     278808     0  11628 /usr/lib/systemd/systemd-journald;570ba957 (deleted)
systemd-l   1451 root  txt    REG  253,4     584560     0  33117 /usr/lib/systemd/systemd-logind;570ba957 (deleted)
monitor     1617 root    5w   REG  253,4        500     0 261586 /var/log/openvswitch/ovsdb-server.log-20160404 (deleted)
monitor     1617 root    7u   REG  253,4        141     0     17 /tmp/tmpfsSX0WX (deleted)
ovsdb-ser   1619 root    7u   REG  253,4        141     0     17 /tmp/tmpfsSX0WX (deleted)
monitor     1722 root    3w   REG  253,4     474455     0 261589 /var/log/openvswitch/ovs-vswitchd.log-20160404 (deleted)
ceph-osd   20462 root  txt    REG  253,4   11589728     0  33573 /usr/bin/ceph-osd;570b9a69 (deleted)
ceph-osd   20686 root  txt    REG  253,4   11589728     0  33573 /usr/bin/ceph-osd;570b9a69 (deleted)
qemu-kvm  107850 qemu    8w   REG  253,4 2207794598     0 261623 /var/lib/nova/instances/8921a9ef-81c4-4a06-be00-7cad86bd6a1c/console.log (deleted)

In this instance I need to clear the console.log file held open by qemu-kvm (PID 107850, FD 8).

Truncate the file through procfs:

Now we reclaim the space by truncating the deleted file through /proc. The key parts here are the PID and the FD from the lsof output: drop the mode letter (w/u) from the FD column and use just the number.

root@osc-1015 #> : > "/proc/107850/fd/8"

Once run, the space is freed immediately. The process still holds its descriptor and can keep writing; note it will continue at its previous offset, so the file may come back sparse.
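The whole trick can be reproduced safely on any Linux box. This self-contained sketch (throwaway file names, nothing from the real example above) grows a file, holds it open with tail, deletes it, and then truncates it through /proc just like the command above:

```shell
tmpfile=$(mktemp)                        # stand-in for the big log file
dd if=/dev/zero of="$tmpfile" bs=1024 count=1024 status=none
tail -f "$tmpfile" >/dev/null 2>&1 &     # background process holding an open FD
pid=$!
sleep 1                                  # give tail a moment to open the file
rm "$tmpfile"                            # unlink it; the space is NOT freed yet
size=unknown
for fd in /proc/$pid/fd/*; do            # find the FD pointing at the deleted file
  case "$(readlink "$fd")" in
    "$tmpfile (deleted)")
      : > "$fd"                          # truncate through procfs
      size=$(wc -c < "$fd")              # the deleted inode is now 0 bytes
      ;;
  esac
done
kill $pid
echo "$size"
```

In real life you skip the loop: lsof already told you the PID and FD number.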

Enabling the Neutron Port Security Extension on an existing installation.

Neutron port security offers a lot of great features, but it can get in the way of a fully virtualized datacenter.

Thankfully, with the port security extension you can control which ports have MAC/ARP filtering and which don’t.

The problem:

If you enable port security in ML2 after installing OpenStack, you will need to update the database for your existing networks, or you will hit all sorts of provisioning errors and problems creating ports.

The Solution:

Navigate to your neutron database and look at the “networksecuritybindings” table.

For this example I will show you what it looks like in phpmyadmin.

[screenshot: the networksecuritybindings table in phpMyAdmin]

As you can see, the table contains the network UUID and a 1/0 flag for whether port security defaults to enabled on that network.

Simply insert your network with a default value to fix it:

INSERT INTO `neutron`.`networksecuritybindings` (`network_id`, `port_security_enabled`) VALUES ('4d2da18c-3563-485b-8781-bf5edded6ffb', '1');
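If you have more than one network to fix, the statement is easy to script. Here is a hypothetical shell helper (the function name is mine, not a neutron tool) that prints the same INSERT for any network UUID, so you can feed the output to mysql:

```shell
# Hypothetical helper: emit the binding INSERT for a given network UUID,
# defaulting port security to enabled (1).
gen_binding_sql() {
  printf "INSERT INTO \`neutron\`.\`networksecuritybindings\` (\`network_id\`, \`port_security_enabled\`) VALUES ('%s', '1');\n" "$1"
}

sql=$(gen_binding_sql 4d2da18c-3563-485b-8781-bf5edded6ffb)
echo "$sql"
```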

multipath.conf + ScaleIO + XtremIO

# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## Use user friendly names, instead of using WWIDs as names.
defaults {
    user_friendly_names yes
    find_multipaths yes
}

# Hide ScaleIO devices (they are handled by the scini driver, not multipath).
blacklist {
    devnode "^scini[a-z]*"
}

# Multipath settings for XtremIO volumes.
devices {
    device {
        vendor               "XtremIO"
        product              "XtremApp"
        path_grouping_policy multibus
        path_checker         tur
        failback             immediate
        path_selector        "queue-length 0"
        rr_min_io_rq         1
        fast_io_fail_tmo     15
    }
}
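A quick sanity check on the blacklist pattern, using sample devnode names (the names here are illustrative): `^scini[a-z]*` should catch the ScaleIO scini devices without hiding ordinary sd devices:

```shell
# Mimic multipath's regex devnode matching with grep -E.
is_blacklisted() { printf '%s\n' "$1" | grep -Eq '^scini[a-z]*'; }

is_blacklisted scinia && echo "scinia hidden"
is_blacklisted sda    || echo "sda kept"
```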

Ceph — Basic Management of OSD location and weight in the crushmap

It’s amazing how flaky hard disks are! No, really! We operate a 100-disk Ceph pool for our object-based backups, and it’s almost a weekly task to replace a failing drive. I’ve only seen one go entirely unresponsive; normally we get read errors and similar drive faults that stop the osd service and show up in dmesg.

To change the weight of a drive:

ceph osd crush reweight osd.90 1.82
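CRUSH weights conventionally equal the drive’s capacity in TiB, which is where a number like 1.82 comes from (a nominal “2 TB” drive). A quick sketch of that arithmetic, assuming the capacity-in-TiB convention:

```shell
# Convert a drive's size in bytes to a CRUSH weight in TiB.
bytes=2000398934016   # a typical "2 TB" drive as reported by the vendor
weight=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (1024 ^ 4) }')
echo "$weight"
```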

To replace a drive:

#Remove the old disk (stop the osd daemon on its host first)
ceph osd out osd.31
ceph osd crush rm osd.31
ceph osd rm osd.31
ceph auth del osd.31
#Provision the new disk
ceph-deploy osd prepare --overwrite-conf hostname01:/dev/diskname

To move a host into a different root bucket:

ceph osd crush move hostname01 root=BUCKETNAME