Neutron DB upgrade failure.

"Cannot change column 'ipsec_site_conn_id': used in a foreign key constraint 'cisco_csr_identifier_map_ibfk_1'") [SQL: u'ALTER TABLE cisco_csr_identifier_map MODIFY ipsec_site_conn_id VARCHAR(36) NULL']

The upgrade SQL can’t run because the column is referenced by a foreign key constraint.

 

Drop the foreign key constraint on the ipsec_site_conn_id column, shrink the column from VARCHAR(64) to VARCHAR(36) (the size the migration's ALTER is trying to apply), then re-create the constraint. For example:
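A minimal sketch of the manual fix in MySQL. The constraint name comes from the error above; the table and column the re-created constraint points at (ipsec_site_connections.id) are an assumption based on the Neutron VPNaaS schema, so check SHOW CREATE TABLE first and mirror exactly what it reports (including any ON DELETE clause):

SHOW CREATE TABLE cisco_csr_identifier_map;   -- note the exact FK definition before touching it

ALTER TABLE cisco_csr_identifier_map DROP FOREIGN KEY cisco_csr_identifier_map_ibfk_1;

ALTER TABLE cisco_csr_identifier_map MODIFY ipsec_site_conn_id VARCHAR(36) NULL;

ALTER TABLE cisco_csr_identifier_map ADD CONSTRAINT cisco_csr_identifier_map_ibfk_1 FOREIGN KEY (ipsec_site_conn_id) REFERENCES ipsec_site_connections (id);

Then re-run the neutron-db-manage upgrade.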

 

Increase Ceph cluster pg and pgp placement groups without downtime

From the mailing list:

In fact, when you increase your pg number, the new pgs have to peer first, and during this time a lot of pgs will be unreachable. The best way to increase the number of PGs in a cluster (you'll need to adjust the number of PGPs too) is:

 

  • Don’t forget to apply Goncalo's advice (recovery/backfill throttling) to keep your cluster responsive for client operations; otherwise, all the I/O and CPU will go to recovery and your cluster will be unreachable. Be sure that all these parameters are in place before increasing the PG count (see the throttling example at the end of this section).
  • Stop scrub and deep-scrub operations and wait for any in progress to finish:

ceph osd set noscrub
ceph osd set nodeep-scrub

  • set your cluster in maintenance mode with:
ceph osd set norecover
ceph osd set nobackfill
ceph osd set nodown
ceph osd set noout

wait until your cluster no longer has any scrub or deep-scrub operations running
  • increase the pg number in small increments, e.g. 256 at a time (example commands for these two steps are shown after this list)
  • wait for the cluster to create and peer the new pgs (about 30 seconds)
  • increase the pgp number by the same increment
  • wait for the cluster to create and peer again (about 30 seconds)

Repeat the last four steps until you reach the target number of pgs and pgps.

At this point, your cluster is still functional.

  • Now unset the maintenance mode:
ceph osd unset noout
ceph osd unset nodown
ceph osd unset nobackfill
ceph osd unset norecover

It will take some time to rebalance all the pgs, but at the end you will have a cluster with all pgs active+clean. Throughout the operation, your cluster will remain functional as long as the Goncalo throttling parameters are in place.

  • When all the pgs are active+clean, you can re-enable the scrub and deep-scrub operations

ceph osd unset noscrub
ceph osd unset nodeep-scrub
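For the two "increase" steps in the list above, the actual commands are ceph osd pool set; the pool name and target values below are placeholders, not something from the original mail:

ceph osd pool set <pool-name> pg_num <current pg_num + 256>

ceph -s    # wait until the new pgs are created and peered before continuing

ceph osd pool set <pool-name> pgp_num <same value as pg_num>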

Some more handy tips on dealing with OSD timeouts: http://cephnotes.ksperis.com/blog/2017/03/03/dealing-with-some-osd-timeouts
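As for the "Goncalo" parameters referenced above: I don't have the original mail, but the recovery/backfill throttling knobs usually meant are the ones below (values are illustrative; tune for your hardware). They can be injected at runtime and persisted in ceph.conf under [osd]:

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1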

Recovering CentOS 7 with software RAID (mdadm)

Boot into CentOS rescue mode from a live disk.

Edit /etc/mdadm.conf and list the RAID member devices:

DEVICE /dev/sda1

DEVICE /dev/sdb1

mdadm --examine --scan

mdadm --examine --scan >> /etc/mdadm.conf
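The appended ARRAY lines will look roughly like this (the UUID and name below are made-up placeholders; the real values come from the scan):

ARRAY /dev/md/0 metadata=1.2 UUID=00000000:00000000:00000000:00000000 name=localhost:0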

mdadm --assemble --scan /dev/mdX




mount /dev/mdX2 /mnt/sysroot

mount /dev/mdX1 /mnt/sysroot/boot

mount --bind /sys /mnt/sysroot/sys

mount --bind /proc /mnt/sysroot/proc

mount --bind /dev /mnt/sysroot/dev

chroot /mnt/sysroot/

grub2-mkconfig -o /boot/grub2/grub.cfg

exit

umount /mnt/sysroot/sys

umount /mnt/sysroot/proc

umount /mnt/sysroot/dev

umount /mnt/sysroot/boot

umount /mnt/sysroot/

sync
reboot

Dell MD 36XX / MD 32XX passwords + lockdown clearing. MD3600 MD3200 MD3620 MD3620F MD3850 MD3800f

Connect over the serial console with screen; the baud rate is 115200, 8N1.
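For example (the device path /dev/ttyUSB0 is just an assumption for a USB serial adapter; screen's defaults at that speed line up with 8N1):

screen /dev/ttyUSB0 115200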

Press Ctrl+A, then Ctrl+B (screen's key binding for sending a serial break), then hit Escape.

VXlogin: shellUsr

password: DF4m/2>

Run: lemClearLockdown

Enjoy your now-working MD3600, which, judging by the banner below, is a NetApp product:

==============================================
Title:     Disk Array Controller
Copyright 2008-2012 NetApp, Inc. All Rights Reserved.

Name:      RC
Version:   07.84.44.60
Date:      10/25/2012
Time:      14:41:57 CDT
Models:    2660
Manager:   devmgr.v1084api04.Manager
==============================================

Ceph RBD Watchers shows nothing but I can’t delete it!

So this issue is pretty interesting.

In my situation I had deleted the /dev/rbdX device node, so I couldn't unmap the volume; but because it was still mapped, the rbd header was locked.

I was able to determine this with:

 

[root@ceph0-mon0 ~]# rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1
rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':
    size 145 GB in 37120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.75476776dfc3c0
    format: 2
    features: layering, striping
    flags:
    stripe unit: 4096 kB
    stripe count: 1

So then I looked at the RBD header:

[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=7
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=8
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=9

This led me to the host in question.

root@osc-1001> rbd showmapped
id pool                         image                                        snap device
0  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f  -    /dev/rbd0
1  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f  -    /dev/rbd1
2  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd2
3  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd3
4  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd4

Sure enough, those were mapped, but none of the /dev/rbdX device nodes existed, so I could not unmap them.

The only solution I have found is to reboot the host.
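For completeness, the kernel rbd driver does expose a sysfs hook that rbd unmap uses under the hood, and in theory you can poke it directly with the device id from the first column of rbd showmapped (this is a general note, not something from the troubleshooting session above, and it may well hit the same bug):

echo 2 > /sys/bus/rbd/remove    # ask the kernel to drop the mapping for id 2 (/dev/rbd2)

As noted, though, rebooting was what actually worked here.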

— Update:  http://tracker.ceph.com/issues/2654

 

Looks like this was fixed in 3.4... my kernel is 3.10. SOL!