Ceph RBD watchers show nothing but I can’t delete it!

So this issue is pretty interesting.

In my situation I had deleted the /dev/rbdX device nodes, so I couldn’t unmap the volume.  And because it was still mapped, the RBD header still had watchers on it, which is what blocks deleting the image.

I was able to determine this with:


[root@ceph0-mon0 ~]# rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1
rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':
	size 145 GB in 37120 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.75476776dfc3c0
	format: 2
	features: layering, striping
	flags:
	stripe unit: 4096 kB
	stripe count: 1
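The useful bit in that output is block_name_prefix: the hex string after rbd_data. (75476776dfc3c0 here) is the image’s internal id, and the header object is just rbd_header.<id>. If you want to grab it in one shot, something along these lines should print the header object name (same pool and image as above):

rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1 \
  | awk -F 'rbd_data.' '/block_name_prefix/ {print "rbd_header." $2}'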

So then I looked at the RBD header:

[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=7
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=8
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=9
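Worth noting while you’re here: watchers aren’t the only thing that can pin an image; a stale advisory lock will also stop a delete. Checking for one (and removing it if needed) looks roughly like this; the lock-id and locker values come from the list output, so treat the second command as a sketch:

rbd lock list cinder-capacity-vol.prd.cin1/volume-480ee746-d9d1-4625-833c-8573e2cb7a39
rbd lock remove cinder-capacity-vol.prd.cin1/volume-480ee746-d9d1-4625-833c-8573e2cb7a39 <lock-id> <locker>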

That led me to the host in question.

root@osc-1001> rbd showmapped
id pool                         image                                       snap device
0  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f -    /dev/rbd0
1  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f -    /dev/rbd1
2  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd2
3  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd3
4  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd4

Sure enough, those images were mapped.  But none of the /dev/rbdX device nodes existed, so I could not unmap them.
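A quick way to see that mismatch (the kernel still holds the mappings, the device nodes are gone), assuming the standard krbd sysfs layout:

ls /sys/bus/rbd/devices    # kernel-side rbd mappings, one entry per id (0 1 2 3 4)
ls -l /dev/rbd*            # the device nodes that should correspond to them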

The only solution I have found is to reboot the host.
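For completeness: the krbd sysfs interface can in principle drop a mapping even when the /dev node is missing, by writing the id from rbd showmapped into /sys/bus/rbd/remove. I’m leaving it here only as an untested sketch, not something verified on this host, and it will still refuse with EBUSY if anything holds the mapping open:

echo 2 > /sys/bus/rbd/remove    # 2 = the id column from rbd showmapped
# some kernels run krbd in single-major mode and use /sys/bus/rbd/remove_single_major instead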

— Update:  http://tracker.ceph.com/issues/2654


Looks like this is fixed in 3.4… My kernel is 3.10.  SOL!

OpenStack Kilo to Liberty… Where did my config drive go?!

If you’re using a config drive, it is by default stored on the compute node under /var/lib/nova/instances/VMUUID.

When you move to Liberty, everything is stored in the Ceph vms pool alongside the instance disks, to maintain highly available access.
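This applies when the compute nodes use the libvirt RBD image backend, which looks roughly like this in nova.conf (assuming the same pool and client names used in the import command below; the secret UUID is whatever your deployment uses):

[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = cinder
rbd_secret_uuid = <your libvirt secret UUID>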

If you have machines whose config drives are still sitting on local disk, you will need to import them into Ceph:

rbd import -p vms --id cinder /var/lib/nova/instances/VMUUID/disk.config VMUUID_disk.config
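If there are a lot of instances, a rough loop over the instances directory does the same thing. Treat it as a sketch under the same assumptions as above (vms pool, cinder client id); it simply skips anything that has no local disk.config:

for dir in /var/lib/nova/instances/*/; do
    uuid=$(basename "$dir")
    # only instances that actually kept a local config drive
    if [ -f "${dir}disk.config" ]; then
        rbd import -p vms --id cinder "${dir}disk.config" "${uuid}_disk.config"
    fi
done

# afterwards, check the objects landed in the pool:
rbd ls -p vms --id cinder | grep disk.config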