So this issue is pretty interesting.
In my situation I had deleted the /dev/rbdX device node, so I couldn't unmap the volume. But because the image was still mapped, the kernel client still held a watch on the RBD header, effectively locking it.
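For the record, unmap wants the device path, which is exactly what was gone. Just as a sketch (the /dev/rbd2 path is the one that shows up in the showmapped output further down):

rbd unmap /dev/rbd2   # goes nowhere here, since /dev/rbd2 no longer exists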
I was able to determine this with:
[root@ceph0-mon0 ~]# rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1
rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':
size 145 GB in 37120 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.75476776dfc3c0
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1
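The header object name isn't magic: for a format 2 image, the id after rbd_data. in block_name_prefix is the same id used in rbd_header.<id>. Something like this pulls it out (same pool and image as above; the awk is just a quick sketch):

ID=$(rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1 | awk -F 'rbd_data.' '/block_name_prefix/ {print $2}')
echo "rbd_header.${ID}"   # rbd_header.75476776dfc3c0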
So then I looked at the RBD header:
[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=7
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=8
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=9
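The address in each watcher line is the kernel client that still holds the watch, so the IP is what identifies the offending host. Assuming your client nodes have reverse DNS or /etc/hosts entries, a quick lookup is enough:

getent hosts 10.1.8.82   # in my case this pointed at osc-1001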
So this led me to the host in question.
root@osc-1001> rbd showmapped
id pool image snap device
0 cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f - /dev/rbd0
1 cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f - /dev/rbd1
2 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd2
3 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd3
4 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd4
Sure enough, those were mapped, but none of the /dev/rbdX device nodes actually existed, so I couldn't unmap them.
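A quick sanity check confirms it. This is just a sketch, assuming the device path is the last column of showmapped (which it is on this version):

rbd showmapped | awk 'NR>1 {print $NF}' | while read dev; do
    # check whether the block device node for each mapping is still present
    if [ -e "$dev" ]; then
        echo "$dev exists"
    else
        echo "$dev is missing"
    fi
done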
The only solution I have found is to reboot the host.
Update: http://tracker.ceph.com/issues/2654
Looks like this was fixed in 3.4… my kernel is 3.10, and I still hit it. SOL!