Recovering CentOS 7 with software RAID (mdadm)

Boot into rescue mode from a CentOS live disk.

Edit /etc/mdadm.conf and declare the RAID member devices:

DEVICE /dev/sda1

DEVICE /dev/sdb1

mdadm --examine --scan

mdadm --examine --scan >> /etc/mdadm.conf
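The appended lines should look roughly like the following (the UUIDs and array names here are placeholders for illustration, not real values):

```
ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md1 metadata=1.2 name=localhost:1 UUID=11111111:22222222:33333333:44444444
```

Review them before assembling; stale ARRAY lines from a previous install can confuse --assemble --scan.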

mdadm --assemble --scan /dev/mdX

mkdir -p /mnt/sysroot

mount /dev/mdX2 /mnt/sysroot

mount /dev/mdX1 /mnt/sysroot/boot

mount --bind /sys /mnt/sysroot/sys

mount --bind /proc /mnt/sysroot/proc

mount --bind /dev /mnt/sysroot/dev

chroot /mnt/sysroot/

grub2-mkconfig -o /boot/grub2/grub.cfg


umount /mnt/sysroot/sys

umount /mnt/sysroot/proc

umount /mnt/sysroot/dev

umount /mnt/sysroot/boot

umount /mnt/sysroot/


Dell MD36XX / MD32XX serial passwords + lockdown clearing (MD3620, MD3620F, MD3850, MD3800F)

Connect with screen; serial settings are 115200 8N1 (e.g. screen /dev/ttyUSB0 115200; the device path will vary).

Press Ctrl+A, then Ctrl+B, then hit Escape.

VXlogin: shellUsr

password: DF4m/2>

Run: lemClearLockdown


Enjoy your now-working MD3600, which, judging by the firmware banner below, is a rebadged NetApp product:


Title:     Disk Array Controller
Copyright 2008-2012 NetApp, Inc. All Rights Reserved.

Name:      RC
Date:      10/25/2012
Time:      14:41:57 CDT
Models:    2660
Manager:   devmgr.v1084api04.Manager

Ceph RBD Watchers shows nothing but I can’t delete it!

So this issue is pretty interesting.

In my situation I had deleted the /dev/rbdX device node, so I couldn't unmap the volume. And because the volume was still mapped, its RBD header still had watchers on it.

I was able to determine this with:


[root@ceph0-mon0 ~]# rbd info  volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1

rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':

size 145 GB in 37120 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.75476776dfc3c0

format: 2

features: layering, striping


stripe unit: 4096 kB

stripe count: 1

So then I looked at the RBD header:

[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0

watcher= client.7590353 cookie=7

watcher= client.7590353 cookie=8

watcher= client.7590353 cookie=9

That led me to the host in question.

root@osc-1001> rbd showmapped

id pool                         image                                       snap device    

0  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f -    /dev/rbd0

1  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f -    /dev/rbd1

2  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd2

3  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd3

4  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -    /dev/rbd4

Sure enough, those volumes were mapped, but none of the /dev/rbdX device nodes existed, so I could not unmap them.

The only solution I have found is to reboot the host.

— Update:


Looks like this is fixed in 3.4… My kernel is 3.10. SOL!

Ceph RadosGW: Nginx, Tengine, Apache, and now Civetweb.

Apache sucks, Nginx sucks, Tengine sucks… The new jam, you ask? CIVETWEB!

It’s built into radosgw and easily enabled. This ceph.conf fragment will get you started with civetweb + haproxy (the [client.radosgw.s3] section name is inferred from the log file path):

[client.radosgw.s3]
 host = s3
 rgw admin entry = ceph-admin-api
 rgw dns name =
 rgw enable usage log = true
 rgw enable ops log = false
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 log file = /var/log/radosgw/client.radosgw.s3.log
 rgw_frontends = civetweb port=7480

And your haproxy config!

frontend s3
bind *:80
bind *:443 ssl crt /etc/ssl/certs/
mode http

#ACL for admin api
acl network_allowed src
acl restricted_page path_beg /ceph-admin-api
block if restricted_page !network_allowed

#Backend RadosGW CivetWeb
default_backend radosgw

backend radosgw
mode http
balance roundrobin
option forwardfor
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server s3-dev localhost:7480 check

Linux: find a disk's bay position in a JBOD enclosure


# Display drive bay for disks connected to SAS expander backplane

for name in /sys/block/* ; do
  npath=$(readlink -f "$name")
  while [ "$npath" != "/" ] ; do
    npath=$(dirname "$npath")
    ep=$(basename "$npath")
    if [ -e "$npath/sas_device/$ep/bay_identifier" ] ; then
      bay=$(cat "$npath/sas_device/$ep/bay_identifier")
      encl=$(cat "$npath/sas_device/$ep/enclosure_identifier")
      echo "$name has BayID: $bay (enclosure: $encl)"
      break
    fi
  done
done
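The walk-up-the-tree logic can be exercised without any SAS hardware by pointing a parameterized version of it at a mock sysfs-style directory tree. The walk_bay function name and the mock paths below are mine, purely for illustration:

```shell
#!/bin/sh
# walk_bay DEVPATH: walk up DEVPATH's ancestors looking for a sas_device
# node that exposes bay_identifier, and print the bay if found.
walk_bay() {
    npath=$(readlink -f "$1")
    while [ "$npath" != "/" ]; do
        npath=$(dirname "$npath")
        ep=$(basename "$npath")
        if [ -e "$npath/sas_device/$ep/bay_identifier" ]; then
            echo "$1 has BayID: $(cat "$npath/sas_device/$ep/bay_identifier")"
            return 0
        fi
    done
    return 1
}

# Build a throwaway mock tree shaped like a SAS end_device and try it.
root=$(mktemp -d)
dev="$root/end_device-0:0:4"
mkdir -p "$dev/sas_device/end_device-0:0:4" \
         "$dev/target0:0:4/0:0:4:0/block/sdb"
echo 7 > "$dev/sas_device/end_device-0:0:4/bay_identifier"
walk_bay "$dev/target0:0:4/0:0:4:0/block/sdb"
```

On real hardware you would call walk_bay on each /sys/block/* entry instead of the mock path.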





IPTables show it all! Show those interfaces!

Instead of iptables -L, run iptables -L -v: the -v flag adds the input/output interface columns, plus packet and byte counters.

[root@s3 ~]# iptables -L -v
Chain INPUT (policy DROP 6 packets, 408 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT icmp -- any any anywhere anywhere icmp echo-reply
74 8345 ACCEPT tcp -- ens224 any anywhere anywhere tcp dpt:ssh state NEW,ESTABLISHED
0 0 ACCEPT tcp -- ens192 any anywhere anywhere tcp dpt:https
0 0 ACCEPT tcp -- ens192 any anywhere anywhere tcp dpt:http
622 73939 ACCEPT tcp -- ens224 any anywhere
2 209 ACCEPT all -- ens224 any anywhere anywhere
0 0 ACCEPT all -- ens224 any anywhere anywhere

Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
668 88096 ACCEPT all -- any any anywhere anywhere
0 0 ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
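For reference, a ruleset that produces a listing like the one above could be written in iptables-save/iptables-restore format roughly as follows. The interfaces and ports are taken from the listing; the exact original rules are a guess:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
-A INPUT -i ens224 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i ens192 -p tcp --dport 443 -j ACCEPT
-A INPUT -i ens192 -p tcp --dport 80 -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
```

Load it with iptables-restore < rules.txt, and note that the -i interface matches are exactly what iptables -L hides and iptables -L -v reveals.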