Tired of having to go into System Preferences > Security & Privacy to open apps because they’re not signed by Apple?
Just run
sudo spctl --master-disable
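This turns Gatekeeper off entirely, so anything will launch without the unsigned-app prompt. If you ever want the default behavior back, the opposite flag should do it (worth double-checking on your macOS version):

sudo spctl --master-enable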
"Cannot change column 'ipsec_site_conn_id': used in a foreign key constraint 'cisco_csr_identifier_map_ibfk_1'") [SQL: u'ALTER TABLE cisco_csr_identifier_map MODIFY ipsec_site_conn_id VARCHAR(36) NULL']
The upgrade SQL can’t run because of a foreign key relationship.
Remove the relationship on the ipsec_site_conn_id column, reduce the varchar from 64 to 32, then re-create the relationship.
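Spelled out as SQL, using the table and constraint names from the error above — note the referenced parent table and column are my guess, so pull the real definition from SHOW CREATE TABLE cisco_csr_identifier_map before re-adding the constraint:

ALTER TABLE cisco_csr_identifier_map DROP FOREIGN KEY cisco_csr_identifier_map_ibfk_1;
ALTER TABLE cisco_csr_identifier_map MODIFY ipsec_site_conn_id VARCHAR(36) NULL;
ALTER TABLE cisco_csr_identifier_map ADD CONSTRAINT cisco_csr_identifier_map_ibfk_1 FOREIGN KEY (ipsec_site_conn_id) REFERENCES ipsec_site_connections (id);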
From the mailing list:
In fact, when you increase your PG count, the new PGs will have to peer first, and during this time a lot of PGs will be unreachable. The best way to increase the number of PGs in a cluster (you’ll need to adjust the number of PGPs too) is:
ceph osd set noscrub
ceph osd set nodeep-scrub
(Repeat the last operations until you reach the number of PGs and PGPs you want.)
At this point, your cluster is still functional.
ceph osd unset noscrub
ceph osd unset nodeep-scrub
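The actual increase commands seem to have been lost from the quoted excerpt; the usual way to bump a pool’s placement groups in small steps is the following (pool name and target count are placeholders — pick a target only slightly above the current value each round):

ceph osd pool set <pool> pg_num 1024
(wait for the new PGs to finish peering — watch ceph -s)
ceph osd pool set <pool> pgp_num 1024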
These are handy tips: http://cephnotes.ksperis.com/blog/2017/03/03/dealing-with-some-osd-timeouts
Boot into CentOS rescue mode from a live disk, then edit /etc/mdadm.conf to list the member devices:
DEVICE /dev/sda1
DEVICE /dev/sdb1
Scan for the arrays, append them to the config, assemble, and mount the system:
mdadm --examine --scan
mdadm --examine --scan >> /etc/mdadm.conf
mdadm --assemble --scan /dev/mdX
mount /dev/mdX2 /mnt/sysroot
mount /dev/mdX1 /mnt/sysroot/boot
mount --bind /sys /mnt/sysroot/sys
mount --bind /proc /mnt/sysroot/proc
mount --bind /dev /mnt/sysroot/dev
Chroot in and rebuild the grub config:
chroot /mnt/sysroot/
grub2-mkconfig -o /boot/grub2/grub.cfg
exit
Then tear it all down and reboot:
umount /mnt/sysroot/sys
umount /mnt/sysroot/proc
umount /mnt/sysroot/dev
umount /mnt/sysroot/boot
umount /mnt/sysroot/
sync
reboot
So I’m not one to spend money on dumb gadgetry, but I recently bought these Anker USB charging cables and they have really surprised me.
Yeah just restart the server… that’s seriously the only fix.
Boot into the Brocade live firmware OS.
Run
#bcu adapter -list
Ensure your adapter displays.
Then connect over the serial console with screen; baud is 115200 8N1.
Press Ctrl+A, then Ctrl+B, and hit Escape.
VXlogin: shellUsr
password: DF4m/2>
Run: lemClearLockdown
Enjoy your now-working MD3600, which, going by the banner below, is apparently a NetApp product:
==============================================
Title: Disk Array Controller
Copyright 2008-2012 NetApp, Inc. All Rights Reserved.
Name: RC
Version: 07.84.44.60
Date: 10/25/2012
Time: 14:41:57 CDT
Models: 2660
Manager: devmgr.v1084api04.Manager
==============================================
So this issue is pretty interesting.
In my situation, I had deleted the /dev/rbdX device, so I couldn’t unmap the volume, and because it was still mapped, the RBD header stayed locked.
I was able to determine this with:
[root@ceph0-mon0 ~]# rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1
rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':
size 145 GB in 37120 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.75476776dfc3c0
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1
So then I looked at the RBD header:
[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=7
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=8
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=9
So this led me to the host in question.
root@osc-1001> rbd showmapped
id pool image snap device
0 cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f - /dev/rbd0
1 cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f - /dev/rbd1
2 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd2
3 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd3
4 cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39 - /dev/rbd4
Sure enough, those were mapped, but none of the /dev/rbdX devices existed, so I could not unmap them.
The only solution I have found is to reboot the host.
— Update: http://tracker.ceph.com/issues/2654
Looks like this is fixed in 3.4… My kernel is 3.10. SOL!
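For what it’s worth, on kernels that do carry a working force-unmap path, a hung mapping can reportedly be removed without a reboot (this did not help on mine, so treat it as something to try, not a guarantee):

rbd unmap -o force /dev/rbd2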
If you’re using a config drive, it is by default stored in /var/lib/nova/instances/VMUUID.
When you move to Liberty, everything is stored in the Ceph vms pool to maintain highly available access.
If you have machines that do not use Ceph, you will need to import their config drives:
rbd import -p vms --id cinder /var/lib/nova/instances/VMUUID/disk.config VMUUID_disk.config
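To cover every instance on a compute node in one go, a loop like this might help — a sketch, assuming the same vms pool, cinder ID, and default instances path as above:

# import each instance's local config drive into the vms pool
for dir in /var/lib/nova/instances/*/; do
    uuid=$(basename "$dir")
    if [ -f "$dir/disk.config" ]; then
        rbd import -p vms --id cinder "$dir/disk.config" "${uuid}_disk.config"
    fi
done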