Ceph RadosGW: Nginx, Tengine, Apache and now Civetweb.

Apache Sucks, Nginx Sucks, Tengine Sucks… The new jam you ask? CIVETWEB!

It’s built into radosgw and is easily enabled; the config below will get you started with civetweb + haproxy.

[client.radosgw.gateway]
 host = s3
 rgw admin entry = ceph-admin-api
 rgw dns name = s3.domain.com
 rgw enable usage log = true
 rgw enable ops log = false
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 log file = /var/log/radosgw/client.radosgw.s3.log
 rgw_frontends = civetweb port=7480
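
After saving ceph.conf, restart the radosgw instance so it starts serving on the civetweb port. The service name below assumes a sysvinit-style install; adjust it for systemd deployments or a differently named instance.

service ceph-radosgw restart
# quick local check that civetweb is answering on 7480
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7480/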

And your haproxy config!

frontend s3
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/s3.domain.com.pem
    mode http

    # ACL for the admin API
    acl network_allowed src 10.0.1.5
    acl restricted_page path_beg /ceph-admin-api
    block if restricted_page !network_allowed

    # Backend: RadosGW civetweb
    default_backend radosgw

backend radosgw
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server s3-dev localhost:7480 check
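
Once HAProxy is reloaded, a couple of quick curl checks against the frontend; the expected behaviour is inferred from the ACL above (the admin API path should be refused from anything other than 10.0.1.5):

# should proxy through to civetweb (S3 XML response)
curl -i http://s3.domain.com/
# should be blocked from non-whitelisted source addresses
curl -i http://s3.domain.com/ceph-admin-api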

Linux: find disk position on a JBOD enclosure

#!/bin/sh
# Display the drive bay for disks connected to a SAS expander backplane.
# Walk up the sysfs path of each block device until we find the
# sas_device entry that exposes bay_identifier / enclosure_identifier.

for name in /sys/block/* ; do
    npath=$(readlink -f "$name")
    while [ "$npath" != "/" ] ; do
        npath=$(dirname "$npath")
        ep=$(basename "$npath")
        if [ -e "$npath/sas_device/$ep/bay_identifier" ] ; then
            bay=$(cat "$npath/sas_device/$ep/bay_identifier")
            encl=$(cat "$npath/sas_device/$ep/enclosure_identifier")
            echo "$name has BayID: $bay (enclosure $encl)"
            break
        fi
    done
done
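
Run it as root; the output looks something like this (device names, bay numbers, and the enclosure SAS address are illustrative):

/sys/block/sda has BayID: 0 (enclosure 500304801f0e3b7f)
/sys/block/sdb has BayID: 1 (enclosure 500304801f0e3b7f)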

IPTables show it all! Show those interfaces!

Instead of iptables -L, run iptables -L -v; the verbose output adds packet and byte counters plus the in/out interface for each rule.

[root@s3 ~]# iptables -L -v
Chain INPUT (policy DROP 6 packets, 408 bytes)
 pkts bytes target  prot opt in     out  source      destination
    0     0 ACCEPT  icmp --  any    any  anywhere    anywhere     icmp echo-reply
   74  8345 ACCEPT  tcp  --  ens224 any  anywhere    anywhere     tcp dpt:ssh state NEW,ESTABLISHED
    0     0 ACCEPT  tcp  --  ens192 any  anywhere    anywhere     tcp dpt:https
    0     0 ACCEPT  tcp  --  ens192 any  anywhere    anywhere     tcp dpt:http
  622 73939 ACCEPT  tcp  --  ens224 any  10.0.0.0/8  anywhere
    2   209 ACCEPT  all  --  ens224 any  anywhere    anywhere
    0     0 ACCEPT  all  --  ens224 any  anywhere    anywhere

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target  prot opt in     out  source      destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in     out  source      destination
  668 88096 ACCEPT  all  --  any    any  anywhere    anywhere
    0     0 ACCEPT  all  --  any    any  anywhere    anywhere     state RELATED,ESTABLISHED
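
To also get numeric addresses/ports and rule numbers (useful when inserting or deleting rules by position), add -n and --line-numbers:

iptables -L INPUT -v -n --line-numbers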

EMC ScaleIO on ESXi

Generate a GUID (https://www.guidgenerator.com/online-guid-generator.aspx), then install the SDC VIB and point it at your MDMs:
esxcli software vib install -d /tmp/sdc.zip
reboot
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=552b6419-a478-449b-a02c-16c87066bb8a IoctlMdmIPStr=10.0.32.4,10.0.32.5"
esxcli system module load -m scini
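
To confirm the scini module is loaded and picked up the parameters you set (exact output varies by ESXi version):

esxcli system module list | grep scini
esxcli system module parameters list -m scini | grep IoctlIniGuidStr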

Map a volume to the ESX host's SDC (--allow_multi_map lets several hosts share the same volume).
Via ID:
scli --map_volume_to_sdc --volume_name CIN1-SCALE-VOL1 --sdc_id c41389be00000005 --allow_multi_map
Via IP:
scli --map_volume_to_sdc --volume_name CIN1-SCALE-VOL1 --sdc_ip 10.0.32.41 --allow_multi_map

Other useful commands.
scli --query_all_volumes
scli --unmap_volume_from_sdc --volume_name Testvol1 --all_sdcs
scli --remove_volume --volume_name Testvol1 --i_am_sure
scli --remove_volume --volume_name vol1 --i_am_sure
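
To see what ended up mapped where, the query commands below should help; they follow the same scli syntax as above, but verify the exact option names against your ScaleIO release:

scli --query_all_sdc
scli --query_volume --volume_name CIN1-SCALE-VOL1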

CentOS 7 disable IPv6 SLAAC

To disable SLAAC on a CentOS 7 server, /etc/sysconfig/network must be edited. It must contain these two lines:

NETWORKING_IPV6=yes
IPV6_AUTOCONF=no
Each /etc/sysconfig/network-scripts/ifcfg-ethX file must also be edited. It must contain this line:
IPV6_AUTOCONF=no
Run '/sbin/service network restart' to restart the server's networking.
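
To verify SLAAC is really off after the restart (eth0 is a placeholder for your interface name):

sysctl net.ipv6.conf.eth0.autoconf
ip -6 addr show dev eth0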

Ceph on ZFS (CentOS)

Create the OSDs from your mon; you will use these IDs later:
ceph osd create
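
ceph osd create prints the id it allocated; running it once per OSD here would yield the ids used below (2 and 3 are just the ones this example uses):

ceph osd create
2
ceph osd create
3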

Update your ceph.conf on all the osd machines.
[osd]
journal_dio = false
filestore_zfs_snap = 1
journal_aio = false

Configure your storage.

zpool create disk1 /dev/sdX
zpool create disk2 /dev/sdX
zfs set mountpoint=/var/lib/ceph/osd/ceph-2 disk1
zfs set mountpoint=/var/lib/ceph/osd/ceph-3 disk2
zfs set xattr=sa disk2
zfs set xattr=sa disk1
ceph-osd -i 2 --mkfs --mkkey
ceph-osd -i 3 --mkfs --mkkey
ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-2/keyring

# This makes the init script start them; no per-OSD configuration is needed in ceph.conf
touch /var/lib/ceph/osd/ceph-2/sysvinit
touch /var/lib/ceph/osd/ceph-3/sysvinit
service ceph start
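
Once the daemons are up, confirm the new OSDs joined the cluster and are in/up:

ceph osd tree
ceph -s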

Further example/documentation: http://docs.ceph.com/docs/master/install/manual-deployment/

CEPH + ZFS + Dell R310

Disk 0 is a 240GB SSD (LVM vg-root).

Disks 1-3 are 4TB SATA spindles (sdb, sdc, sdd).

ceph osd create
ceph-osd -i {osd-num} --mkfs --mkkey
ceph auth add osd.{osd-num} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-{osd-num}/keyring

Update ceph.conf for the OSD location, then start the OSD daemon (a rough sketch of both steps is below).
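
A sketch of those two steps for osd.2, using the CLI crush commands instead of a ceph.conf location entry; the hostname and CRUSH weight are placeholders, and the sysvinit start matches the style used in the previous section:

# create/place the host bucket (hostname is a placeholder)
ceph osd crush add-bucket r310-node1 host
ceph osd crush move r310-node1 root=default
# register the OSD under that host with a placeholder weight
ceph osd crush add osd.2 1.0 host=r310-node1
# mark it for the sysvinit script and start it
touch /var/lib/ceph/osd/ceph-2/sysvinit
service ceph start osd.2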

lvcreate -L 60G -n cache-disk1 vg-root
lvcreate -L 60G -n cache-disk2 vg-root
lvcreate -L 60G -n cache-disk3 vg-root
zpool create -o ashift=12 disk1 /dev/sdb
zfs set xattr=sa disk1
zfs set atime=off disk1
zfs set compression=lz4 disk1
zpool add disk1 log /dev/vg-root/cache-disk1
zpool create -o ashift=12 disk2 /dev/sdc
zfs set xattr=sa disk2
zfs set atime=off disk2
zfs set compression=lz4 disk2
zpool add disk2 log /dev/vg-root/cache-disk2
zpool create -o ashift=12 disk3 /dev/sdd
zfs set xattr=sa disk3
zfs set atime=off disk3
zfs set compression=lz4 disk3
zpool add disk3 log /dev/vg-root/cache-disk3
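
A quick way to confirm each pool picked up its log device and that the property changes stuck:

zpool status
zfs get xattr,atime,compression disk1 disk2 disk3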