Make your Elasticsearch Fly!

Just run this to reduce the write workload of your cluster… (this isn’t safe for critical data, but it’s fine for logging etc.)

#NOTE: Elasticsearch 6.x and newer require the Content-Type header
curl -XPUT 'http://127.0.0.1:9200/_all/_settings?preserve_existing=true' -H 'Content-Type: application/json' -d '{
"index.number_of_replicas" : "0",
"index.translog.durability" : "async",
"index.refresh_interval" : "60s"
}'
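
These are all dynamic settings, so you can flip them back once you need durability again. A sketch of the reverse (the values are the stock Elasticsearch defaults, to the best of my knowledge):

curl -XPUT 'http://127.0.0.1:9200/_all/_settings' -H 'Content-Type: application/json' -d '{
"index.number_of_replicas" : "1",
"index.translog.durability" : "request",
"index.refresh_interval" : "1s"
}'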

XIO Passwords

Default passwords for EMC XtremIO:
XtremIO Management Server (XMS)

  • Username: xmsadmin
    Password: 123456 (prior to v2.4)
    Password: Xtrem10 (v2.4+)

XtremIO Management Secure Upload

  • Username: xmsupload
    Password: xmsupload

XtremIO Management Command Line Interface (XMCLI)

  • Username: tech
    Password: 123456 (prior to v2.4)
    Password: X10Tech! (v2.4+)

XtremIO Management Command Line Interface (XMCLI)

  • Username: admin
    Password: 123456 (prior to v2.4)
    Password: Xtrem10 (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: tech
    Password: 123456 (prior to v2.4)
    Password: X10Tech! (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: admin
    Password: 123456 (prior to v2.4)
    Password: Xtrem10 (v2.4+)

XtremIO Easy Installation Wizard (on storage controllers / nodes)

  • Username: xinstall
    Password: xiofast1

XtremIO Easy Installation Wizard (on XMS)

  • Username: xinstall
    Password: xiofast1

Basic Input/Output System (BIOS) for storage controllers / nodes

  • Password: emcbios

Basic Input/Output System (BIOS) for XMS

  • Password: emcbios

Objective

  • You want to add 12 additional SSDs to an existing Starter X-Brick (a 10TB chassis with only 5TB installed) in your environment. Fewer than 12 is not supported, though it is technically possible.

Prerequisites

  • Valid support contract with Dell-EMC; you will need access to documentation that requires a valid Dell-EMC support login.
  • X-Brick is fully functional and connected to an XMS.
  • Have a copy of the default passwords for XtremIO. I cannot list them here due to the Dell-EMC partner agreement; the accounts you will be using are tech and xmsadmin. Access Dell-EMC support and search for Article number “332100”.
  • Have access to the XtremIO Management System (XMS).
  • The “EMC XtremIO Storage Array Software Installation and Upgrade Guide”, “Chapter 9, Expanding a 10TB Starter X-Brick (5TB)” from Dell-EMC support covers this in detail. That procedure covers both the UI and SSH methods; I had problems with the UI method and was forced to use the SSH procedure.

Step 1 – Install the additional SSD drives into the X-Brick Chassis

  • Open the rack that houses the X-Brick you want to add storage to.
  • Remove the 12 plastic SSD fillers from slots 13 to 24.
  • Install the 12 SSD drives into slots 13 to 24.

Step 2 – Login to the XMS UI

  • The XMS UI will be used to track the SSD drives being brought online via the Alerts & Events screen.
  • The SSH session in Step 3 will be used to issue the commands to bring each SSD online. It takes approximately 3 minutes per SSD.
  • Access the XMS UI by opening a browser and entering https://<XMS IP address> from your JumpBox/Laptop. Download the Java applet, launch it, accept any Java warnings and log in as “tech” (get the default password from Dell-EMC support). This is a configured XMS instance, so you should see the X-Brick cluster in the UI.
  • Select the Inventory pane of the UI, select the Table View and then select the SSD object. The new SSDs should have a “DPG State” of “Not in DPG” and “Lifecycle State” of “Uninitialized”. The existing SSDs will be “In DPG” and “Healthy” respectively.
  • Make a note of the X-Brick ID, the DPG ID and the DPG “Useful SSD Space” and “User Space”.
  • Keep the XMS UI open with the Alerts & Events window selected. This is how the status of each SSD addition will be tracked.

Step 3 – Initialize each SSD and bring it online

  • Open PuTTY and SSH to the XMS IP address; log in as “xmsadmin” and then as username “tech” (get the default passwords from Dell-EMC support).
  • Use the command “show-ssds” to get the SSD list of the X-Brick, including the WWN identifiers. The WWN identifier for each slot will be used in the following steps.
  • Starting from Slot 13, sequentially execute the following commands, using the X-Brick, DPG and WWN IDs recorded earlier (see the example session after this list).
  • Use the command add-ssd brick-id="<X-Brick ID>" ssd-uid="<SSD-WWN>" is-foreign-xtremapp-ssd to initialize the SSD in the X-Brick. My use-case had SSDs from another X-Brick, so I had to force the command with the is-foreign-xtremapp-ssd flag.
  • Use the command assign-ssd dpg-id="<DPG ID>" ssd-uid="<SSD-WWN>" to add the SSD to the Data Protection Group (DPG).
  • Check the XMS Alerts and Events UI to track the percentage of completion for this task.
  • As each event completes (it will turn green with a “Cleared” state), proceed to the next slot, until Slot 24 is reached and completed.
  • Select the Inventory pane of the XMS UI, select the Table View and then select the SSD object. All 25 SSDs should have a “DPG State” of “In DPG” and a “Lifecycle State” of “Healthy”.
  • Then select the Data Protection Groups object and verify the DPG “Useful SSD Space” and “User Space” have doubled.
  • The XMS Dashboard will also show a doubling of Physical Capacity.
  • Your XtremIO X-Brick solution is now ready to provide additional storage services: EMC XtremIO – Provisioning a LUN.
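
For reference, here is a sketch of the full XMCLI sequence for a single slot. The IDs are placeholders; substitute the X-Brick ID, DPG ID and per-slot WWNs you recorded in Step 2:

show-ssds
add-ssd brick-id="1" ssd-uid="<WWN of SSD in slot 13>" is-foreign-xtremapp-ssd
assign-ssd dpg-id="1" ssd-uid="<WWN of SSD in slot 13>"

Wait for the event to clear in the XMS UI (roughly 3 minutes per SSD), then repeat for slots 14 through 24.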

Ceph and IO Schedulers

Udev rules to pin the IO scheduler per drive type. Note that udev KERNEL matches are shell globs, not regex:

# Spinning rust?
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"

# LSI SSD
ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop", ATTR{queue/nr_requests}="975", ATTR{device/queue_depth}="975"
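
To apply them, drop the rules into a file under /etc/udev/rules.d/ and retrigger udev. The filename below is just an example; any name works:

# assuming the rules above are saved as /etc/udev/rules.d/60-io-schedulers.rules
udevadm control --reload-rules
udevadm trigger --action=change
cat /sys/block/sda/queue/scheduler   # the active scheduler is shown in brackets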

OpenStack Trove disk-image-builder

The documentation for this process absolutely sucks… and the fact that no one’s updated it (shame on me for even saying that about an open-source project that I can push a branch to…) is pitiful. So… here’s some useful info for completing an image!

Useful posts:
1: https://docs.openstack.org/trove/latest/admin/building_guest_images.html
2: https://ask.openstack.org/en/question/95078/how-do-i-build-a-trove-image/

yum install epel-release git -y
git clone https://git.openstack.org/openstack/diskimage-builder
yum install python2-pip -y
cd diskimage-builder
pip install -r requirements.txt
python setup.py install

Now we clone the trove git repo and add the extra elements as environment variables.

git clone https://github.com/openstack/trove.git

# Guest Image to be used
export DISTRO=fedora
export DISTRO_VERSION=fedora-minimal
# Guest database to be provisioned
export SERVICE_TYPE=mariadb
export HOST_USERNAME=root
export HOST_SCP_USERNAME=root
export GUEST_USERNAME=trove
export CONTROLLER_IP=controller
export TROVESTACK_SCRIPTS="/root/trove/integration/scripts"
export PATH_TROVE="/opt/trove"
export ESCAPED_PATH_TROVE=$(echo $PATH_TROVE | sed 's/\//\\\//g')
export GUEST_LOGDIR="/var/log/trove"
export ESCAPED_GUEST_LOGDIR=$(echo $GUEST_LOGDIR | sed 's/\//\\\//g')

#path to the ssh keys you want installed on the guest.
export SSH_DIR=~/trove-image/sshkeys/

export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
# DATASTORE_PKG_LOCATION defines the location from where the datastore packages
# can be accessed by the DIB elements. This is applicable only for datastores that
# do not have a public repository from where their packages can be accessed. This
# can either be a url to a private repository or a location on the local
# filesystem that contains the datastore packages.
export DATASTORE_PKG_LOCATION=~/trove-image
export ELEMENTS_PATH=$TROVESTACK_SCRIPTS/files/elements
export DIB_APT_CONF_DIR=/etc/apt/apt.conf.d
export DIB_CLOUD_INIT_ETC_HOSTS=true
#WTF Is this?
#local QEMU_IMG_OPTIONS="--qemu-img-options compat=1.1"

#build the disk image in our home dir.
disk-image-create -a amd64 -o ~/trove-${DISTRO_VERSION}-${SERVICE_TYPE}.qcow2 -x ${DISTRO_VERSION} ${DISTRO}-guest vm cloud-init-datasources ${DISTRO}-${SERVICE_TYPE}
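
Once the build finishes, the image still has to land in Glance before Trove can use it. A minimal sketch with the standard client (the image name is arbitrary):

openstack image create --container-format bare --disk-format qcow2 \
  --file ~/trove-${DISTRO_VERSION}-${SERVICE_TYPE}.qcow2 trove-${DISTRO_VERSION}-${SERVICE_TYPE}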

Your own CentOS 7 SSL CA

Create your CA keys and install them.

mkdir ~/newca
cd ~/newca
openssl genrsa -des3 -out myCA.key 4096
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
cp myCA.key /etc/pki/CA/private/cakey.pem
cp myCA.pem /etc/pki/CA/cacert.pem
touch /etc/pki/CA/index.txt
echo '1000' > /etc/pki/CA/serial

Create a certificate request and sign it.

openssl req -newkey rsa:2048 -nodes -keyout client.key -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" -out client.csr
openssl ca -in client.csr -days 1000 -out client.pem
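
Sanity-check the signed cert against your CA before handing it out:

openssl verify -CAfile /etc/pki/CA/cacert.pem client.pem
openssl x509 -in client.pem -noout -subject -dates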

Building Octavia Images with CentOS 7 and Haproxy

Do this in your Python virtualenv:

pip install diskimage-builder
git clone https://github.com/openstack/octavia.git
cd octavia/diskimage-create/
./diskimage-create.sh -b haproxy -a amd64 -o amphora-x64-haproxy -t qcow2 -s 3 -i centos
openstack image create --tag amphora --container-format bare --disk-format qcow2 --file amphora-x64-haproxy.qcow2 Amphora-CentOS7-x64-Haproxy

#Or update an existing image with the tag
glance image-tag-update e4af2c6c-f7fd-4b45-a512-145282236044 amphora
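
The tag is what Octavia uses to pick the amphora image at boot time. If I remember right, it’s controlled by amp_image_tag in octavia.conf, so make sure it matches:

[controller_worker]
amp_image_tag = amphora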

OpenStack Ansible Kolla on CentOS 7 with Python VirtualEnv

Useful Links

Operating Kolla – https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html

Advanced Config – https://docs.openstack.org/kolla-ansible/latest/admin/advanced-configuration.html

Docker image repo: you’ll probably want to deploy a local one, because the images are fairly large.

NOTE:  firewalld & NetworkManager should be removed.  Docker plays nice with SELinux and everything works reliably.  You CAN use firewalld, but you will have to open up ports on the outside manually, which is outside the scope of kolla.

Deployment Tools Installation:

Deploying OpenStack via Ansible is the new preferred method. This process is loose and changes every release, so here’s what I have so far to deploy the Rocky release successfully.

#Install deps
yum install epel-release -y
yum install ansible python-pip python-virtualenv python-devel libffi-devel gcc openssl-devel libselinux-python -y

#Install docker
curl -sSL https://get.docker.io | bash
mkdir -p /etc/systemd/system/docker.service.d
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

virtualenv --system-site-packages /opt/openstack/
source /opt/openstack/bin/activate
pip install -U pip
#Install kolla-ansible for our release
pip install --upgrade kolla-ansible==7.0.0
pip install decorators python-openstackclient
cp -r /opt/openstack/share/kolla-ansible/etc_examples/kolla /etc/
cp -r /opt/openstack/share/kolla-ansible/ansible/inventory/* ~
echo "ansible_python_interpreter: /opt/openstack/bin/python" >> /etc/kolla/globals.yml
kolla-genpwd

Custom configurations:

As of now kolla only supports config overrides for ini-based configs. An operator can change the location where custom config files are read from by editing /etc/kolla/globals.yml and adding the following line.

# The directory to merge custom config files the kolla's config files
node_custom_config: "/etc/kolla/config"

Kolla allows the operator to override configuration of services. Kolla will look for a file in /etc/kolla/config/<< service name >>/<< config file >>. This can be done per-project, per-service or per-service-on-specified-host. For example to override scheduler_max_attempts in nova scheduler, the operator needs to create /etc/kolla/config/nova/nova-scheduler.conf with content:
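
For example (the value is illustrative):

[DEFAULT]
scheduler_max_attempts = 100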

Ironic Kolla Configs:

Ironic needs an initramfs and kernel to boot the install image, so you need to build some images with the OpenStack image builder. Below are just the CentOS installer images… these are not what you need 🙂 (see the sketch after the wget commands).

mkdir /etc/kolla/config/ironic/ -p
wget http://mirror.beyondhosting.net/centos/7.5.1804/os/x86_64/isolinux/initrd.img -O /etc/kolla/config/ironic/ironic-agent.initramfs
wget http://mirror.beyondhosting.net/centos/7.5.1804/os/x86_64/isolinux/vmlinuz -O /etc/kolla/config/ironic/ironic-agent.kernel
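
For the real deploy ramdisk you’d build an ironic-python-agent image with diskimage-builder instead. Roughly like this; the element name is from the DIB of that era, so double-check it against your version:

disk-image-create ironic-agent centos7 -o /etc/kolla/config/ironic/ironic-agent
# produces ironic-agent.kernel and ironic-agent.initramfs in the target directory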

Openstack Client Configuration:

Grab your keystone admin password from /etc/kolla/passwords.yml

kolla-ansible -i vbstack post-deploy
cat /etc/kolla/passwords.yml | grep keystone_admin_password
export OS_USERNAME=admin
export OS_PASSWORD=ttSbL92SubKgOao4Yp39ExERlSrJxhY1jUz3WaCy
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.5.201:35357/v3
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
# The following lines can be omitted
export OS_TENANT_ID=eddff72576f44d9e9638a50eb95957e0
export OS_REGION_NAME=RegionOne
export OS_CACERT=/path/to/cacertFile
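
Quick way to confirm the credentials actually work:

openstack token issue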

Run ansible to configure your servers.  This assumes you already created your ansible host env layout.

Running your deployment

The next steps are to run an actual deployment and create all the containers etc.

Some critical pieces in /etc/kolla/globals.yml:

openstack_release: "rocky"
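
With globals.yml and /etc/kolla/passwords.yml in place, the run itself is the standard kolla-ansible sequence (inventory name assumed to match the one used below):

kolla-ansible -i inventory-config bootstrap-servers
kolla-ansible -i inventory-config prechecks
kolla-ansible -i inventory-config deploy
kolla-ansible -i inventory-config post-deploy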

Reconfigure

Redeploy changes for specific services. When you need to make one change to a service that will not require restarting other services, you can target it directly to reduce the Ansible runtime.

kolla-ansible -i inventory-config -t nova reconfigure

Tips and Tricks

Kolla ships with several utilities intended to facilitate ease of operation.

/usr/share/kolla-ansible/tools/cleanup-containers is used to remove deployed containers from the system. This can be useful when you want to do a new clean deployment. It will preserve the registry and the locally built images in the registry, but will remove all running Kolla containers from the local Docker daemon. It also removes the named volumes.

/usr/share/kolla-ansible/tools/cleanup-host is used to remove remnants of network changes triggered on the Docker host when the neutron-agents containers are launched. This can be useful when you want to do a new clean deployment, particularly one changing the network topology.

/usr/share/kolla-ansible/tools/cleanup-images --all is used to remove all Docker images built by Kolla from the local Docker cache.

kolla-ansible -i INVENTORY deploy is used to deploy and start all Kolla containers.

kolla-ansible -i INVENTORY destroy is used to clean up containers and volumes in the cluster.

kolla-ansible -i INVENTORY mariadb_recovery is used to recover a completely stopped mariadb cluster.

kolla-ansible -i INVENTORY prechecks is used to check if all requirements are met before deploy for each of the OpenStack services.

kolla-ansible -i INVENTORY post-deploy is used to do post-deploy tasks on the deploy node and generate the admin openrc file.

kolla-ansible -i INVENTORY pull is used to pull all images for containers.

kolla-ansible -i INVENTORY reconfigure is used to reconfigure OpenStack services.

kolla-ansible -i INVENTORY upgrade is used to upgrade an existing OpenStack environment.

kolla-ansible -i INVENTORY check is used to do post-deployment smoke tests.

Docker Management:

  1. List all containers (only IDs): docker ps -aq
  2. Stop all running containers: docker stop $(docker ps -aq)
  3. Remove all containers: docker rm $(docker ps -aq)
  4. Remove all images: docker rmi $(docker images -q)

SSL

ca_01.pem – this refers to your CA certificate PEM file. AKA the intermediate certificate?

Request Cert

openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr

Sign Cert

openssl ca -in client.csr -days 1000 -out client.pem -batch

Doing things yourself, you learn that way.

I am by no means a professional at automotive paint, body work, paint correction, polishing or window tint. But I don’t think anyone should be scared to learn these skills for fear of damaging something they own. Anything can be fixed; it may take some time and a little bit of money, but if you’re not constantly learning, what fun is life?

I bought my 1995 Toyota Supra over the winter of 2016/2017. My Supra was in fairly “good” shape considering it’s a 25 year old car, however that wasn’t good enough for me!

Thanks to the help of a few close friends and YouTube, I was able to learn the essential skills of doing automotive body work. This included hammering metal to fit properly, welding in patch metal for rusted panels, adding filler in low spots or rough areas, and then “block” sanding a panel to achieve a very flat surface that gives you that beautiful glassy reflection.

Moving on from sanding, you get into primer and surfacer. Ever heard of this? Never in my life would I have believed someone if they told me that spraying such a thin layer of material and then sanding it could change the appearance of the final product so much. After countless coats and blocking I ended up with what experts would call a PERFECT surface to apply paint.

Feeling confident in my prep work, I hauled my car off to be painted by a professional. I chose to utilize a professional painter on this project due to the value of my car; Supras in perfect condition are worth between $40,000 and $100,000… not a risk I was willing to take.

AND a few LONG weeks later!  WE HAVE A PAINTED CAR!


Now the long and tedious process of reassembly begins.  Fresh paint is extremely soft, in the first 30 days it goes through a process called “outgassing” during which the chemical reaction which causes the paint to harden finishes.  Any sort of wax or sealer on the paint during this time would cause the paint to remain soft and easy to damage.

After carefully reassembling the car and paying a lot of attention to delicate pieces and edges of panels, you end up with a final product something like this.


But I just couldn’t stop there!

PAINT CORRECTION!

The idea behind paint correction is to “sand” the painted surface to be entirely flat.  Ever been to a car show or seen a car that looks like a piece of glass, with a reflection so clear you can see yourself?  That’s paint correction.  All paint naturally has what’s called “orange peel”, that reflective texture in the clear coat that looks like the skin of an orange; while paint is drying there’s a differential in the dry time, which creates micro dimples on the surface because of tension.

I have more pictures, need to find them and put them here.


Engine / Electrical:

Suspension:

My car has been in Ohio its whole 128,000 mile life… needless to say it saw some winter road salt here… and well… it fared okay, but that’s simply not good enough!  See the theme yet?

Sand blasted and powder coated!

Back in the car fresh as hell!

Ceph Journal Device Ownership

Some devices struggle with persistent ownership due to the driver.  I have some OCZ SSDs used for journals that are affected by this.

So I’ve created a udev rule to assign them to the proper user during boot.

Add this to /etc/udev/rules.d/89-ceph-journal.rules

KERNEL=="oczpcie*", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"

Then retrigger udev to test:

udevadm trigger --action=add
ls -lh /dev/oczpcie_3_0_ssd*
brw-rw---- 1 ceph disk 251, 0 Jul 14 13:01 /dev/oczpcie_3_0_ssd
brw-rw---- 1 ceph disk 251, 1 Jul 14 13:01 /dev/oczpcie_3_0_ssd1
brw-rw---- 1 ceph disk 251, 4 Jul 14 13:01 /dev/oczpcie_3_0_ssd4
brw-rw---- 1 ceph disk 251, 5 Jul 14 13:01 /dev/oczpcie_3_0_ssd5
brw-rw---- 1 ceph disk 251, 6 Jul 14 13:01 /dev/oczpcie_3_0_ssd6
brw-rw---- 1 ceph disk 251, 7 Jul 14 13:01 /dev/oczpcie_3_0_ssd7
brw-rw---- 1 ceph disk 251, 8 Jul 14 13:01 /dev/oczpcie_3_0_ssd8
brw-rw---- 1 ceph disk 251, 9 Jul 14 13:01 /dev/oczpcie_3_0_ssd9