Ceph scrubbing impact on client I/O and performance

Ceph’s default I/O priority and class for behind-the-scenes disk operations treat that work as required rather than best effort. Anyone who actually uses their storage for performance-sensitive services will quickly find that deep scrubbing grinds even the most powerful systems to a halt.

Below are the settings to run scrubbing at the lowest possible priority. This REQUIRES the CFQ scheduler on the spinning disks; without CFQ, I/O priorities cannot be enforced. Since only one service uses these disks, CFQ performance will be comparable to deadline and noop.
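To confirm which scheduler a spinning disk is using, and to switch it to CFQ, read and write the block device's queue setting in sysfs. A minimal sketch, assuming /dev/sdb is one of the OSD data disks (adjust the device name for your hosts; the active scheduler is shown in brackets in the first command's output, and the echo only persists until reboot unless you add a udev rule or the elevator= kernel parameter):

cat /sys/block/sdb/queue/scheduler
echo cfq | sudo tee /sys/block/sdb/queue/scheduler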

Inject the new settings into the existing OSDs:
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'
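To verify the values took effect, you can query a running OSD through its admin socket on the storage node. A minimal check, with osd.0 standing in for any of your OSD IDs:

ceph daemon osd.0 config get osd_disk_thread_ioprio_class
ceph daemon osd.0 config get osd_disk_thread_ioprio_priority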

Edit ceph.conf on your storage nodes so the priority is set automatically whenever the OSD daemons start; the injectargs commands above change running daemons, while these lines make the change persistent.
[osd]
# Reduce the impact of scrub.
osd_disk_thread_ioprio_class = idle
osd_disk_thread_ioprio_priority = 7

You can go a step further and apply Red Hat's tuned optimizations for your system's workload characteristics:
tuned-adm profile latency-performance
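To confirm the profile was applied, tuned-adm can report the active profile and list the available ones:

tuned-adm active
tuned-adm list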
This information was compiled from multiple sources.

Reference documentation.
http://dachary.org/?p=3268

Disable scrubbing in real time to determine its impact on your running cluster (see the example after this reference list).
http://dachary.org/?p=3157

A detailed analysis of the scrubbing I/O impact.
http://blog.simon.leinen.ch/2015/02/ceph-deep-scrubbing-impact.html

OSD Configuration Reference
http://ceph.com/docs/master/rados/configuration/osd-config-ref/

Red Hat system tuning.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Performance_Tuning_Guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned_adm.html
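
For the real-time test mentioned above, scrubbing can be paused and resumed cluster-wide with the noscrub and nodeep-scrub flags. A minimal sketch; watch client latency while the flags are set, then clear them:

ceph osd set noscrub
ceph osd set nodeep-scrub
# ...observe client I/O with scrubbing paused, then re-enable it:
ceph osd unset noscrub
ceph osd unset nodeep-scrub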

Quick and Dirty Ceph Deployment

Replace the disk names and SSD device name with your own. This will build a Ceph cluster with two-copy replication (osd pool default size = 2) in about 5 minutes.

ceph-deploy purge ceph0-mon0 ceph0-mon1 ceph0-mon2 ceph0-node0 ceph0-node1
ceph-deploy purgedata ceph0-mon0 ceph0-mon1 ceph0-mon2 ceph0-node0 ceph0-node1
ceph-deploy forgetkeys


ceph-deploy new ceph0-mon0 ceph0-mon1 ceph0-mon2

echo "osd pool default size = 2" >> ~/ceph.conf
echo "public network = 10.1.8.0/22" >> ~/ceph.conf
echo "cluster network = 10.1.12.0/22" >> ~/ceph.conf
echo "osd journal size = 12000" >> ~/ceph.conf

ceph-deploy install ceph0-mon0 ceph0-mon1 ceph0-mon2 ceph0-node0 ceph0-node1
ceph-deploy mon create-initial

ceph-deploy admin ceph0-mon0 ceph0-mon1 ceph0-mon2 ceph0-node0 ceph0-node1

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
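
At this point the monitors should have formed a quorum. A quick sanity check from any node that has the admin keyring (it will still report zero OSDs):

ceph -s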

ceph-deploy disk zap ceph0-node0:/dev/oczpcie_4_0_ssd
ceph-deploy disk zap ceph0-node0:/dev/sdb
ceph-deploy disk zap ceph0-node0:/dev/sdc
ceph-deploy disk zap ceph0-node0:/dev/sdd
ceph-deploy disk zap ceph0-node0:/dev/sde
ceph-deploy disk zap ceph0-node0:/dev/sdf
ceph-deploy disk zap ceph0-node0:/dev/sdg
ceph-deploy disk zap ceph0-node0:/dev/sdh
ceph-deploy disk zap ceph0-node0:/dev/sdi
ceph-deploy disk zap ceph0-node0:/dev/sdj
ceph-deploy disk zap ceph0-node0:/dev/sdk
ceph-deploy disk zap ceph0-node0:/dev/sdl
ceph-deploy disk zap ceph0-node0:/dev/sdm

ceph-deploy disk zap ceph0-node1:/dev/oczpcie_4_0_ssd
ceph-deploy disk zap ceph0-node1:/dev/sdb
ceph-deploy disk zap ceph0-node1:/dev/sdc
ceph-deploy disk zap ceph0-node1:/dev/sdd
ceph-deploy disk zap ceph0-node1:/dev/sde
ceph-deploy disk zap ceph0-node1:/dev/sdf
ceph-deploy disk zap ceph0-node1:/dev/sdg
ceph-deploy disk zap ceph0-node1:/dev/sdh
ceph-deploy disk zap ceph0-node1:/dev/sdi
ceph-deploy disk zap ceph0-node1:/dev/sdj
ceph-deploy disk zap ceph0-node1:/dev/sdk
ceph-deploy disk zap ceph0-node1:/dev/sdl
ceph-deploy disk zap ceph0-node1:/dev/sdm

ceph-deploy osd prepare ceph0-node0:/dev/sdb:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdb:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdc:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdc:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdd:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdd:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sde:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sde:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdf:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdf:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdg:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdg:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdh:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdh:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdi:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdi:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdj:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdj:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdk:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdk:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdl:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdl:/dev/oczpcie_4_0_ssd

ceph-deploy osd prepare ceph0-node0:/dev/sdm:/dev/oczpcie_4_0_ssd
ceph-deploy osd prepare ceph0-node1:/dev/sdm:/dev/oczpcie_4_0_ssd
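
The per-disk zap and prepare commands above are repetitive; a minimal bash loop that issues the same calls, assuming the same hostnames, data disks sdb through sdm, and the shared SSD journal device:

for node in ceph0-node0 ceph0-node1; do
    ceph-deploy disk zap ${node}:/dev/oczpcie_4_0_ssd
    for disk in sd{b..m}; do
        ceph-deploy disk zap ${node}:/dev/${disk}
    done
done

for disk in sd{b..m}; do
    for node in ceph0-node0 ceph0-node1; do
        ceph-deploy osd prepare ${node}:/dev/${disk}:/dev/oczpcie_4_0_ssd
    done
done

Once the OSDs come up, ceph osd tree and ceph -s should show 24 OSDs up and in.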