High-end consumer SSD benchmarks

Running consumer SSDs in a server has been deemed hazardous and silly… but that's really only the case when you're using a hardware RAID solution.

Provided you have UPS protection and a software storage layer that can talk to the disks directly, it's perfectly safe. We use Ceph!

These drives were tested with fio on a Dell M620 with an H310 controller in JBOD mode.
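The exact fio job files aren't reproduced here, so the parameters below (block sizes, queue depth, runtime, target device) are assumptions rather than the original settings; the random results do imply a 4 KB block size (≈240 MB/s at ≈61.5k IOPS). Something along these lines produces comparable numbers:

# WARNING: running fio against the raw device destroys any data on it.
# 4 KB random read, direct I/O, queue depth 32 (assumed parameters)
fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based

# 1 MB sequential read; swap --rw for write, randwrite, or randrw
# (with --rwmixread=50) to cover the other tests below
fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based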

Micron/Crucial M500

running IO "sequential read" test...
    result is 491.86MB per second

running IO "sequential write" test...
    result is 421.42MB per second

running IO "seq read/seq write" test...
    result is 228.74MB/184.88MB per second

running IO "random read" test...
    result is 240.35MB per second
    equals 61530.2 IOs per second

running IO "random write" test...
    result is 230.34MB per second
    equals 58968.2 IOs per second

running IO "rand read/rand write" test...
    result is 93.90MB/94.01MB per second
    equals 24038.8/24067.5 IOs per second

Micron/Crucial M550

running IO "sequential read" test...
    result is 523.79MB per second

running IO "sequential write" test...
    result is 476.59MB per second

running IO "seq read/seq write" test...
    result is 211.70MB/173.50MB per second

running IO "random read" test...
    result is 253.36MB per second
    equals 64861.0 IOs per second

running IO "random write" test...
    result is 233.42MB per second
    equals 59754.2 IOs per second

running IO "rand read/rand write" test...
    result is 102.42MB/102.28MB per second
    equals 26219.5/26184.0 IOs per second

Micron M600

running IO "sequential read" test...
    result is 507.47MB per second

running IO "sequential write" test...
    result is 477.18MB per second

running IO "seq read/seq write" test...
    result is 198.38MB/166.73MB per second

running IO "random read" test...
    result is 244.66MB per second
    equals 62633.2 IOs per second

running IO "random write" test...
    result is 238.35MB per second
    equals 61017.5 IOs per second

running IO "rand read/rand write" test...
    result is 103.10MB/102.95MB per second
    equals 26393.8/26354.0 IOs per second

SanDisk 960GB SSD Extreme Pro

running IO "sequential read" test...
    result is 394.66MB per second

running IO "sequential write" test...
    result is 451.28MB per second

running IO "seq read/seq write" test...
    result is 181.48MB/158.89MB per second

running IO "random read" test...
    result is 255.99MB per second
    equals 65533.5 IOs per second

running IO "random write" test...
    result is 223.86MB per second
    equals 57309.2 IOs per second

running IO "rand read/rand write" test...
    result is 71.47MB/71.46MB per second
    equals 18296.0/18294.2 IOs per second

Crucial MX300 1TB

running IO "sequential read" test...
    result is 504.80MB per second

running IO "sequential write" test...
    result is 501.97MB per second

running IO "seq read/seq write" test...
    result is 239.47MB/210.71MB per second

running IO "random read" test...
    result is 175.78MB per second
    equals 45000.0 IOs per second

running IO "random write" test...
    result is 291.85MB per second
    equals 74713.5 IOs per second

running IO "rand read/rand write" test...
    result is 137.10MB/137.09MB per second
    equals 35096.8/35095.0 IOs per second

Dell MD36XX / MD32XX passwords + lockdown clearing (MD3620, MD3620F, MD3850, MD3800f)

Connect to the serial port with screen; the settings are 115200 baud, 8N1 (an example invocation follows the steps below).

Press Ctrl+A, then Ctrl+B, then hit Escape to get the login prompt.

VXlogin: shellUsr

password: DF4m/2>

Run: lemClearLockdown
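For reference, a typical screen invocation for this console looks something like the one below; the serial device path is an assumption and will differ depending on your serial adapter.

# 115200 baud, 8 data bits, no parity, 1 stop bit (8N1)
screen /dev/ttyUSB0 115200,cs8,-parenb,-cstopb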

Enjoy your now-working MD3600, which I guess is a NetApp product:

==============================================
Title:     Disk Array Controller
Copyright 2008-2012 NetApp, Inc. All Rights Reserved.

Name:      RC
Version:   07.84.44.60
Date:      10/25/2012
Time:      14:41:57 CDT
Models:    2660
Manager:   devmgr.v1084api04.Manager
==============================================

Ceph RBD Watchers shows nothing but I can’t delete it!

So this issue is pretty interesting.

In my situation I had deleted the /dev/rbdX device node, so I couldn't unmap the volume. But because it was still mapped, the RBD header still had watchers registered against it.

I was able to determine this with:

[root@ceph0-mon0 ~]# rbd info volume-480ee746-d9d1-4625-833c-8573e2cb7a39 -p cinder-capacity-vol.prd.cin1
rbd image 'volume-480ee746-d9d1-4625-833c-8573e2cb7a39':
    size 145 GB in 37120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.75476776dfc3c0
    format: 2
    features: layering, striping
    flags:
    stripe unit: 4096 kB
    stripe count: 1

So then I looked at the RBD header:

[root@ceph0-mon0 ~]# rados -p cinder-capacity-vol.prd.cin1 listwatchers rbd_header.75476776dfc3c0
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=7
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=8
watcher=10.1.8.82:0/3896844975 client.7590353 cookie=9

That led me to the host in question.

root@osc-1001> rbd showmapped
id pool                         image                                        snap device
0  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f  -    /dev/rbd0
1  cinder-capacity-vol.prd.cin1 volume-22567261-a438-4334-8a49-412193e1cd2f  -    /dev/rbd1
2  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd2
3  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd3
4  cinder-capacity-vol.prd.cin1 volume-480ee746-d9d1-4625-833c-8573e2cb7a39  -    /dev/rbd4

Sure enough, those were mapped. But none of the /dev/rbdX device nodes exist, so I cannot unmap them.
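For the record, a quick way to spot this state is to walk the showmapped output and test each device node; a minimal sketch, assuming the column layout shown above:

# Print any mapped RBD whose /dev node has gone missing
rbd showmapped | awk 'NR>1 {print $5}' | while read -r dev; do
    [ -e "$dev" ] || echo "mapped but missing device node: $dev"
done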

The only solution I have found is to reboot the host.

Update: http://tracker.ceph.com/issues/2654

Looks like this is fixed on 3.4… My kernel is 3.10.  SOL!

Ceph RadosGW: Nginx, Tengine, Apache, and now Civetweb

Apache Sucks, Nginx Sucks, Tengine Sucks… The new jam you ask? CIVETWEB!

It's built into radosgw and is easily enabled; the config below will get you started with civetweb + haproxy.

[client.radosgw.gateway]
 host = s3
 rgw admin entry = ceph-admin-api
 rgw dns name = s3.domain.com
 rgw enable usage log = true
 rgw enable ops log = false
 keyring = /etc/ceph/ceph.client.radosgw.keyring
 log file = /var/log/radosgw/client.radosgw.s3.log
 rgw_frontends = civetweb port=7480
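After adding this section, restart radosgw so civetweb starts listening on port 7480. The exact service/unit name varies by distro and Ceph release, so treat these as examples rather than the canonical commands:

# systemd-based hosts (instance name follows the [client.radosgw.gateway] section)
systemctl restart ceph-radosgw@radosgw.gateway

# older sysvinit-style installs
service ceph-radosgw restart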

And your haproxy config!

frontend s3
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/s3.domain.com.pem
    mode http

    # ACL for the admin API
    acl network_allowed src 10.0.1.5
    acl restricted_page path_beg /ceph-admin-api
    block if restricted_page !network_allowed

    # Backend: RadosGW CivetWeb
    default_backend radosgw

backend radosgw
    mode http
    balance roundrobin
    option forwardfor
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    # "check" is needed on the server line for the httpchk health check to run
    server s3-dev localhost:7480 check
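To sanity-check the chain, hit civetweb directly on the gateway host and then go through haproxy; an anonymous request to the root should come back with an XML ListAllMyBucketsResult:

# Straight to civetweb on the radosgw host
curl -i http://localhost:7480/

# Through haproxy, using the rgw dns name from ceph.conf
curl -i http://s3.domain.com/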

Linux: find disk position in a JBOD enclosure

#!/bin/sh
# Display the drive bay for disks connected to a SAS expander backplane.
# Walks up each block device's sysfs path until it reaches the SAS end
# device, then reads the bay and enclosure identifiers from it.

for name in /sys/block/* ; do
    npath=$(readlink -f "$name")
    while [ "$npath" != "/" ] ; do
        npath=$(dirname "$npath")
        ep=$(basename "$npath")
        if [ -e "$npath/sas_device/$ep/bay_identifier" ] ; then
            bay=$(cat "$npath/sas_device/$ep/bay_identifier")
            encl=$(cat "$npath/sas_device/$ep/enclosure_identifier")
            echo "$name is in enclosure $encl, bay $bay"
            break
        fi
    done
done

multipath.conf + ScaleIO + XtremIO

# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## Use user friendly names, instead of using WWIDs as names.
defaults {
    user_friendly_names yes
    find_multipaths yes
}

# Hide ScaleIO devices.
blacklist {
    devnode "^scini[a-z]*"
}

# Multipath XtremIO
devices {
    device {
        vendor "XtremIO"
        product "XtremApp"
        path_grouping_policy multibus
        path_checker tur
        failback immediate
        path_selector "queue-length 0"
        rr_min_io_rq 1
        fast_io_fail_tmo 15
    }
}
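After dropping this into /etc/multipath.conf, reload the daemon and confirm the XtremIO LUNs come up with all paths in a single path group; the reload command varies by distro and multipath-tools version, so these are just typical invocations:

# Reload the configuration
multipathd -k'reconfigure'      # or: service multipathd reload

# Show the resulting topology; XtremIO LUNs should list every path active
# in one path group (multibus policy), and the blacklisted ScaleIO scini
# devices should not appear at all
multipath -ll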