16 x 256GB Samsung 830 SSD RAID 6 with LSI 9266-8i Controller in a Dell R720 (16 Bay)

As a systems administrator, it seems like I’m constantly battling IO contention and latency in our SAN and local storage environments. As the months roll by, these new SSD drives keep getting cheaper and cheaper while offering better write endurance and longer life spans for write-intensive environments, so I’m finally taking the plunge and beginning to convert our most IO-intensive systems over to solid state.

In the process of exploring solid state disks, the Samsung 256GB 830 series really stood out from the crowd. The 830 offers fantastic read and write latency and throughput, and it is one of the only SSD series on the market where both the flash and the storage controller come from the same manufacturer.

The main reason for choosing the Samsung was this endurance benchmark at Extreme Systems.

Update: 8/24/12

We ended up going back to the Dell H710P after having a few issues with the UEFI BIOS not playing well with the controller at POST.  Not to mention that LSI WebBIOS is a horrible pile of useless shit: it’s 2012, why the hell do we still have this prehistoric UI on a RAID controller?  Whoever at LSI approved shipping that on these cards should be forced to stand in a fire.

The H710P has Dell’s lovely customized controller BIOS, which is keyboard driven, EASY to use, and FAST to configure with.  Performance of the H710P is actually a little better than the 9266-8i, even though the underlying hardware is identical.

Another major issue with the 9266: when you removed a drive (failure simulation) and replaced it, the controller would mark the new drive as bad instead of treating it as a fresh drive to rebuild onto.  Without the CLI or MegaRAID Storage Manager this is a rather annoying problem to deal with, as you would need to reboot the system to fix it in WebBIOS. POS.
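
With the CLI on hand, though, that state can be cleared without a reboot. Roughly, the MegaCLI sequence looks like this (a sketch: the enclosure:slot ID 252:3 and adapter 0 are placeholders for whatever your chassis actually reports):

# Re-inserted drive gets flagged Unconfigured(bad): mark it good again
MegaCli -PDMakeGood -PhysDrv [252:3] -a0

# Clear any foreign config left over from the drive's previous life
MegaCli -CfgForeign -Clear -a0

# Start the rebuild if it doesn't kick off on its own
MegaCli -PDRbld -Start -PhysDrv [252:3] -a0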

The H710P, of course, works with Dell’s unified management system and can be accessed a number of ways without the operating system even knowing about it.

The configuration:

  • 16x Samsung 830 256GB MLC SSD
  • RAID 6 with read and write caching (BBU backed), 64KB stripe size (see the controller-side sketch after this list)
  • Dell R720 16-bay SAS6 expander backplane: 2 ports on the 8i controller serving 16 devices
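
For reference, standing up a comparable array from MegaCLI would look something like the sketch below. This is an illustration rather than the exact commands we ran; the enclosure ID 252 and adapter 0 are placeholders for whatever your backplane reports.

# Find the enclosure:slot IDs for the 16 drives
MegaCli -PDList -aALL | grep -E 'Enclosure Device ID|Slot Number'

# RAID 6 across all 16 drives: write-back cache, read-ahead, 64KB stripe
MegaCli -CfgLdAdd -r6 [252:0,252:1,252:2,252:3,252:4,252:5,252:6,252:7,252:8,252:9,252:10,252:11,252:12,252:13,252:14,252:15] WB RA Direct -strpsz64 -a0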

The Benchmarks!

Here are some preliminary benchmarks of the actual performance from inside a VMware virtual machine.
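
The numbers below are in iozone’s multi-process throughput format, reported in ops/sec. The exact invocation wasn’t recorded, but a run along these lines produces output of this shape; the file size, record size, and mount point here are assumptions:

# 32 parallel workers, 64KB records, sequential write/read plus random read/write, results in ops/sec
iozone -t 32 -s 1g -r 64k -O -i 0 -i 1 -i 2 -F /mnt/ssd/f{1..32}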

LSI 9266-8i

Children see throughput for 32 initial writers  =  214905.26 ops/sec
Parent sees throughput for 32 initial writers   =  198172.68 ops/sec
Min throughput per process                      =    6392.06 ops/sec
Max throughput per process                      =    7173.76 ops/sec
Avg throughput per process                      =    6715.79 ops/sec
Min xfer                                        =  925970.00 ops

Children see throughput for 32 readers          =  734057.97 ops/sec
Parent sees throughput for 32 readers           =  734011.56 ops/sec
Min throughput per process                      =   22833.85 ops/sec
Max throughput per process                      =   23062.16 ops/sec
Avg throughput per process                      =   22939.31 ops/sec
Min xfer                                        = 1038205.00 ops

Children see throughput for 32 random readers   =   55662.96 ops/sec
Parent sees throughput for 32 random readers    =   55662.71 ops/sec
Min throughput per process                      =    1730.88 ops/sec
Max throughput per process                      =    1751.76 ops/sec
Avg throughput per process                      =    1739.47 ops/sec
Min xfer                                        = 1036073.00 ops

Children see throughput for 32 random writers   =   19827.16 ops/sec
Parent sees throughput for 32 random writers    =   19090.45 ops/sec
Min throughput per process                      =     584.53 ops/sec
Max throughput per process                      =     663.61 ops/sec
Avg throughput per process                      =     619.60 ops/sec
Min xfer                                        =  967988.00 ops

Dell H710P

Children see throughput for 32 initial writers  =  489124.60 ops/sec
Parent sees throughput for 32 initial writers   =  435746.51 ops/sec
Min throughput per process                      =   14005.25 ops/sec
Max throughput per process                      =   17028.75 ops/sec
Avg throughput per process                      =   15285.14 ops/sec
Min xfer                                        =  860278.00 ops

Children see throughput for 32 readers          =  678563.56 ops/sec
Parent sees throughput for 32 readers           =  678524.72 ops/sec
Min throughput per process                      =   21111.18 ops/sec
Max throughput per process                      =   21253.53 ops/sec
Avg throughput per process                      =   21205.11 ops/sec
Min xfer                                        = 1041599.00 ops

Children see throughput for 32 random readers   =   59482.27 ops/sec
Parent sees throughput for 32 random readers    =   59482.00 ops/sec
Min throughput per process                      =    1851.91 ops/sec
Max throughput per process                      =    1869.25 ops/sec
Avg throughput per process                      =    1858.82 ops/sec
Min xfer                                        = 1038852.00 ops

Children see throughput for 32 random writers   =   20437.99 ops/sec
Parent sees throughput for 32 random writers    =   19228.06 ops/sec
Min throughput per process                      =     610.33 ops/sec
Max throughput per process                      =     695.63 ops/sec
Avg throughput per process                      =     638.69 ops/sec
Min xfer                                        =  945641.00 ops

Update 7/20/13!

So we’ve been running this configuration in production for almost a year now.   Performance remains fantastic and we’ve had zero disk failures or faults.

We’ve begun testing the 840 PRO series of disks, and so far the results have not been as favorable: we’ve had issues with the 512GB drives being kicked from the array or faulting for no apparent reason.

I can confirm that the 840 PRO series is NOT compatible with the 24-bay chassis: the backplane power is designed for 12V utilization and the Samsung drives are 5V.  You will get random system lockups with a message about not enough system power being available.  If you need to populate a 24-bay chassis, we recommend looking at the Intel eMLC drives, which utilize 12V power.

20 Responses

  1. Alex August 9, 2012 / 7:06 am

    We are thinking about the same configuration for using SSDs with vSphere 5. This endurance test for the Samsung 830 looks very promising, and the LSI 92xx controllers are working fine with VMware. Did you have any problems with the LSI controller together with the 830? Is the garbage collection working well with the 830, since we cannot TRIM these SSDs behind a RAID controller? Is there a way to read out the SMART values with the LSI controller?

  2. Tyler Bishop August 25, 2012 / 2:16 pm

    @Alex

    The Samsung SSDs have their own internal wear leveling and garbage collection process that is NOT reliant on the operating system or controller.

    We had no issues getting the 9266 or H710P to present the array under VMware.

    You can view individual drive information using MegaCLI. There is a VMware-compiled version of the application available.
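
    For example, something along these lines pulls per-drive state and error counters (a sketch; the binary path and adapter number vary by install, and enclosure 252 / slot 0 are placeholders):

        # All physical drives, with media error and predictive failure counters
        MegaCli -PDList -aALL

        # Detail for a single drive
        MegaCli -PDInfo -PhysDrv [252:0] -a0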

  3. stephen September 28, 2012 / 2:40 pm

    Are you still happy with the Samsung 830 and controller? Any updates or issues you ran into? Thanks for the info.

  4. Gabi October 10, 2012 / 10:21 am

    Hi,

    Excellent, looking to do something similar.

    I always tend to use the disks provided by Dell, but the prices are crazy in the UK at the moment and I have a tight budget on this project.

    Just wondering how you are getting on with the H710P and how the compatibility is working out?

    Secondly, did the R720 pick up the disks OK with no problems? I have been assured by Dell tech support that you can only use their own disks on the 12th-generation servers :S

    Any feedback really appreciated.

    Best wishes,

    G.

  5. Anthony November 11, 2012 / 4:00 pm

    Did you manage to get around the drives being marked as ‘non-Dell’ and triggering an alert/amber light on the front LCD panel? I have tried a couple of other SSDs on the H700/R710 (Crucial M4, OCZ Agility 3), and both caused this warning to be raised.

  6. Eric November 12, 2012 / 7:33 pm

    Hey Tyler, is this still working out well for you? We were just about to go down the exact same path (R720 / 16 bay / H710P / Samsung 830 256GB SSD), except with Windows Server 2012.
    We wanted to try the Dell Enterprise SSDs but they are just too expensive.

    I was thinking of maybe trying groups of mirrored SSDs to experiment and isolate VMs.

    Anything you’d recommend, or anything I should know to do differently as opposed to the standard SAS setup?

  7. Alex December 13, 2012 / 5:52 pm

    @Tyler

    Would it be hard for you to post some dd command performance over RAID 0 with 8 or 16 Samsung SSDs?

    We are searching for a solution that can handle 1GB/s+ writes and the same read speeds.

    I managed to get 900MB/s reads/writes on an H700 controller in an R510 with 4 x Samsung 830 (512GB) SSDs, but if you add more drives the speed stays the same; maybe the H710 will break this limit.

    Thanks anyway for the nice and useful post!

  8. Jeff Kelling June 23, 2013 / 11:47 pm

    Tyler,

    I’m getting ready to do what you did: use several Samsung 830 drives on an H710P in a Dell server. It’s been a while since you posted. Are you still happy with the reliability of these drives in RAID, and is garbage collection working okay? Is performance still holding up?

    The 830s are hard to get, but there are still some sources. I’m probably going to get these unless you report back with issues you’ve had since your last post.

    I also have both the LSI 9271 and the Dell H710P; would you still recommend using the H710P instead of the LSI for the RAID?

    Thanks,

    Jeff

  9. Tyler Bishop July 20, 2013 / 12:06 pm

    Hi Jeff, yes, the performance is still fantastic and we’ve had zero problems with the 830 series.

    We began testing the 840 PRO series and have had some hiccups where the disks randomly get spit out of the array without any actual drive errors, specifically the 840 PRO 512GB.

    I would say at this point the 830 series setup is bulletproof. We’re currently evaluating some Tintri and Nimble Storage SANs and hopefully looking to move to those versus local storage.

  10. Tyler Bishop July 20, 2013 / 12:12 pm

    Anthony :

    Did you manage to get around the drives being marked as ‘non-Dell’ and triggering an alert/amber light on the front LCD panel? I have tried a couple of other SSDs on the H700/R710 (Crucial M4, OCZ Agility 3), and both caused this warning to be raised.

    Not had any issue like this, we have 8 R720s with this configuration and no alerts.

    Alex :

    @Tyler

    Would it be hard for you to post some dd command performance over RAID 0 with 8 or 16 Samsung SSDs?

    We are searching for a solution that can handle 1GB/s+ writes and the same read speeds.

    I managed to get 900MB/s reads/writes on an H700 controller in an R510 with 4 x Samsung 830 (512GB) SSDs, but if you add more drives the speed stays the same; maybe the H710 will break this limit.

    Thanks anyway for the nice and useful post!

    That’s about the limit of a single SATA path. If you could find SAS disks, I’m sure you could break the 1GB/s barrier.
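
    If you want to reproduce the dd test, the usual approach is something like the sketch below (the mount point and sizes are assumptions; direct IO bypasses the page cache so you measure the array rather than RAM):

        # Sequential write: 10GB in 1MB blocks, direct IO
        dd if=/dev/zero of=/mnt/ssdarray/testfile bs=1M count=10240 oflag=direct

        # Sequential read of the same file, direct IO
        dd if=/mnt/ssdarray/testfile of=/dev/null bs=1M iflag=direct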

  11. Charles December 9, 2013 / 7:16 am

    Hi. We’ve been trying something very similar to what you have listed in your July 2013 update, and have found a similar problem. We are using a Dell T620 (Dual H710p controller, 2 x 16 slots) with 10 Plextor 512GB M5P SSDs and have seen the identical problem with drives being dropped from the RAID for no apparent reason. We also have a R720 with Samsung 830 drives which has no problems. I believe the Plextor M5P drives and the Samsung 840 Pro drives are nearly identical.

    We tried secure-erase, firmware update, and overprovisioning to try to resolve the drive faults, but they still occur. As you may have noticed, Samsung 830 drives have become very expensive and hard to find. Do you have any suggestions to improve the stability of these unstable SSD RAIDs on Dell 12g/H710p hardware? At this point, we’re almost ready to junk the Plextor M5P drives and replace them with Intel DC S3500 models.

    Also, if you know any other brands of 3rd party SSDs which are commonly available, affordable, and problem-free like the Samsung 830 drives, I’m sure we’d all like to know what they are.

    • Tyler Bishop January 1, 2014 / 11:06 am

      We’ve moved to Crucial M500 drives. The 480GB and 960GB models are flawless.

      • Ma8 August 7, 2015 / 4:00 pm

        Are the M500s still your best option? I was considering 850 EVO 2TB drives on a 13th-gen H730P, but people seem to experience a lot of instability with the 850 PRO + H730P controller. The idea of stuffing a 2U R730xd (dual controllers available) with 24 x 2TB is very appealing, if only I knew for sure it would fly. Recent feedback on the web is conflicting, and enterprise SAS SSDs are still severely out of budget.
        Thanks!

        • Tyler Bishop September 8, 2015 / 1:09 pm

          We’ve moved to all Crucial M500 series drives. We don’t use Samsung at all now.

  12. Charles April 16, 2014 / 4:15 am

    @Tyler Bishop
    Aren’t the M500 drives only rated for 72TB of random writes? Unless that rating is an extreme low-ball, the drives would seem to be useful only in a nearly read-only usage pattern.

    Do you know if the M500 drives are compatible with the R720xd (24-bay chassis)?

  13. Chris November 12, 2015 / 3:46 pm

    Tyler,

    I want to do almost the exact same thing, but for a FreeNAS build. FreeNAS prefers to have direct access to the drives in a JBOD configuration. I want to use a PCIe LSI card just like you did. My understanding is that the Dell R720 has 2 cables from the backplane to the integrated controller, each cable supporting 8 drives, while the connectors on the LSI 9266-8i support 4 drives each. How did you cable it? I know it’s an old post, but I’m hoping you see it!

      • Chris November 12, 2015 / 5:25 pm

        I am sorry, I worded my question poorly. I want to do it with the same 16-bay R720 (which I don’t have on hand to look at for reference), and my question is specific to that server. My understanding is that the backplane has 2 connectors that support 8 drives per connector. How did you plug those into the mini-SAS ports on the LSI card, since each of the SAS ports on the LSI supports 4 drives per port? How did you get from an 8-drive connector to a 4-drive port? Are there additional ports on the mobo? Or is there a breakout cable I am unaware of? Ideally I want to connect the 16 drives in the R720 to an LSI 9220-8i with a SAS expander, or two 9220s. I hope I am being more clear this time. Thanks so much for your time… I have been pulling my hair out on this!

    • Tyler Bishop November 14, 2015 / 11:04 pm

      Chris,

      You can use standard mini-SAS cables. The LSI 8i cards have 2 ports and the Dell backplane has 2 ports; the backplane’s expander fans each port out to 8 drives.

      Dell actually makes these cables for PCIe-mounted cards. Why don’t you just hit up a vendor like http://optiodata.com and tell them what you’re doing and what parts you need?
