I was a little frustrated with the lack of info about Storage Spaces Direct (S2D) on TechNet, but that seems to have been taken care of, as it was updated a few days ago (it took me some time to write this blog post). I found it very frustrating that after RTM there was next to no info on TechNet to base our hardware order on. I can imagine the documentation still had to be processed, but isn't that what the RTM-to-GA timeframe is for? And I would at least have appreciated knowing that more was coming to TechNet, and an expected timeframe. Then I would have known the info was on its way, and I wouldn't have had to resort to digging through the internet for outdated Technical Preview blogs (some of which have been deleted). The vendors had the same problem, and the only one who had the basics covered was an expensive MVP. This is no criticism of you, Carsten, you are well worth your money! 😉 But overall it has taken me three months and way too much time to get the basic details!

Also, I expected more from Dell. Lots of the people I talked to at Dell had the same problem: getting hold of the correct information. After going through several people at Dell, I got a qualified engineer. Although he didn't have much more info, he told me:

  • Use the reference design. I told him I had already seen the official reference design, but it has the same specs as in November 2015.
  • Dell will present S2D products/appliances. They are based on the R730xd, so they are very interesting. I can't wait until that time, but I think we will rent our hardware in the meantime just to be sure. It is also very interesting how they will solve the requirement that "all servers must be identical in hardware components, drivers, firmware and config" when you buy a new node a year after your initial nodes.

So, after that rant (sorry 😈) I went all over TechNet and wrote down the notes I found most notable. If you are planning an S2D deployment, I would suggest you dig through TechNet the same way I did, because your case will probably differ. But if you are an IT pro who likes to read about new technology, this will give you the highlights.

  • There are basic requirements now! They weren't there before, but they are (as always): use "Windows Server Catalog" hardware and successfully run the cluster validation tests. I should have no problem with our Dell R730xd hardware. It basically means you can use almost any hardware, but I would stick with the listed servers, and if you use Dell, wait until 2017Q1 when they have preconfigured S2D products/appliances.
  • There is an interesting note: "all servers must be identical in hardware components, drivers, firmware and config." So will Dell deliver the same HBAs, NVMe, SSDs and HDDs for the lifecycle (typically 5-7 years) of their servers?
  • Minimum of an Intel Nehalem or later compatible processor. That is 2010 tech; I would suggest using CPUs from 2016 as a minimum.
  • Memory requirements: 5 GB of memory per TB of cache drive capacity. This is interesting: even if you have 4x 4 TB SSDs (which are ~$5,000 a piece), you would only need 80 GB. As there is no general guideline on how much total memory a node needs, I wouldn't build a node with less than 128 GB anyway (Dell suggested 96 GB minimum in their reference design).
  • RDMA: nothing changed; they mention it's all good as long as it has the WS16 logo. Dell put a "Mellanox Connect X3 Dual Port 10Gb Direct Attach/SFP+ Server Ethernet Network Adapter" in their config.
  • 5+ DWPD (drive writes per day) on SSDs. There is no mention of DWPD on the Dell R730xd (configuration) page, so I asked Dell about this. I think it should be specified, as this is probably one of the most important specs of an SSD.
  • Disks can have a sector size of 512n, 512e or 4K. I will probably go for 4K.
  • Something I already learned, but which is stated explicitly now: don't use RAID HBA controllers! It is possible to set a RAID controller to HBA mode, but MS states explicitly not to use RAID controllers, so I would recommend the HBA330 that Dell offers.
  • Separate boot device. This is interesting, because selecting the HBA330 means there is no RAID controller available (not supported in S2D, see the point above) for the two disks in the flex bays. So how do I set up a RAID1 mirror for the OS? Do I need to use Windows Disk Management to create a software mirror? Not very appealing. I've asked Dell about this.
  • 1 PB limit per storage pool.
  • Recommendation to limit each server to 100 TB due to resync times if a server goes offline. Remember that a reboot for Windows Updates is downtime too and will also trigger a resync!
  • There was already an S2D calculator online, but now it's referenced from the official TechNet source, although it still says preview!
  • Some things about caching:
    • Caching is configured automatically, except when all disks are of the same type (all NVMe or all SSD); then you can set it manually.
    • All-flash: only writes are cached.
    • Hybrid (different storage media) means both writes and reads are cached.
    • I think you should always have write-optimized cache drives, as reads don't impact drive lifecycle. The only reason not to, I think, is if you can predict how many writes you will have and size for that. But this is based on the only metric Dell gives ("write-optimized, mixed use or read-optimized"), which I think boils down to DWPD. If I wouldn't mind swapping disks, couldn't I always just go for "read-optimized", since I have a 7-year support contract that covers my faulty disks? I will ask Dell.
    • Data is copied before moving to the cache, so you have the same resiliency the whole of S2D is designed for.
    • There is a recommendation to make the number of capacity drives a multiple of the number of cache drives. So 2 cache drives means 4, 8 or 12 capacity drives. It makes sense, but I hadn't thought of that. I will change my design from 10 disks to 12 thanks to this nugget of knowledge.
    • There is some mention of other caches (ReFS, CSV). Just use the defaults on them; the CSV cache can be enabled if it fits your situation.
    • Sizing: start with cache at 10% of capacity. They also show you how to monitor the cache to see if you need more, see below!
    • I think Carsten said to use 10-25% of total capacity for the mirror tier when using hybrid disks. So the mirror portion should be 10-25% of your total capacity; use the calculator for that!
    • Monitor the size of the cache with perfmon, using the counter set: Cluster Storage Hybrid Disks.
  • Install KB3157663 when deploying!
  • If the drives are not pooled after being added, they might contain old data: clear them first.
  • Moving from 2 nodes to 3, 4 or more: use PowerShell to unlock three-way mirror, dual parity, etc.
  • You can update drive firmware using PowerShell. This is very neat! Although the drives must support it (contact your vendor, so I did).
  • MS has seen it take 5-30 s per drive, and a drive is not available while updating, so it will trigger a resync. But it can also be done by the S2D Health Service, which redirects I/O during the update. It waits a minimum of 7 days between servers, very neat! Although it will take quite some time with a 16-node cluster: I count 232 days!
  • To re-balance data across servers you can use PowerShell: Optimize-StoragePool.
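The sizing rules in the list above can be combined into a quick back-of-the-envelope check. A minimal sketch in Python; the node layout below (4 cache NVMe, 12 capacity drives) is my own example, not a Dell or Microsoft recommendation:

```python
# Back-of-the-envelope S2D node sizing, based on the TechNet rules above.
# Example node: 4x 4 TB NVMe cache drives, 12x 8 TB capacity drives.
CACHE_DRIVES, CACHE_DRIVE_TB = 4, 4
CAP_DRIVES, CAP_DRIVE_TB = 12, 8

cache_tb = CACHE_DRIVES * CACHE_DRIVE_TB        # 16 TB of cache
capacity_tb = CAP_DRIVES * CAP_DRIVE_TB         # 96 TB raw capacity

# Rule: 5 GB of memory per TB of cache drive capacity.
mem_for_cache_gb = 5 * cache_tb                 # 80 GB just for the cache

# Rule: capacity drive count should be a multiple of the cache drive count,
# so each cache drive binds to an equal number of capacity drives.
assert CAP_DRIVES % CACHE_DRIVES == 0, "unbalanced cache/capacity binding"

# Rule of thumb: start with cache at ~10% of capacity, then monitor.
cache_ratio = cache_tb / capacity_tb            # ~0.17, above the 10% start

# Recommendation: stay at or under ~100 TB capacity per server (resync times).
assert capacity_tb <= 100, "too much capacity per server"

print(f"cache: {cache_tb} TB, capacity: {capacity_tb} TB, "
      f"memory for cache: {mem_for_cache_gb} GB, "
      f"cache ratio: {cache_ratio:.0%}")
```

With these example numbers the node needs 80 GB of memory for the cache alone, which supports the point above about not building a node with less than 128 GB.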

So this is what I found out. I will be contacting Dell and Carsten to ask what they think, and I will keep you posted.

Categories: S2D, Windows Server

4 Comments

  1. Terry Storey

    Hi Dennis,

    I hear your frustration, let's set up a Skype call to go through your questions ….

    regards

    Terry

    1. Dennis Pennings

      Hi Terry, I'm only seeing your reach-out now; normally I receive replies to my blog posts by email, but I missed these.

      Thanks for the offered help, but I'm writing a follow-up blog post and most of these things have been resolved by now. Could you send me your contact details in case I run into things?

  2. Just wondering: if we have an old server we would like to repurpose, a Dell R510, is it possible to add it to the S2D cluster? And can servers with different specs be added to S2D as new nodes? I don't think it is possible to buy the same server a couple of years down the road.

    1. Dennis Pennings

      That's what I thought! In the past we repurposed old hardware for new solutions, but I moved away from that as it gave me all sorts of other issues down the road. So I wouldn't repurpose old hardware for this kind of solution, but if you do, be sure the hardware is the same; Dell and MS mention this in their requirements.

      But I have the same question if I buy new hardware. We are about to order a 4-node cluster, and to benefit from the solution we expect to add nodes during the 5-7 year lifetime of the cluster (Dell offers 7 years of support on their hardware). So does this mean they will keep the R730xd model around for another 7 years? I can't imagine that, although they did keep their PC6248 switch around for a long time for the same reason. I've asked Dell again about their view on things.
