My experiment: use an HP SAS hard drive cage as a mobile or external cage with any alien PC or server,
and not buy
an HP StorageWorks MSA60 Modular Smart Array 418408-B21! (about $170 used)
None of this is for production of any kind, only for LAB work or offline testing of SAS drives (and RAID cards).

In a similar vein, I also cover whether a non-HP LSI RAID card can work in my HP ProLiant DL380 G7, as seen here. (G7 means Generation 7 servers.)
In fact, some HP RAID cards are HP-modified (firmware) LSI cards; two of note are the H220 and H221.

Can I use the HP P410 card in my alien PC? Sure you can (with minor limits). Even better is the P420 (it can do HBA on demand: "flip one bit, bingo").
INDEX:  Power adapter
Secret jack wiring
Wrong card in a real HP server
FAQ (short)
RAID topology choices
What ports do I have, and how to tell
How long do HDDs last?
Benchmarks (HDD only + WBC)
My project in detail
SAS heating facts
PSU hacks (for my project)
Crybabies (my array is dead, boo hoo)
Jump to DONE!

GOAL?

Target: a lab server, a test server, or a random PC, for testing 2.5" SFF SAS HDDs (SMART tests), SFF cables, and RAID cards of any kind, even HBA cards.
I can even burn in HDDs and RAID cards to see if they are worthy of future use, or benchmark devices to see if they perform as expected.

SUCCESS, and SOLVED, this IDEA WORKS !

I can do this even using a $20 old PC with a free x8 PCIe slot or larger (x16).
BOM (parts list / bill of materials):
  • A test PC of any kind, with a free x8 or wider PCIe slot. Hint: if benchmarking SAS drives (and for sure SSDs), be sure all PCIe lanes are active; some PCs limit the lanes fed to the RAID card.
  • A spare PSU; 300 watts can be plenty for SAS drives. One drive has about 1.5 A of dynamic 12 V current needs (so for 8 drives, multiply by 8; see the sizing sketch after this list). Startup current is not a problem if you set the controller to sequence the spin-up.
  • HP G7 drive cage with backplane, 2.5" SFF ($25 used).
  • HP power cable for the above ($5).
  • A $2 PSU DC extension cable, cut in half and soldered to the above per my instructions below (excess pins removed too, to help airflow).
  • Normal HP or standard SFF-8087 data cables, 1 or 2 depending on whether you run 4 or 8 drives (connecting both is best). HP 493228-005 is the HP cable.
  • Cooling: the HDDs must never run over 60 °C (140 °F) case temperature, ever, or life span (MTBF) suffers; see how I do that below with a tunnel. The temperature spec is 5 °C to 55 °C (60 °C is the absolute max case temperature ever); 55 °C is the MTBF rule.
  • Some metal or plastic panels, or even simple plywood (much easier to build with), to form a forced-air wind tunnel from the SFF cage to the PSU, with the PSU fan doing all the cooling necessary (fit a newer, faster fan if need be).
  • I used copper-clad PCB material to make my wind tunnel, seen here, cut with a scroll saw and a fine blade.
  • Any RAID card in the PC (even a SAS HBA card) with 2 SFF-8087 jacks (ports); if it has only 1 jack, you get support for only 4 drives.
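To size that spare PSU, here is a minimal Python sketch of the arithmetic; the per-drive figures (about 1.5 A on 12 V and 0.6 A on 5 V per SAS drive) are the same ballpark numbers quoted later on this page, so check your own drive's data sheet before trusting them.

    # psu_sizing.py - rough rail sizing for a SAS drive cage (ballpark figures only)
    AMPS_12V_PER_DRIVE = 1.5   # dynamic 12 V draw per SAS drive (assumed, check the data sheet)
    AMPS_5V_PER_DRIVE  = 0.6   # 5 V draw per SAS drive (assumed)

    def psu_needs(drive_count):
        amps_12v = drive_count * AMPS_12V_PER_DRIVE
        amps_5v  = drive_count * AMPS_5V_PER_DRIVE
        watts    = amps_12v * 12 + amps_5v * 5
        return amps_12v, amps_5v, watts

    for n in (4, 8, 16):
        a12, a5, w = psu_needs(n)
        print(f"{n:2d} drives: {a12:4.1f} A on 12 V, {a5:4.1f} A on 5 V, ~{w:.0f} W before margin")

For 8 drives that lands around 12 A on the 12 V rail (which is why a decent 300 W PSU is plenty), and for 16 drives you get the 24 A figure quoted in the SAS section further down.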


My story (short):
I wanted to be able to connect it to any of my (many) spare RAID cards,
and even do SFF HDD tests on them.
I did not like the price of, say, the Supermicro CSE-M14 or M28 cages ($100 to $200 smackers, or more).
So I bought a used HP cage, the same as what is inside my current G7 server, plus all the parts seen in the BOM above.
I do not need most of this to test one drive; it is for testing a full array of 3 to 8 drives without overheating them. (Even 1 SAS 15k drive lying on a bench will overheat, so...)
An alternative to all this is to just buy a 2nd server for $100 used and name it lab-test only. (G7 servers are the best deal and dirt cheap now.)

See  photos of all key parts below.


The card cage uses regular SFF-8087 cables (or HP's) end to end, so it is perfect. (SFF = Small Form Factor)
I will test it with my spare P410 card and my LSI 9260 card (SAS9260).
Next, I will report all issues discovered, beyond this FAQ list:

If you only need 1 to 4 drives, only one SFF cable needs to be connected: J1 runs slots 1 to 4 (left bay), and the J6 cable runs slots 5 to 8 (right bay). (Confirmed by me.)
Now the wiring of my custom power cable (all wires soldered, then covered with heat-shrink tubing).

POWER to the HP SFF BACKPLANE: (do not make an error here or boom, blown-up drives; test it first with a DMM meter, then with a very old but good 60 GB SAS drive, as seen for $5 on Fleabay)
Super simple wiring.
Left is the HP 10-pin Molex jack, and right is the industry-standard 24-pin ATX jack (see the wiki ATX 24-pin spec).

HP J13 pins 1/5   ---- +12 VDC ---- ATX pins 10/11 (yellow)
HP J13 pins 2/4/7 ---- GND     ---- ATX pins 3/5/7  (black)
HP J13 pins 6/10  ---- +5 V    ---- ATX pins 4/6    (red)
See photo #2 below for the pinout on the HP end (an HP secret).
You also need a ground jumper wire between the PC under test and the cage under test, AWG #18 or larger (because there are now 2 PSUs and they must share a common ground).
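Before powering anything, I like a printed checklist of DMM beeps; here is a tiny Python sketch that only encodes the pin map above and prints each continuity check (the pin numbers are exactly the ones in my table, nothing added).

    # wiring_check.py - print DMM continuity checks for the HP J13 to ATX adapter cable
    PIN_MAP = {
        "+12 VDC (yellow)": {"hp": (1, 5),    "atx": (10, 11)},
        "GND (black)":      {"hp": (2, 4, 7), "atx": (3, 5, 7)},
        "+5 V (red)":       {"hp": (6, 10),   "atx": (4, 6)},
    }

    for rail, pins in PIN_MAP.items():
        for hp_pin in pins["hp"]:
            for atx_pin in pins["atx"]:
                print(f"{rail}: HP J13 pin {hp_pin} <--> ATX pin {atx_pin} (should beep)")
    print("Also probe every +12 V pin against every GND and +5 V pin: these must NOT beep.")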



More Details.

The only hard part is cooling (and if you don't, THEY overheat fast; "they" means both the SAS drives and the RAID card itself).
The server sucks air in through the front drive caddies and then through 6 cute slots in the backplane PCB; mimicking this is not too hard to do with some sheet metal or plastic panels.

The best way to cure this problem is to form a box tunnel that puts the ATX PSU at the rear of the cage and lets the PSU fan suck air just like the real server did. (Why buck HP's design?)
A box tunnel need not be 100% airtight. (The 60 °C max drive case temperature is the rule; below 55 °C is the rule stated for MTBF life span.)
There are small holes in the cage already, but they cannot be used; they would hit the drives.
One hitch: the power jack is in the wrong spot (it points left), so I put a blister (cutout) in the new side panel so power can feed it easily.
Then a large grommet for my new data cable below (seen only in the photos at the end of this page):
Cable: SFF-8087 (36 pins),
or a real HP 493228-005 SFF data cable ($5 used on Fleabay), which supports the HP sidebands (for cage errors, etc.),
or
the standard SFF data cables are here: SFF-8087

Photo #1 (out of my G7 ProLiant DL380): this cage is now working with my SAS9261 RAID card. Running the AVAGO MSM manager, I created an ARRAY volume and it works perfectly, with a nice surprise: the LEDs work too.


Photo #2, below (confirmed by me to work):
The secret HP Molex jack on the above backplane, clearly marked J13. In this application pins 3/8/9 will be dead, and the LED logic will be dead too. (I2C sideband logic, 100% custom to HP only.)
I checked the Molex-to-PCB wiring very carefully. (I did many careful continuity checks.)
The I2C pins talk to the U1 and U2 chips, for at least the LED logic (3 LEDs per drive: green/amber, activity, errors, predictive failures, off-line drives, etc.). All these LED features will be dead.
Those 2 chips (now off line) may also be part of HP's SAS background monitors. (Hinted at in the manuals but never detailed in full; not one block diagram to understand this...)

I shorted pin 15 to pin 16 (this turns it on) and wired the ground, 5 V, and 12 VDC lines (3 each) to the above by the same names. Bingo, it works.
The resistors mentioned above are not needed at all.


Project pre-testing, to prove I can control any overheating...
Here is my PSU-to-cage tunnel chamber air seal. The method will always be a mix of parts and materials from my attic, plus wood and metal lying about.
Photos below show progress. The duct-tape test is only to prove the concept that it will not overheat: max measured 46 °C in a hot room (78 °F); in a cool room it will be far less.
What I did is make a plywood base for both the PSU and the cage as one unit (screwed down).
Then I added sides. (One can use cardboard first, as a crude test, to be sure the drives run below 60 °C; if that fails, put a faster fan inside PSU #2: more CFM, max RPM!)
Using the MSM program, see the drive cage status and temps; this one is a 3-drive RAID setup and runs perfectly, speed-testing at 140 MB/s.
Get MSM from Broadcom (finding the correct file there is tricky; a direct URL link to MSM is here). Best is to look for the most current MSM and grab it. The one here supports a vast number of LSI cards, even the 9211.

HP has the same kind of application, SSA; it is an even smarter program, and free too at HPE.com. Why run RAID blind?

Snapshots here of the download pick (photo views): first is my MSM live download, then my thermal testing, and last will be a far better casing (my tunnel).



First tests after getting the thing to work great digitally, using MSM only.

My costs are small, under $50 total (not counting the RAID card; after all, how can you run a server lacking such a card?). HBA cards or real RAID, up to 8 drives at once.
Using cardboard first to test my theory on airflow, with duct tape.

The thermal issues cannot be ignored. The 55 °C rule: if you ignore these rules, the drives will burn up. They get so hot your hand hurts touching them (in less than 1 hour, and for sure bundled in a cluster).
In my rat-nasty, hot garage.
The LSI/SAS card below overheats fast if you fail to use this fan. A real server has no problems here; only cheap home PCs lack proper cooling (but 3 case fans would work here too). I want it to work with the sides off, so...
My test PIG, that which I torture.

HDSentinel.exe running below (so far, 1 hour later, 46 °C max; in a proper server room at 20 °C the temp would be about 41 °C).
DO NOT OVERHEAT YOUR DRIVES nor the RAID CARD. The boot drive (Kingston) is an SSD; the 3 below it are the RAID. (Learn that overheating can and will cause damage to the electronics.)
The program is too expensive to buy (IMO); mine runs on only one PC, but it has the only working SMART test I know of that can see through the RAID card's I/O ports.
(They have a list of supported cards.)
I do not like the $20-30 per-PC license rules (but see this 5-PC license). (This graph mode below is peachy keen, no? As are the great stats there too: trends, etc.)
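If you would rather watch temperatures from a Linux command line instead of HDSentinel, here is a minimal Python sketch around smartctl. It assumes smartctl is installed, that the drives are visible to it (directly, or behind a MegaRAID-family card via its -d megaraid,N device type), and that the SAS drive prints a "Current Drive Temperature" line; the device names are only placeholders, so adjust them to your box.

    # temp_watch.py - crude SAS drive temperature poll using smartctl (Linux, run as root)
    # The device list below is an assumption for illustration; use your own /dev names,
    # or "-d megaraid,N" style arguments for drives behind an LSI/MegaRAID controller.
    import re
    import subprocess
    import time

    DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]   # hypothetical example devices
    MAX_C = 55                                       # the 55 C MTBF rule from above

    def drive_temp(dev):
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        m = re.search(r"Current Drive Temperature:\s+(\d+)\s*C", out)
        return int(m.group(1)) if m else None

    while True:
        for dev in DEVICES:
            t = drive_temp(dev)
            if t is None:
                print(f"{dev}: no temperature line found (check the -d device type)")
            elif t > MAX_C:
                print(f"{dev}: {t} C  *** OVER {MAX_C} C, fix the airflow ***")
            else:
                print(f"{dev}: {t} C  ok")
        time.sleep(60)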

This is the cardboard test only; the final build is at the end here.

End of cardboard testing; now for the end run below.



THE END RUN IS HERE; the end of the story and project is below.
The purpose here is to learn what works, and to test and diagnose:
how to make my spare HP RAID cage work in any ALIEN PC or other workstation of any kind, or even with any alien RAID cards (alien means not HP-system usage).
To test and burn in any SAS HDD, or run full diagnosis on a suspect (SAS) HDD.
If you use an HBA card you can also do full SMART tests, even run Linux disk tests and their SMART tests, or run Windows CrystalDiskInfo.exe.
Result: BINGO, it all works perfectly.
Now to finish the job.

HERE I AM adding my modified PSU next; then I will finish my tunnel using 0.062" FR4 (flame-retardant grade 4) copper-clad raw PCB material.
This material is easy to hot-glue, screw, drill, and even solder at the corners to form any box-like structure.
Not only is that good (not pretty), it is also a fairly good Faraday shield, as any HAM radio DIY maker knows (about 60 dB of RF shielding). Like mu-metal, or best here if really serious about shields, this is a prime source: see ripstop!
Copper and aluminum are very good for shields. EMP? I have no clue, but see the above ripstop link. (My usage is to not jam my shortwave radios with RFI.)
Grounding the shield helps lower noise intrusion; copper/aluminum are good at high frequencies and mu-metal at low frequencies.
See this great job done here, called Manhattan-style construction. (Back when I was young we called this making SPACE modules, or DEAD BUG style, with chips glued down legs-up (dead) in this way.)
My wind tunnel will be much like this. All you need is to tack-solder it, and bingo, a tunnel is made for my airflow needs.
In my case there are only 3 sides, not 5, and it will be bracket-screwed down to the plywood base.
SUCCESS !
Final build, final test: it passes all tests.

The fan ran too slow and the SAS drives overheated, so I cut the fan wires and ran the fan's red (hot) wire to the hard +12 VDC yellow-wire bus. I added an LED, and I removed all the unused cables in the PSU (SATA/DVD, etc.).
I removed the wires to maximize airflow. 10k and 15k RPM drives do run hot, and hotter stacked with no air; 150 °F (65 °C) is easily possible. (Do not ever let them get there.)


Proof: I let it run for hours in a roasting-hot garage, and it is a GO! I now have a portable drive cage for cheap.

 

Alternate ideas on PSUs (light duty here; all I care about is lots of airflow to keep my bundles of 4 SAS drives running cool).
On my $10 300 W HP PSU, I removed all the extra power wires and drilled extra mounting holes. (Do not bend the PSU; the below is only to make the photos clearer.)
This allows extra airflow (fewer wires) and a faster fan too, if need be. (Note the lack of heat sinks on a low-powered PSU; it relies on free-air rules and airflow.)

This one is not mine; I used an HP 300 W PSU, and the below is for illustrative purposes only. DO NOT EVER GO INSIDE HERE WITH THE POWER CABLE CONNECTED TO THE WALL POWER JACK.

I had to hot-wire the fan to get max RPM; do so I must, and did (or the SAS drives will overheat). In some cases a better fan is needed, YMMV.
The above comments do not apply to huge-wattage or brand-new PSUs at all, just older and very small ones. (In fact, a new 300 W PSU has even fewer parts and is a best buy for sure.)
Warning: some junk cloned, no-name China PSUs use 20-year-old designs with huge parts counts inside; they are pure junk in our modern, better world (and they run hotter too).
End of project, a success.

Below the line are sections on benchmarks and related topics, like how an LSI 9261 card can work in a real HP server using the same cage.
Below is supporting evidence for mixing HP parts with non-HP parts (devices, etc.).

BENCHMARKS:
(First off, do your own benchmarks.)  The below shows only 3 examples, but you can see it is way faster than any single SAS drive made. Way faster.

This RAID chart is not accurate above on the topic of speed when you use a 1 GB battery-backed cache on the card. (Get the CacheVault, then show benchmarks.)
Many posts seen online tell you RAID is slow to write; they are full of beans. (Using cheap toy-grade RAID cards, sure.) But with WBC, if the bursts last less than 1 GB, my speeds are very fast.
Hint 1: do your own tests; do not listen to others who have never seen a WBC (write-back cache) in their life, nor have they ever tested with their actual production I/O rates.
Not only that, there is the CacheCade device that puts an SSD cache in the middle of all this and keeps the most popular data parked ready in SSD (like magic).
ATTO is a quirky program: on W7 it is OK, on W10 you must force it to admin mode and set compatibility to W7 or it locks up. But once past this pain, I do like it, as seen below. It is a very popular application worldwide.
ATTO, RAID 5 (4x SAS), 300 GB each, 10k RPM.
Performance will be best with 15k RPM SAS drives (excluding SSD; no way to beat that). Using many drives in parallel in RAID 0 is super fast (and you must have the huge cache option and battery on the card).
RAID 0 is for gaming or for a lab test computer built for speed only, where backups are your problem, not this array's.
The fastest spinners are 15k SAS 2.5" (Enterprise); all else is mostly slower (not talking SSD here).
Physics 101: the faster the disk spins, the quicker you can read the next sector; and the smaller the diameter, the faster the head can change tracks, for 2 reasons: less arm mass and less arm travel distance.
On HDDs, tracks are called cylinders, but I use the word track since most folks have seen a RECORD (relic music medium), run a TRACK in school, or driven in LANES.
Not only that: rebuilding, migrating, or adding, say, a 4th or 5th drive to an existing array goes faster with faster disks and WBC.
Each disk has a data sheet; I advise reading it first (see what it really does). I'm a Seagate kind of guy (HP is too, mostly), but see this. (EXOS has 350 million drives in the field since 2002.)
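To put numbers on that Physics 101 point, here is a short Python sketch of the average rotational latency (half a revolution) at common spindle speeds; it covers only the spin part of access time, since seek time is a separate figure on the drive's data sheet.

    # rotational_latency.py - average rotational latency = half a revolution
    for rpm in (5400, 7200, 10000, 15000):
        rev_ms = 60000.0 / rpm          # one full revolution, in milliseconds
        print(f"{rpm:5d} RPM: {rev_ms / 2:.2f} ms average rotational latency")

So a 15k drive waits about 2 ms on average for the data to come around, versus roughly 4.2 ms at 7200 RPM; that, plus the shorter 2.5" seeks, is where the SAS speed feel comes from.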


RAID is not for backup, unless the system is the backup server (even remote servers located above the flood plain, sure...).
RAID can be for speed, or... (gamers, video renderers/editors, special CAD/CAM/CAE, etc.)
RAID may be for production reasons only: to assure the system never needs to go off line. (Say disk 3 shows a SMART warning; you hot-swap it and never skip a beat (sure, slower until fixed).)
It can also be used to make faster SSD arrays, if rich.
Real RAID cards have the best SMART monitors in the business; that is correct.
The HP system monitors itself, the ARRAY has background monitors, and in fact the HP RAID card works as a team with the onboard HDD SMART engines. (This is complex; talk to HP.com or DELL about it.)
If uptime is of no concern to you, then buy a large SSD and forget you ever heard the word array or RAID. I bet it is plenty fast for your needs; 99% sure I am (home users).
See how well an LSI 9361 card can do with SAS and write-back cache enabled (battery or not).
How fast can it go? See below for that answer.
The rule: "Without write caching, RAID 5 controller write performance drops by a factor of 5-10 times." And... LOL, "Don't leave home without it" (a pun based on old TV commercials).
If super wise, add the battery option too (lacking a UPS AC power system or box). Use a real WBC, fully optioned.


RAID using SAS HDDs
can get great performance here, but never on access time.
SSD wins that every time. (Look ma, no heads.)
Random test mode (bench) will be slow if you jump tracks (cylinders).
The truth is that with, say, 5 HDDs in a RAID 6 array, the first drive that lands on the requested data wins the race (unless the read is flagged bad, in which case the smart array waits for the next responding disk).
Pretend you ask 10 people how to spell "THE": the same effect; the first one who answers wins. If one answer is Z%$#@|, then you ignore him and take the next respondent. (The unrecoverable error rate is about 1 bit per 10^16 bits read.)
The HDD repairs its own bad clusters using spares (100-byte ECC per cluster) and keeps the drive error-free for, say, 5 years, until it runs out of spares; my HP sends me an email when that happens.
The HP RAID card then also knows an HDD is bad and compensates for it until you DO WHAT YOU WERE TOLD in the EMAIL (the RAID card may even off-line the bad disk).
HDDs are short-lived devices (compared to a processor, which can last 100 years; we still have working ones from 1971, when Intel invented the microprocessor).

On the web I see this:
" I used eight 6TB Toshiba 7200RPM enterprise SATA drives connected to a IBM ServeRAID M5016 (LSI 9265-8i w/ 1gB CacheVault & supercap)."
RAID 10 !
This is only possible with WBC.



Last:
RAID 0, SSD:
Another good presentation is here, but on SSD only.

Do not read the below section unless you want proof; it only shows what can be done inside a REAL G7 server. Jump past it now (JUMP now).
The below seems off topic, but not really: it proves the cage above does work with standard LSI RAID cards. The 9260s and 9360s are both top-model cards; the 93xx series is 12 Gb/s.
If just doing tests on drives, say running test patterns on them: 0xAA then 0x55 is a pretty good data/data-bar pattern (1010-1010 and 0101-0101 binary).
All you need is an old LSI 9211 card, or one of its clones, which cost no more than $12 used (as seen here, and the clones).
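Here is a minimal Python sketch of that AA/55 pattern idea, deliberately pointed at an ordinary scratch file so nobody wipes a disk by accident; aim it at a raw device only if the drive holds nothing you care about (any device path you substitute is your own responsibility).

    # pattern_test.py - write alternating 0xAA / 0x55 blocks and read them back
    # TARGET is a plain file here on purpose; writing a raw device like /dev/sdX is
    # destructive, so that choice is left to you and your own judgment.
    TARGET = "pattern_test.bin"
    BLOCK = 1024 * 1024          # 1 MiB per block
    BLOCKS = 64                  # 64 MiB total for this quick demo

    patterns = [bytes([0xAA]) * BLOCK, bytes([0x55]) * BLOCK]

    with open(TARGET, "wb") as f:
        for i in range(BLOCKS):
            f.write(patterns[i % 2])

    errors = 0
    with open(TARGET, "rb") as f:
        for i in range(BLOCKS):
            if f.read(BLOCK) != patterns[i % 2]:
                errors += 1
                print(f"block {i}: mismatch")

    print("PASS" if errors == 0 else f"FAIL: {errors} bad blocks")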
Photo #3 shows how any LSI card, from LSI to Avago, works in my server.
Sorry, no, it will not work with HP array management at all (the H220/H221 will), so you must use the free LSI MSM manager; but the ROM BIOS works here.
To do this magic you must add the express riser cards that have x4 or x16 slots; they mount horizontally, so make sure you have the correct J (L) brackets if they are not right.
In this view, the riser cage is pulled, and we must load 1 or 2 LSI cards onto the 2 HP PCIe risers.
There are about 4 classes of express riser cards; here is where they go.
The motherboard has 2 PCIe riser slots for 2 riser cards, like this, with the riser cage upside down below.
Photo (drawing) #4: (my guess is all slots are PCIe 2.0 only and max x8 lanes; I have lots of proof).
This is the riser cage upside down, being loaded with riser cards. HP part numbers for my G7: the ~057 is best. (Missing 1 card here.)

See this real photo of 2 huge PCIe cards loaded.

The ~057 riser above, which I have, has two (qty 2) x4 slots that my SAS9260CV card fits; the slot is x8 physically but wired x4.
Oddly, HP put in x8 slot connectors but wired them x4, so any x8 RAID card here will only run at x4 speed, PCIe v2.0. (x1 = 500 MB/s per lane, so x4 is 4 x 500, or 2 GB/s.)
(But that should be plenty for most speed-oriented arrays, even 10 SAS spinning drives at 250 MB/s each.)
Digging deeper in the HP spec, sure, it is listed clear as day.
If that is not OK, use the other slot, x8 (note it is x16 physical); HP has no spec on this card that I can find, sadly.
Some cards have larger heat sinks, huge cache RAM, and a huge battery atop that... and in that case you need 2 risers; but my ~057 card has x16 on the other slot, wired as x8, so run it front and rear.
Below is my LSI x8 card in an x4 socket. Huh? It is silk-screened in white as x4 (but it is an x8 socket); note there are 2 sockets of the same type on this side of the riser.
Photo #5: (~057) in action.
2.0 GB/s throughput (in the G7, x4 channels, below).

If you're not sure of the REAL (negotiated) express speeds, this CLI command does work: run lspci -vv to see the negotiated width of all slots. (lspci is also available for Windows, not just Linux.)

Data response:
LnkSta: Speed 2.5GT/s, Width x8
See the LnkSta: lines.
LnkCap: Port #0, Speed 2.5GT/s, Width x16 (this line is the potential speed)
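To pull the negotiated width for every device in one pass, here is a small Python wrapper around lspci -vv (Linux; run it as root so the capability blocks are readable). It only greps out the LnkCap/LnkSta pairs, nothing fancier.

    # pcie_links.py - list PCIe link capability vs. negotiated status from lspci -vv
    import re
    import subprocess

    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

    device = None
    for line in out.splitlines():
        if line and not line[0].isspace():          # new device header line
            device = line.split(" (")[0]
        m = re.search(r"(LnkCap|LnkSta):.*?Speed ([\d.]+GT/s).*?Width (x\d+)", line)
        if m and device:
            print(f"{device}\n    {m.group(1)}: {m.group(2)}, {m.group(3)}")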
Question: are there better risers? Answer: yes. HP revised their G7 pages (not all, nor done right in the Parts Surfer).
Here: (HP likes to change the part numbers later and add a card.) We have proof that the Gen8 riser ~326 runs great in a G7.

The ~326 is the best of the best: 2 slots at x8 (for RAID cards), as seen below, actual. The problem is they are rare, and when found, expensive: $74 and up.
If you look close, it is an x16 slot but only has 8 lane pairs.
HP uses the wrong wording (per wiki):
"16 (8 mode)". One can clearly see 2 rows of 92 pins; x8 is 2 x 49 pins.
The ~323 is x16 wide (8 lanes). (Useful for video cards that run OK at x8 and draw only 75 watts max; the riser must support 75 watts, and not one goes more.)
PCIe 2.0 is 500 MB/s per lane, so x8 is 4.0 GB/s (i.e., plenty). As you can see, any x8 RAID card would work better here, say running SSD arrays.
PCIe 3.0 is 984.6 MB/s per lane, as seen on a G8 server, not the G7. An HP P420 RAID card would run best there.
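Here is the same lane arithmetic as a tiny Python sketch, using the rough per-lane payload figures quoted above (500 MB/s for PCIe 2.0, about 985 MB/s for PCIe 3.0); real-world numbers come in a bit lower once protocol overhead and the card itself get involved.

    # pcie_bandwidth.py - rough usable bandwidth = lanes * per-lane rate
    PER_LANE_MBPS = {"2.0": 500.0, "3.0": 984.6}     # approximate payload per lane

    for gen, rate in PER_LANE_MBPS.items():
        for lanes in (1, 4, 8, 16):
            print(f"PCIe {gen} x{lanes}: ~{lanes * rate / 1000:.1f} GB/s")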
Riser ~326 (tested, crudely, in a G7; sold for G8): there is risk in using the wrong cards. One example: BIOS flash updates may fail. Know that many cards have an ID, and if an update does not like that ID, it may fail.

Learn that HP does not test all cards made against all generations of servers; my (half) pun: if they did, they would still be testing and not
selling servers.
(Some have lane limits and/or bandwidth limits too, so "works" means little, OK?)





History on the HP P410 (it is not an LSI-based SAS chipset like the H220/221; it is in fact a PMC chip):
PMC-Sierra, Inc. (Nasdaq: PMCS) today announced its PM8011 SRC 8x6G 6 Gb/s SAS RAID-on-Chip (RoC) with a 6 Gb/s SAS RAID platform, to more than double the performance of existing RAID solutions.
The new (PMC-based) HP P212, P410, and P411 Smart Array RAID cards harness the full throughput of 6 Gb/s SAS and 5 Gb/s PCI Express 2.0.
Who owns PMC now? Not Skyworks (2016 Q1/2); there was a bidding war and the winner was Microsemi Corp. (MSCC).
Microsemi is the old, famous ADAPTEC, so it now owns PMC too. (Fewer players now.)
Avago (now named Broadcom) owns LSI, so they and Microsemi are now the big dogs in (hardware) RAID.
Many cards can be what is called cross-flashed (if you learn what is on the card, chip-wise).


SAS life spans are long, even with a 5-year warranty. (RTM: read the data sheet, all of it.) I like 15k SAS best, but here are my 10k's. (Noobs: 10k means 10,000 RPM; the faster the RPM, the faster the disk.)

My Savvio 10K.3 SAS 300 GB has an MTBF of 1,600,000 hours. The drives last longer if kept cool (at or below 55 °C) and kept running 24/7.
(I do not use NL SAS, "near line", only full Enterprise-grade drives. NL is SATA plus a SAS communications phy; NL is not true SAS. However, NL beats consumer-grade SATA drives in quality/life span.)
What is confusing is that consumer disks are rated for a low duty cycle, not 24/7 like SAS Enterprise drives are spec'd.
Each drive uses a peak max current of 0.6 A at 5 V and 1.50 A at 12 V, so as you can see, 12 VDC is the limiting factor on the PSU. (We need 10 A minimum; 200 W minimum, 300 W ideal.)
If running 16 drives you need 24 A on the 12 VDC rail.
Enterprise drives cost more per GB, and that is for 2 reasons: speed, and life span (quality) running 24/7.
Think of it like this: it lasts 10x longer than that lame, cheap 2.5" SATA drive in your cheap laptop.
The key point on life span: 30% duty cycle on consumer SATA drives versus 100%, 24/365 duty on the Enterprise SAS drive; so when you lay the two data sheets side by side, they cannot be compared for life span (can't).
Error reporting is very deep on Enterprise drives (systems).
"SAS also maintains its condition reporting to the controller; SATA drives do not have this capability, thus SATA disk degradation and pending failures are not reported."

The SAS drives run background tests all the time and report to the controller; the HP controller even flashes an LED to tell you about bad things here, and the HP manager even emails you: drive 60 is near end of life, replace me now.
No matter if you have 1 drive or 8 or 16, it's the same level of quality.
SAS is for quality; they do not need to compete on cheapness, only on life span (and speed and monitoring). The newer systems run at 12 Gb/s.
SATA is for CHEAP (cost per gigabyte is all that matters in a world of 90-day new-PC warranties).
The counterpoint is: "We use SATA drives, and they seem to be reliable (repeating 3-5 year upgrades of everything), and we are backed up, so SATA wins for us." Can't argue with that.
Some folks with deeper pockets run SSD RAID (and everyone envies them).
In the end analysis, what you are using your server for (mode) is all that matters (speed, etc.), plus your pocketbook.


Some FAQ questions (Q & A): (the longest story in the world, cut very short)
  1. What cards can I use in my G7 server? Vast types; LSI tops the list, but the HP P420 would be a better choice for running HW RAID.
  2. The P410 will not run in true HBA mode (or JBOD; answer: true). (Except that P410i HBA-mode support was only released for Integrity servers; there it is a simple 1-bit flip to HBA.)
  3. But you can set up 8 or 16 single-drive RAID 0 "non-arrays"; this does work with ZFS, as seen here and for sure in EWwrite's posts.
  4. The P420 will do that at PCIe v2.0 speeds in a G7. Plus, with the new firmware release, you can auto-format, say, 8 disks at once as RAID 0.
  5. If I put in an LSI card (not HP's H220/H221, which are LSI but modified by HP), will there be other limits? Answer: yes.
  6. OK, what limits? Sure. Answer: the LEDs on the drive cage are dead (or act oddly); see photo #2 above for why (answer: missing I2C sideband connections).
  7. Are there limits to access of S.M.A.R.T. data when not using HP parts? Sure, many features will be dead.
  8. Will HP Smart Array management of the non-HP RAID card be dead? Sure. So run LSI MSM instead.
  9. Many folks want to run SW RAID, but the P410 is no good for that, so get a P420 and set up HBA mode; the P420 does that (as do most LSI cards).
  10. Can I use a fake RAID card? NO! (Avoid the LSI 9240-8i.) This does not mean ZFS is fake by any means; I'm talking cards only. Some cards are missing the true RAID RoC chip.
  11. What about batteries (& cache)? OK: if you buy a real RAID card with 1 GB cache and the battery/SuperCap option, it will make WRITE speeds amazingly fast. I call this real HW RAID!!!
  12. If you abandon the P410i chip ports on the mobo, do not leave cables connected to them, or damage can happen (as seen here).
  13. HP RAID cards are not LSI cards (at the core)? Sure, some are: the H221 is an LSI 9207-8e but with custom HP firmware; the H221 is LSI 9207 (SAS2308), or if unlucky, the SAS2208.
  14. What is the downside of RAID 0 for HBA use? Well, the OS will see a dead drive as a dead array and go nuts, and you'd have to reboot the server (anti-production).
  15. Why is a RAID card so expensive? Answer: it's worth it, and today you can buy the whole freakn' server used, with one in it, for the same price as 1 card. Why not do that?
  16. What is dual-port SAS? This is easy: it has 2 PHYs (meaning 2 physical ports); Port B is clearly seen at pins S8 to S14, and it does 6G speeds.
  17. How do SAS error LEDs work? There is a free LED at pin P11 (HP does not use this pin; it is N/C there).
  18. What does HP use? Answer: it has its own complex LED logic (3 LEDs per drive, with 2 LEDs in the upper glow rod and 1 in the lower tray rod; the U1 and U2 PIC chips run these).


PORTS:
See Port B here. (In all cases this one drawing identifies the interface, as would 2.5"/3.5", the RPM (5k, 7k, 10k, or 15k = 15,000 RPM), and the maker's label; then you read the data sheet on it.)
All modern SAS drives have Port B; if not, it is legacy junk.
Single-port drives ended around 2012 (RIP). My HP backplane does both SAS and SATA.
  • Allows the drive to continue functioning if and when one port becomes nonfunctional (eliminates a single point of failure). (Each port is already full duplex.)

  • Allows the drive to operate at 6 Gb/s instead of the regular 3 Gb/s (port combining for superior performance).
Note also that SATA does not support dual ports or full duplex, even on a controller that supports both SATA and SAS drives, as ours do... You must use SAS drives to have these features.

This topic only serves to show that SATA drives, single-port drives, and dual-port drives all look different, per the above.
Do not mix SATA and SAS in the same array; but you can have 2 arrays, one with 8 SATA and one with 8 SAS, sure (if they fit, that is).

SATA is for PCs (toy computers, etc.); SAS drives are for Enterprise-level performance and life spans. As they say, you do get what you pay for.


RAID topologies, modes: (no comments on SSD here, banks of them)
If running SSD with only one goal, speed, run RAID 0 or 1; if you are backed up, go for max speed and fewer drives.
If only hard disks are in your budget, consider RAID 6 or 60 (with fresh, new drives; used ones invite 2 or more drives failing within a short span of time, and doom).
RAID 5 is now considered a bad idea (mostly by folks running drives way longer than the bathtub-curve rules state, rules which they ignored).
Not shown is RAID 60 (also named 6+0); the photo below shows the RAID 60 topology.
The purpose of RAID is not backup; it's SPEED, or it's for only 1 thing: production not going dead, 24/365 (e.g., web sites, or even local subnet support systems that cannot go off line).
They will tell you 1 drive is not supported; that only means RAID cannot use 1 drive (R = Redundant, so yes, that is true).
But on most RAID cards you can add a single-drive RAID 0 and treat it as an HBA-like 1-drive volume, then add one more RAID 0 and now have 2 quasi-HBA drives. (The HP P420 card has a real HBA mode: 1 bit flip and yes.)
This is an old chart, but it is short and sweet.


The below is the CLASSICAL bathtub curve, which tells you not to run drives past their EOL (end of life). (The left side is called infant mortality failures.)
Most wise, data-driven server farms use a 5-year rule, and derate that based on how many drives they will allow to fail at once (labor costs mostly; imagine hundreds failing per month in a huge FARM).

My examples do not cover modern systems at all. (I'm not Paul Allen with unlimited cash for quantity 8x 12 TB SAS HDDs, or mission-critical SAS X15s at $300 each, or lesser products up to 8 TB each.)
($2500 for just the array? Sorry, no, not me.)
I do not cover here the realities that most HDDs today are huge, and how that dictates what RAID to use. (Big-time differences depending on what you are using for HDDs.)
With larger drives, many shops set up RAID 10 (1+0). This uses fewer (4 minimum) larger disks than RAID 60, and is a compromise on drive failures.
This RAID group can recover if any or even all drives in a single stripe fail, but not if both drives in a particular mirror fail.
Therefore (with mirror pairs 1+3 and 2+4), we can recover if drives 1 and 2 fail, or drives 3 and 4 fail, but not if drives 1 and 3 or drives 2 and 4 fail. (A real compromise here, so RAID 6 and 60 are really better, if you can afford them.)
Or learn to run them (spinners) only 3 years, as many FARMS do.
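The RAID 10 failure rule above is easy to sanity-check in code: a 1+0 set survives as long as no mirror pair loses both of its members. Here is a minimal Python sketch using the same 4-drive example, with drives 1+3 and 2+4 assumed to be the mirror pairs (that assumption is what makes the "drives 1 and 2" case survivable).

    # raid10_check.py - does this set of failed drives kill a RAID 10 (1+0) group?
    MIRROR_PAIRS = [(1, 3), (2, 4)]      # assumed layout: 1 mirrors 3, 2 mirrors 4

    def survives(failed):
        failed = set(failed)
        # the array dies only if some mirror pair has lost BOTH of its drives
        return not any(set(pair) <= failed for pair in MIRROR_PAIRS)

    for failed in [(1,), (1, 2), (3, 4), (1, 3), (2, 4)]:
        print(f"failed drives {failed}: {'survives' if survives(failed) else 'ARRAY LOST'}")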




One more benchmark, with 4 drives (built, tested, then taken down).
With 5x (RAID 5) 10k RPM 2.5" drives and no WBC, I can get 200 MB/s write and 230 MB/s read speeds, easy; about half the speed of 1 SSD. (ATTO app.)
By direct comparison, here is my real server doing the same thing but with WBC (write-back cache) fitted.
The thing that jumps out and bites you here is the fast writes; the WBC magic happens here. The OS writes it and bam, it thinks it's there (the magic of cache).
It is amazing how well spinners run with WBC. (P410i with the added 1 GB cache and the added BBU battery; it's really a supercap there, 3 farads.)
These tests are fun... BTW, I can also put the P410 card in the PIG above; that is on my list of things already done.


WBC (WRITE BACK CACHE)
Hint of the day: never underestimate how fast a real RAID card can do WRITES with the full HP (or LSI) write-back cache upgrade and battery (BBU). (Try it and test it before you judge it; it is smarter than you know.)
(Never use WBC with no healthy battery present: dead, bad, missing, or just discharged. The HP and LSI software managers tell you whether it's OK or not, so just look.)
Not only that, but the CACHE can be tuned for better write speed or read speed (your choice) with a software lever there: 50%, or to the left or the right. (Mine has 1 GB of space there that I can tune.)
On newer cards the BBU is really a huge 3-farad capacitor (it too must be allowed to fully charge before you go LIVE). When I say battery here, I mean both technologies (battery, or supercap with flash-back).

The best PDF manual on benchmarks and bottlenecks for RAID.
LSI says:
"Write-Back

A caching strategy where write operations result in a completion status being sent to the host operating system as soon as data is in written to the RAID cache.
Data is written to the disk when it is forced out of controller cache memory.  (the cards ROC processor does this using no CPU cycles at all)

Write-Back is more efficient if the temporal and/or spatial locality of the requests is smaller than the controller cache size. 
(In my case 1GB cache module on card)

Write-Back is more efficient in environments with “bursty” write activity. (less than 1GB)  (Do not forget it can be tuned)


Battery backed cache can be used to protect against data loss as a result of a power failure or system crash"
. ( not having this, can be super  risky, if you lost power in the middle of WRITE block , how can that be ever good?)
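To make the quoted definition concrete, here is a toy Python sketch of the write-back idea: the "host" gets its completion as soon as the block lands in a small cache, and the slow flush to "disk" happens later. It is only a thought model of the strategy described above, not how a real RoC firmware works.

    # writeback_toy.py - toy model of write-back caching: ack when cached, flush later
    import time

    CACHE_LIMIT = 4          # pretend the controller cache holds 4 blocks
    cache = []               # acknowledged to the host, not yet on disk
    disk = []                # blocks actually "on disk"

    def slow_disk_write(block):
        time.sleep(0.05)     # pretend the spinning disk is slow
        disk.append(block)

    def flush():
        while cache:
            slow_disk_write(cache.pop(0))   # forced out of controller cache memory

    def host_write(block):
        cache.append(block)  # completion status goes back to the host right here
        if len(cache) >= CACHE_LIMIT:
            flush()

    for i in range(8):
        t0 = time.time()
        host_write(f"block{i}")
        print(f"write {i}: acknowledged in {(time.time() - t0) * 1000:6.1f} ms")

    flush()                  # the cached-but-unflushed blocks are what the battery protects
    print(f"disk now holds {len(disk)} blocks")

Writes that fit in the cache come back in well under a millisecond; the one that forces a flush pays the full disk penalty, which is exactly the "bursty" behavior the LSI text describes.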

To all the lucky guys who lost power and lost no data, I say good for you; luck does happen. Sure.
So does texting like an idiot while driving (or walking across the street blindfolded). Why brag?
I say keep write-back off if you lack the battery; I see Intel controllers now force this issue (off).
The WBC allows all other modes in the RoC (RAID-on-Chip) to run super fast; for example, when you replace a bad RAID 6 disk, the rebuild does not take days, only hours. (Consider this fact.) Same with migrations, resyncs, and the other RAID jobs you do with the RoC brain.
With the HP RAID card you can move all drives from system 1 to system 2, even making the bad error of not putting numbers on each drive with a flair pen (Disk 1, Bay 2, etc.); the controller sees your error and rebuilds the whole thing back to good. (WBC wins here every time; LSI calls it CV, CacheVault (tm).)
Before switching from HW to SW RAID brains, consider all the advantages of HW RAID first, for sure the issue of a virus infecting any SW RAID. (Long and hard thinking, then go wild.)

The best advice will be to think about service issues first, not the cost of a silly $100 RAID card. (The above cost me $40; $100 gets a better card with WBC; the above is just testing on the cheapest card in my spares kit.)
Features like the following (besides vast array types, even RAID 60):
Learn to ask: how can I recover, given all classes of failures or usage modes (1 to 3 drives fail?), or when migrating or expanding the array?
  • Online Capacity Expansion (OCE)
    • Online RAID Level Migration (RLM) (or migrating drives to a new system, or doing so out of order, or to a newer controller; and resync speeds: hours, or are DAYS OK? Or drive expansion. Oops, check that out.)
    • Auto resume after loss of system power during array rebuild or reconstruction (RLM)
    • Single controller Multipathing
    • Load Balancing
    • Configurable stripe size up to 1 MB
    • Fast initialization for quick array setup
    • Check Consistency for background data integrity
    • SSD Support with SSD Guard™ technology
    • Patrol read for media scanning and repairing 
    • 64 logical drive support
    • DDF compliant Configuration on Disk (COD)
    • S.M.A.R.T. support
    • Global and dedicated Hot Spare with Revertible Hot Spare support
    – Automatic rebuild (resyncs that don't take DAYS to do)
  • In a recent test, Nytro MegaRAID showed an almost 4:1 improvement in RAID rebuild time over the same workload and configuration without a cache. (Imagine a 6-hour job is now 24 hours, and it can be far, far worse with no cache.)
    – Enclosure affinity
    – Emergency SATA hot spare for SAS arrays
  • More: there are lots more features here if you do the research on long-term service support (and what happens if bla-bla fails?); it seems to me many folks fail to do this job.
  • HW RAID cards (2012 and up) can be moved, or even changed from Linux to Windows or the reverse, and the array is good to go (try that with SW RAID!). The RAID card driver works on just about all operating systems, 99% for sure.
  • Last and not least: how does getting a virus work out for you in your software RAID, which was free, and that was the major reason for using it?
One of my most favorite (2018) guides, not HP or DELL, is the LSI SELECTOR GUIDE seen here; very, very good.
For those who say HW RAID is dead, think again; ever try CacheCade? $250 is not cheap, but it shows you how a small, cheap SSD can add huge gains. (CacheCade is a springboard to future full SSD arrays.)

The Pissers and the Moaners (as seen endlessly on all social networks):
They cry their eyes out when the ARRAY goes dead. (Wow, zero planning for what will happen, for sure!) No? Yes! (As they say, sheet happens.)
  • So they kill power on (or reboot) the server while it is rebuilding the array. Wow. Then they ask, how could I know this fact? Well, did you lose your MSM program? You run RAID blind? (Worse yet, MSM is free; why wait? Get it.)
  • Not backed up. (OMG, oh my golly, why not?) If you were, you'd not be crying now.
  • Not running RAID 6 or 60.
  • Using 5-year-old HDDs or older. Really, why? (If one failed at 5 years, why not expect #2 to fail NOW?)
  • Not using WBC (write-back cache) or CacheVault, then wondering why the rebuild/resync is slow. It will be. As will production run slow (for the same reasons).
  • Not using SAS 2.5" 15k drives, which are fast in all ways, for sure when rebuilding the array, not just at production speeds. 15k RPM means it reads 1 track faster, and the light-mass 2.5" head arms move faster (access time). Yeah, best.
  • No backup server, no fail-over servers. Really? If running production, this is the 1st thought, not the last in a panic.
  • What if the server actually died (they do!) but the ARRAY and, say, the RAID card are OK (or not)? You can move the drives to the good spare but empty server, and the (for example) HP RAID card will accept it, no issues. Are you ready?
  • Speed cry 1: rob a bank and buy SSD arrays? No, too expensive; see the next line.
  • Speeds not up to your standards (cry 2)? Then get a new Avago card with CacheCade set up, with 1 cheap SSD + a SAS RAID array (magic, no? study it). This is a good stop-gap measure for speed (production performance).
  • And last: "I can't afford real hardware like this." Then use the cloud, or just use a single 5 TB drive and no RAID, and when it fails the backup saves your bacon, but production goes down for the recovery.
  • If unsure, ask for help; do not guess. Get help. (Do not do something wrong and make things worse, as many do, please.)
Cloud storage can be 1 TB for 1 year at $15 (when on sale); $5 a month is par.
Why build at all, unless speed matters?
If speed matters, then buy a $100-$200 HP G7 server (they are in glut status, prices falling) and build what you want and need; there are no limits, and it will be faster than your cable-modem WAN can go.
I own 2, and the backup one is off line, waiting for need.
All data is on BD-R Blu-ray recordable discs.
All data, too, is on backup 1 TB drives, parked off line (eSATA boxes).
The cloud has 2 advantages: all data is encrypted and secure, and it can even be used as a backup solution.
I am also backed up here.

Have a plan; only speed and costs matter, backup is assumed; nobody serious has no backup plan.
As Bill Gates might say, "How fast do you want to go today?"

MSM means the RAID ARRAY manager for LSI (now Avago); it's free. Or use HP's SSA app, or Dell's array managers with different names that do the same thing.
MSM = MegaRAID Storage Manager. (It even emails you: drive 3 is going south now, replace it or you will be sorry.)

version 2.  5-12-2018 (success day)