My experiment: using an HP SAS hard drive cage as a mobile or external unit on any alien PC or server,
and NOT buying
an HP StorageWorks MSA60 Modular Smart Array 418408-B21! (about $170 used)

Along a similar vein, I also cover whether a non-HP LSI RAID card can work in my HP ProLiant DL380 G7, as seen here. (G7 means Generation 7 servers.)
In fact some HP RAID cards are HP-modified (firmware) LSI cards; two of note are the
H220 and H221.

Can I use the HP P410 card in my alien PC? Sure you can (with minor limits). Even better is the P420, which can do HBA mode on demand ("flip one bit, bingo").
INDEX:  Power adapter
Secret jack wiring
Wrong card in a real HP server
FAQ (short)
RAID topology choices
What ports do I have, and how to tell
How long do HDDs last
Benchmarks (HDD only + WBC)
My project in detail
SAS heating facts
PSU hacks (for my project)
Cry babies (my array is dead, boo hoo)

This is not any form of production use of any kind; it is 100% OFFLINE usage.
Targets: a lab server, a test server, or a random PC for testing SAS HDDs (SMART tests), testing SFF cables, and testing RAID cards of any kind (even HBA cards).
I can even burn in HDDs and RAID cards to see if they are worthy of future use (or benchmark devices to see if they perform as expected).


I can do this even using a $5 old PC with a free x8 PCIe slot or larger (x16).
BOM (parts list / bill of materials):
  • A test PC of any kind, with a free x8 or wider PCIe slot. (Hint: if benchmarking SAS drives, and especially SSDs, be sure all lanes are active; some PCs limit the lanes routed to the RAID card.)
  • A spare PSU; 300 watts can be plenty for SAS drives. One drive needs about 1.5 A of dynamic current (so for 8 drives, multiply by 8). Startup current is not a problem if you set the controller to sequence the spin-up.
  • HP drive cage with backplane, G7, SFF 2.5" ($25 used).
  • HP power cable for the above ($5).
  • $2 PSU extension DC power cable, cut in half and soldered to the above per my instructions below (excess pins removed too, to help airflow).
  • Normal HP or standard SFF-8087 data cables, 1 or 2 depending on whether you run 4 or 8 drives (connecting both is best). HP 493228-005 is the cable.
  • Cooling: the HDDs must never run over 60°C (140°F), or life span (MTBF) will suffer; see how I do that below with a tunnel. The temp spec is 5°C to 55°C (60°C is the absolute max case temp; 55°C is the MTBF rule).
  • Some metal or plastic panels (or even simple plywood, which is much easier to work) to form a forced-air wind tunnel from the SFF cage to the PSU, with the PSU fan doing all the cooling necessary (swap in a newer, faster fan if need be).
  • Any RAID card in the PC (even a SAS HBA card) with 2 SFF-8087 jacks (ports); with only 1 jack you get support for 4 drives only.

My story (short):
I wanted to be able to connect it to any of my many spare RAID cards.
Even do SFF HDD tests on them.
I did not like the price of, say, the Supermicro CSE-M14 or M28 cages ($100 to $200 smackers, or more).
So I bought a used HP cage, the same as what is inside my current G7 server, plus all the parts in the BOM above.
I do not need most of this to test one drive; it is for testing a full array of 3 to 8 drives without overheating them. (Even one 15k SAS drive lying on a bench will overheat, so...)
An alternative to all this is to just buy a second server for $100 used and name it lab-test only. (G7 servers are the best and dirt cheap now.)

See  photos of all key parts below.

The drive cage uses regular SFF-8087 cables (or HP's) end to end, so it is perfect. (SFF = Small Form Factor.)
I will test it with my spare P410 card and my LSI 9260 card (SAS9260).
Next I will report all issues discovered, beyond this FAQ list:

If you only need 1 to 4 drives, only one SFF cable needs to be connected: J1 runs slots 1 to 4 (left bay); the J6 cable runs slots 5 to 8 (right bay). (Confirmed by me.)
Now the wiring of my custom power cable (all wires soldered, then heat-shrink tubing applied).

POWER to the HP SFF BACKPLANE: (do not make an error here or boom, blown-up drives; test it first with a DMM, then with a very old but good 60 GB SAS drive, as seen for $5 on fleabay)
Super simple wiring.
Left is the HP 10-pin Molex jack, right is the industry-standard 24-pin ATX jack (see the ATX 24-pin spec on wiki).

HP 1/5 ------ +12 VDC ------ ATX 10/11 (yellow)
HP 2/4/7 ---- GND ---------- ATX 3/5/7  (black)
HP 6/10 ----- +5 VDC ------- ATX 4/6    (red)
See photo #2 below for the pinout on the HP end (an HP secret).
You also need a ground jumper wire between the PC under test and the cage under test, because there are now 2 PSUs and they must share a common ground (AWG #18 or larger).

More Details.

The only hard part is cooling (and if you don't cool, THEY overheat fast; "they" means both the SAS drives and the RAID card itself).
The server sucks air in through the front drive caddies and then through six cute slots in the backplane PCB; mimicking this is not too hard with some sheet metal or plastic panels.

The best way to cure this problem is to form a box tunnel that puts the ATX PSU on the rear of the cage and lets the PSU fan suck air just like the real server did. (Why buck HP's design?)
A box tunnel need not be 100% airtight. (60°C max drive case temp is the rule; below 55°C is the rule stated for MTBF life span.)
There are small holes in the cage already, but they cannot be used; they would hit the drives.
One hitch: the power jack is in the wrong spot (it points left), so I cut a blister into the new side panel so power can feed it easily.
Then a large grommet for my new data cable below (seen only in the photos at the end of this page):
Cable: SFF-8087 (36 pins),
or a real HP 493228-005 SFF data cable ($5 used at fleabay), which supports the HP sidebands (for cage errors, etc.).
The standard SFF data cables are here: SFF-8087.

Photo #1 (out of my G7 ProLiant DL380): this cage is now working with my SAS9261 RAID card running the AVAGO MSM manager. I created an array volume and it works perfectly; nice surprise, the LEDs work too.

Photo #2, below: (confirmed by me to work)
The secret HP Molex jack on the above
backplane, clearly marked J13. In this application pins 3/8/9 will be dead, and the LED logic will be dead too (I2C sideband logic, 100% custom to HP).
I checked the Molex-to-PCB wiring very carefully (many careful continuity checks).
The I2C pins talk to the U1 and U2 chips, for at least the LED logic (3 LEDs per drive: green/amber, activity, errors, predictive failures, offline drives, etc.). All these LED features will be dead.
Those 2 chips (now offline) may also be part of HP's SAS background monitors. (Hinted at in the manuals but never detailed in full; not one block diagram to understand this...)

I shorted pin 15 to pin 16 (turns it on), and wired 3 each of the ground, 5 V and 12 V lines to the pins above by the same names. Bingo, it works.
The resistors mentioned above are not needed at all.

You can skip this next section; it is only proof of what can be done inside a REAL G7 server (jump past it now if you like).
It seems mostly off topic, but not really: it proves the cage above does work with standard LSI RAID cards. The 9260 and 9360 are both top-model cards; the 93xx series is 12 Gb/s.
If just doing tests on drives, say running test patterns on them (data and data-bar: AA hex then 55 hex is a pretty good pattern, 1010-1010 and 0101-0101 binary),
all you need is an old LSI-9211 card, or one of its clones, which cost no more than $12 used (as seen here, and the clones).
Photo #3 shows how any LSI card, from LSI to Avago, works in my server.
Sorry, no, it will not work with HP array management at all (the H220/H221 will), so you must use the free LSI MSM manager; but the ROM BIOS does work here.
To do this magic you must add the PCIe riser cards that have x4 or x16 slots; they mount horizontally, so make sure you have the correct L-brackets.
In this view, the riser cage is pulled, and we load 1 or 2 LSI cards into the 2 HP PCIe risers.
There are about 4 classes of PCIe riser cards; here is where each goes.
The motherboard has 2 PCIe riser slots for 2 riser cards, as shown with the riser cage upside down below.
Photo (drawing) #4: (my guess is all slots are PCIe 2.0 only and max x8 lanes; I have lots of proof)
This is the riser cage upside down, being loaded with riser cards. Of the HP part numbers for my G7, the ~057 is best. (One card is missing here.)

See this real photo of 2 huge PCI-e cards loaded.

The ~057 card above, which I have, has two (qty 2) x4 slots that my SAS9260CV card fits; each is physically an x8 slot but wired x4.
Oddly, HP used x8 slot connectors but wired them x4; any x8 RAID card here will only run at x4 speeds at PCIe 2.0. (x1 =
500 MB/s per lane, so x4 is 4 × 500, or 2 GB/s.)
(That should be plenty for most speed-oriented arrays, even 10 spinning SAS drives at 250 MB/s each.)
Digging deeper in the HP spec, sure enough, it is listed clear as day.
If that's not OK, use the other slot: x8 (note it is x16 physical). HP has no spec on this card that I can find, sadly.
Some cards have larger heatsinks and huge cache RAM with a huge battery on top; in that case you need 2 risers. But my ~057 card has an x16 slot on the other side wired as x8, so run one card front and one rear.
Below is my LSI x8 card in the x4 socket. Huh? It is silk-screened in white as x4 (but is physically an x8 socket). Note there are 2 sockets of the same type on this side of the riser.
Photo #5: (~057) in action.
2.0 GB/s throughput (in the G7, x4 lanes below)

Not sure of the REAL (negotiated) PCIe speeds? This CLI command works: run lspci -vv to see the negotiated link of all slots. (lspci is also available for Windows, not just Linux.)

Data response:
LnkSta: Speed 2.5GT/s, Width x8
See the LnkSta: lines (the negotiated link).
LnkCap: Port #0, Speed 2.5GT/s, Width x16 (this line is the potential speed)
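To scan every slot at once rather than reading the full dump, you can filter for just those two lines; a small sketch (root is generally needed for full -vv output):

```shell
# Print capability vs. negotiated link for each PCIe device.
# A LnkSta narrower or slower than its LnkCap means a downtrained link.
sudo lspci -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:'
```

Compare each LnkSta against the LnkCap above it to spot x8 cards running at x4, as described above.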
Question: are there better risers? Answer: yes. HP revised their G7 pages (not all of them, nor done right in the Parts Surfer).
Here: (HP likes to change the part numbers later and even add a card.) We have proof that the Gen8 ~326 card runs great in a G7.

The ~326 is the best of the best: 2 slots at x8 (for RAID cards), as seen below. The problem is they are rare, and when found, expensive: $74 and up.
If you look close, it's an x16 slot but only has 8 lane pairs wired.
HP uses the wrong wording (per wiki):
"16 (8 mode)"; one can clearly see 2 rows of 92 pins, while x8 is 2 × 49 pins.
The ~323 is x16 wide (8 lanes wired), useful for video cards that run OK at x8 and draw 75 watts max (the riser must support 75 watts; none goes higher).
PCIe 2.0 is 500 MB/s per lane, so x8 is 4.0 GB/s (plenty). As you can see, any x8 RAID card would work better here, say running SSD arrays.
PCIe 3.0 is 984.6 MB/s per lane, as seen on G8 servers, not G7. An HP P420 RAID card would run best there.
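The lane arithmetic above, as a quick sanity check:

```shell
# Per-lane payload rate quoted above for PCIe 2.0: 500 MB/s.
gen2_lane=500
x4_total=$((gen2_lane * 4))   # the ~057 riser's x4-wired slot
x8_total=$((gen2_lane * 8))   # the ~326 riser's x8 slot
echo "PCIe 2.0: x4 = ${x4_total} MB/s, x8 = ${x8_total} MB/s"
```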
Riser ~326: tested (crudely) in a G7; it is sold for G8. There is risk in using the wrong cards; one example: BIOS flash updates may fail. Know that many cards carry an ID, and if an update does not like that ID, it may fail.

Learn that HP does not test every card made against every generation of server; my (half) pun: they would still be testing and never
selling servers.
(Some risers have lane limits and/or bandwidth limits too, so "it works" means little, OK?)

Some FAQ questions (Q & A) (the longest story in the world, cut very short):
  1. What cards can I use in my G7 server? Vast types; LSI tops the list, but an HP P420 would be a better choice for running HW RAID.
  2. The P410 will not run in true HBA mode (or JBOD); answer: true. (Except that P410i HBA mode support was only released for Integrity servers, where it is a simple 1-bit flip to HBA.)
  3. But you can set up 8 or 16 single-drive RAID 0 "non-arrays"; this does work with ZFS, as seen here and for sure in EWwrite's posts.
  4. The P420 will do that at PCIe 2.0 speeds in a G7. Plus, with the new firmware release you can auto RAID 0 format, say, 8 disks at once.
  5. If I put in an LSI card (not HP's H220/H221, which are LSI but modified by HP), will there be other limits? Answer: yes.
  6. OK, what limits? Answer: the LEDs on the drive cage are dead (or act oddly); see photo #2 above for why (missing I2C sideband connections).
  7. Are there limits on access to S.M.A.R.T. data? When not using HP parts? Sure, many features will be dead.
  8. Will HP Smart Array management on a non-HP RAID card be dead? Sure. So run LSI MSM instead.
  9. Many folks want to run SW RAID, but the P410 is no good for that, so get a P420 and set up HBA mode; the P420 does that (as do most LSI cards).
  10. Can I use a fake RAID card? NO! (Avoid the LSI 9240-8i.) This does not mean ZFS is fake by any means; I'm talking cards only. Some cards are missing the true RAID ROC chip.
  11. What about batteries (and cache)? OK: if you buy a real RAID card with the 1 GB cache and battery/supercap option, it will make WRITE speeds amazingly fast. I call this real HW RAID!!!
  12. If you abandon the P410i chip ports on the mobo, do not leave cables connected to them, or damage can happen (as seen here).
  13. HP RAID cards are not LSI cards at the core? Sure, some are: the H221 is an LSI-9207-8e with custom HP firmware (the SAS2308 chip, or if unlucky the SAS2208).
  14. What is the downside of RAID 0 as a quasi-HBA? Well, the OS will see a dead drive as a dead array and go nuts, and you'd have to reboot the server (anti-production).
  15. Why are RAID cards so expensive? Answer: they're worth it, and today you can buy the whole freakin' used server with one inside for the same price as 1 card, so why not do that?
  16. What is dual-port SAS? This is easy: the drive has 2 PHYs (meaning 2 physical ports); Port B is clearly seen at pins S8 to S14, and does 6G speeds.
  17. How do SAS error LEDs work? There is a free LED output at pin P11 (HP does not use this pin; it is N/C there).
  18. What does HP use instead? Answer: its own complex LED logic (3 LEDs per drive, with 2 LEDs in the upper glow rod and 1 in the lower tray rod; the U1 and U2 PIC chips run these).
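FAQ item 3 above (single-drive RAID 0 volumes used as quasi-HBA disks for ZFS) can be sketched roughly like this; the pool name and device paths are placeholders, and it assumes the controller has already exposed each disk as its own RAID 0 logical drive:

```shell
# Each device below is a single-drive RAID 0 logical volume the controller
# exposes -- NOT a raw disk, so ZFS loses direct SMART/error visibility.
# Placeholder names; adjust pool name and devices for your system.
zpool create -o ashift=12 labtank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status labtank
```

A real HBA mode (the P420, or an LSI card) is still the better base for ZFS, per the FAQ above.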

See Port B here. (In all cases this one drawing identifies the interface, as would the 2.5"/3.5" size, the RPM class (5k, 7k, 10k or 15k, i.e. 15,000 rpm) and the maker's label; then you read the data sheet on it.)
All modern SAS drives have Port B; if not, it is legacy junk.
Single-port drives ended around 2012 (RIP). My HP backplane does both SAS and SATA.
  • Allows the drive to continue functioning if and when one port becomes nonfunctional (eliminates a single point of failure). (Each port is already full duplex.)

  • Allows the drive to operate at 6 Gb/s instead of the regular 3 Gb/s (port combining for superior performance).
Note also that SATA does not support dual ports or full duplex, even on a controller that supports both SATA and SAS drives, as ours do. You must use SAS drives to have these features.

This topic only serves to show that SATA, single-port, and dual-port drives all look different per the above.
Do not mix SATA and SAS in the same array; but you can have 2 arrays, one with 8 SATA and one with 8 SAS, sure (if they fit, that is).

SATA is for PCs (toy computers, etc.); SAS drives are for enterprise-level performance and life spans. As they say, you do get what you pay for.

RAID topologies and modes: (no comments on banks of SSDs here)
If running SSDs with only one goal, speed, run RAID 0 or 1; if you are backed up, go for max speed and fewer drives.
If only
hard disks are in your budget, consider RAID 6 or 60 (with fresh, new drives; used ones invite 2 or more drives failing in a short span of time, and doom).
RAID 5 is now considered a bad idea (mostly by folks running drives way longer than the bathtub-curve rules state, rules they ignored).
Not shown is RAID 60 (also named 6+0). The photo below shows the RAID 60 topology.
The purpose of RAID is not backup; it's SPEED, or it's for only one thing: production not going dead, 24/365 (e.g. web sites, or even local subnet support systems that cannot go offline).
They will tell you 1 drive is not supported; that only means RAID cannot use 1 drive (R = Redundant, so yes, that is true).
But on most RAID cards you can add a single-drive RAID 0 and treat it as an HBA-like 1-drive volume, then add one more RAID 0 and now have 2 quasi-HBA drives. (The HP P420 has a real HBA mode: 1 bit flip and yes.)
This is an old chart, but it is short and sweet.

The "classical" bathtub curve below tells you not to run drives past EOL (end of life). (The left side is called infant-mortality failure.)
Most wise, data-driven server farms use the 5-year rule, and derate that based on how many drives they will allow to fail at once (labor costs mostly; imagine hundreds failing per month in a huge farm).

My examples do not cover modern systems at all. (I'm not Paul Allen with unlimited cash for quantity 8 × 12 TB SAS HDDs, or $300-each mission-critical HDDs, or lesser products up to 8 TB each.)
($2,500 for just the array? No, not me.)
I do not cover here the reality that most HDDs today are huge, and how that dictates to me which RAID to use (big-time differences depending on which HDDs you are using).
With larger drives, many shops set up RAID 10 (1+0). This uses fewer (4 minimum) larger disks than RAID 60, and is a compromise on drive failures.
This RAID group can recover if any or even all drives in a single stripe fail, but not if both drives in a particular mirror fail.
Therefore, we can recover if drives 1 and 2 or drives 3 and 4 fail, but not if drives 1 and 3 or drives 2 and 4 fail. (A real compromise here, so RAID 6 and 60 are really better, if you can afford them.)

The RAID chart above is not accurate on the topic of speed when you use a 1 GB battery-backed cache on the card. (Get the CacheVault, then show benchmarks.)
Many posts online tell you it's slow to write; they are full of beans. (If the bursts last less than 1 GB, my speeds are very fast.)
Hint 1: do your own tests. Do not listen to others who have never seen a WBC (write-back cache) in their life, nor ever tested with their own production I/O rates.
Not only that, there is the CacheCade device, which puts an SSD cache in the middle of all this and keeps the most popular data parked ready in SSD (like magic).
ATTO must be run in admin mode, and some versions need Win7 compatibility mode set.
Performance will be best with 15k SAS drives (excluding SSD; no way to beat that). Using many drives in parallel in RAID 0 is super fast (and you must have the huge cache option and battery on the card).
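For benchmarks on the Linux side (instead of ATTO), fio can generate the kind of sequential-burst load discussed above. A minimal sketch: the target path and sizes are placeholders, so aim it at a scratch file, never at a device holding data:

```shell
# Sequential 1 MiB writes with direct I/O (bypassing the page cache),
# roughly the burst profile a controller's write-back cache absorbs.
fio --name=seqwrite --filename=/mnt/array/fio.test --rw=write \
    --bs=1M --size=2G --direct=1 --ioengine=libaio --group_reporting
```

Run it once with the card's cache set to write-through and once with write-back to see the difference for yourself.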

RAID is not for backup, unless the system is the backup server (even remote servers located above the flood plain, sure...).
RAID is for speed, or... (gamers, video renderers/editors, special CAD/CAM/CAE, etc.)
RAID may also exist for production reasons only: to assure the system never needs to go offline. (Say disk 3 shows a SMART warning: you hot-swap it and never skip a beat; sure, it runs slower until fixed.)
It can also be used to make faster SSD arrays too, if you're rich.
Real RAID has the best monitors in the business; that is correct.
The HP system monitors itself; the array has background monitors (hell, first invented for SAS), and HP uses all the features of the drives' internal tests and adds its own.
If uptime is of no concern to you, then buy a large SSD and forget you ever heard the word array or RAID. I bet it is plenty fast for your needs; 99% sure I am (home users).
See how well an LSI 9361 card can do with SAS and write-back cache enabled (battery present or not).
How fast can it go? See below for that answer.
The rule: "Without write caching, RAID 5 controller write performance drops by a factor of 5-10 times." And... LOL, "Don't leave home without it" (a pun based on old TV commercials).
If super wise, add the battery option too (if lacking a UPS AC power system or box).
Another good presentation is here, but on SSD only.

HDDs here can get great performance, but never on access time; SSD wins that every time. (After all, an SSD is a class of slow RAM (super-fast EEPROM), not rotating memory cylinders (sectors).)
Random test mode (in a benchmark) will be slow. On HDD it must move that slow, cumbersome head, but the first of, say, 10 drives will shorten this time a bit (odds are).
Pretend you ask 10 people how to spell "THE": the same effect (faster than one person, but never faster than, say, one smart, fast robot).
Once an answer (read request) is found, the speed will be limited by the RPM of the cylinder (velocity), if and only if the answer fits on one cylinder (track). Watch a simulation of an HDD; the basics are very simple.
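To put numbers on that RPM limit: average rotational latency is half a revolution, so a quick back-of-the-envelope sketch:

```shell
# Average rotational latency = (60 / RPM) / 2, shown in milliseconds.
for rpm in 5400 7200 10000 15000; do
    awk -v r="$rpm" 'BEGIN { printf "%5d RPM -> %.1f ms avg latency\n", r, 60000 / r / 2 }'
done
```

This is why 15k drives (2.0 ms) feel so much snappier than 5400 RPM laptop drives (5.6 ms) on random work, before seek time is even counted.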
On the Web I see this:
"I used eight 6TB Toshiba 7200RPM enterprise SATA drives connected to a IBM ServeRAID M5016 (LSI 9265-8i w/ 1gB CacheVault & supercap)."
Access time is not going to be fast with any HDD, but this is pretty good.

History on the HP P410 (it is not an LSI-based SAS chipset like the H220/221; it is in fact a PMC chip):
PMC-Sierra, Inc. (Nasdaq: PMCS) today announced its PM8011 SRC 8x6G 6Gb/s SAS RAID-on-Chip (RoC), with a 6Gb/s SAS RAID platform to more than double the performance of existing RAID solutions.
The new (PMC-based) HP P212, P410, and P411 Smart Array RAID cards harness the full throughput of 6Gb/s SAS and 5Gb/s PCI Express 2.0.
Who owns PMC now? Not Skyworks (2016 Q1/2); there was a bidding war and the winner was Microsemi Corp. (MSCC).
Microsemi is the old, famous ADAPTEC, and now owns PMC too (fewer players now).
Avago (now named Broadcom) owns LSI, so Broadcom and Microsemi are now the big dogs in RAID hardware.
Many cards can be what is called cross-flashed (if you learn what is on the card, chip-wise).

SAS life spans are long, even with a 5-year warranty (RTM: read the data sheet, all of it). I like 15k SAS best, but here are my 10k's. (Noobs: 10k means 10,000 RPM; the faster the RPM, the faster the disk.)

My Savvio 10K.3 SAS 300 GB has an MTBF of 1,600,000 hours. Drives last longer if kept cool (under the 60°C limit) and kept running 24/7.
(I do not use NL-SAS ("near line"), only full enterprise-grade drives. NL is SATA mechanics with a SAS communications PHY; NL is not true SAS. However, NL beats consumer-grade SATA drives in quality and life span.)
What is confusing is that consumer disks are rated for a low duty cycle, not 24/7 like SAS enterprise drives are spec'd.
Each uses a peak max current of 0.6 A at 5 V and 1.5 A at 12 V, so as you can see 12 V is the limiting factor on the PSU (we need 10 A minimum: 200 W min, 300 W ideal).
If running 16 drives you need 24 A on the 12 V rail.
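The 12 V budget above works out like this:

```shell
# 12 V rail budget from the per-drive figure above (1.5 A @ 12 V each).
drives=16
amps=$(awk -v n="$drives" 'BEGIN { printf "%.1f", n * 1.5 }')
watts=$(awk -v n="$drives" 'BEGIN { printf "%.0f", n * 1.5 * 12 }')
echo "$drives drives need ${amps} A on 12 V (about ${watts} W, before 5 V and margin)"
```

Swap `drives=8` in for my cage and you get the 12 A that makes a 300 W PSU comfortable.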
Enterprise drives cost more per GB for 2 reasons: speed, and life span (quality) running 24/7.
Think of it like this: they last 10× longer than, say, that lame cheap 2.5" SATA drive in your cheap laptop.
The key point on life: 30% duty cycle on consumer SATA drives versus 100% 24/365 duty on enterprise SAS drives, so when you lay the two data sheets side by side, the life spans cannot be compared. (Can't.)
Error reporting is very deep on enterprise drives (systems).
"SAS also maintains its condition reporting to the controller; SATA drives do not have this capability, thus SATA disk degradation and pending failures are not reported."

SAS drives run background tests all the time and report to the controller. The HP controller even flashes an LED to tell you about bad things here, and the HP manager even emails you: "drive 60 is near end of life, replace me now."
No matter whether you have 1 drive or 8 or 16, you get the same level of quality here.
SAS is for quality; they do not need to compete on cheapness, only on life spans (and speeds and monitors). The newer systems run at 12 Gb/s.
SATA is for CHEAP (cost per gigabyte is all that matters in a world of 90-day new-PC warranties).
The counterpoint is: "We use SATA drives and they seem reliable (with repeating 3-5 year upgrades of everything), and we are backed up, so SATA wins for us." Can't argue with that.
Some folks with deeper pockets run SSD RAID (and everyone envies them).
In the end analysis, what you are using your server for (mode) is all that matters (speed, etc.), plus your pocketbook.

Back to my project:
Here is my PSU-to-cage tunnel chamber air seal. My method will always use a mix of parts, done with materials from my attic and the wood and metal lying about.
Photos below show progress. The duct-tape build is a test only, to prove the concept will not overheat: max measured 46°C in a hot room (78°F); in a cool room it will be far less.
What I did is make a plywood base for both the PSU and the cage as one unit (screwed down).
Then I added sides. (One can use cardboard first as a crude test, to be sure the drives run below 60°C; if that fails, put a faster fan inside PSU #2: more CFM, max RPM!)
Using the MSM program, see the drive cage status and temps; this 3-drive RAID setup runs perfectly, and speed tests at 140 MB/s.
Get MSM from Broadcom (finding the correct file there is tricky; a direct URL link to MSM is here). Best is to look for the most current MSM and grab it. The one here supports vast numbers of LSI cards, even the 9211.

HP has the same kind of application, SSA; it is an even smarter program and free too. Why run RAID blind?

Snapshots here of the download picks (photo views): first is my MSM live download, then my thermal testing; last will be far better casings (my tunnel).

First tests after getting the thing to work great digitally, using MSM only.

My costs are small: under $50 total (not counting the RAID card; after all, how can you run a server lacking such a card?). HBA cards or real RAID, up to 8 drives at once.
Using cardboard first to test my theory on airflow, with duct tape.

The thermal issues cannot be ignored: the 55°C rule. If you ignore these rules, the drives will burn up. They get so hot your hand hurts touching them (in less than 1 hour, and for sure when bundled in a cluster).
In my rat-nasty, hot garage.
The LSI SAS card below overheats fast if you fail to use this fan. A real server has no problems here; only cheap home PCs lack proper cooling (though 3 case fans would work here too). I want it to work with the sides off, so...
My test pig, that which I torture.

HDSentinel.exe running below (so far, 1 hour later, 46°C max; in a proper server room at 20°C the temp would be 41°C).
DO NOT OVERHEAT YOUR DRIVES nor the RAID CARD. The boot drive "Kingston" is an SSD; the 3 below are RAID. (Learn that overheating can and will cause damage to the electronics.)
The program is too expensive to buy (IMO); my license runs on only one PC, but it has the only working SMART test that can see through the RAID card's I/O ports.
(They have a list of supported cards.)
I do not like the $20-30 per-PC license rules (but see their 5-PC license). (This graph mode below is peachy keen, no? As are the great stats there too: trends, etc.)

THE END RUN IS HERE: the end of the story and project below.
The purpose
here is to learn what works, and to test and diagnose:
How to make my spare HP RAID cage work in any ALIEN PC or other workstation, even with any alien RAID cards. ("Alien" means non-HP system usage.)
To test and burn in any SAS HDD, or run a full diagnosis on a suspect SAS HDD.
If you use an HBA card you can also do full SMART tests: run Linux disktest and its SMART test, or run Windows CrystalDiskInfo.exe.
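On the full-SMART-tests point: with smartmontools on Linux (an assumption; the text names CrystalDiskInfo and disktest, not smartctl), the commands look roughly like this. Device names and the megaraid index are placeholders:

```shell
# Plain HBA: the SAS drive shows up as an ordinary SCSI device.
smartctl -a /dev/sdb          # health, error counters, temperature
smartctl -t long /dev/sdb     # kick off the drive's long self-test

# Behind an LSI MegaRAID-family card: address the physical drive by index.
smartctl -a -d megaraid,0 /dev/sda
```

The `-d megaraid,N` form is what lets SMART data pass "through the RAID card's I/O ports," as described for HDSentinel above.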
Results: BINGO, it all works perfectly.
MORE TO COME HERE: I am adding my minor PSU modifications next, then I will finish my tunnel using 0.062" FR4 (flame-retardant grade 4) copper-clad raw PCB material.
This material is easy to hot-glue, screw, drill, and even solder at the corners to form any box-like structure.
Not only is that good (not pretty), but it is super good as a Faraday shield, as any HAM radio DIY maker knows. (You can build like crazy with few tools; I first learned this in an engineering lab in 1979.)
See the great job done here >, called Manhattan-style construction. (Back when I was young we called this making SPACE modules, or DEAD DOG style, with chips glued down legs-up (dead).)
My wind tunnel will be much like this.
In my case there are only 3 sides, not 5, and it will be bracket-screwed down to the plywood base.

The last picture is below: ugly green fiberglass; you could buy blue if you want.

WIP (work in progress)

Hint of the day: never underestimate how fast a real RAID card can do WRITES with the full HP (or LSI) write-back cache upgrade and battery (BBU). (Try it and test it before you judge it; it is smarter than you know.)
(Never use WBC with no battery present, or one that's dead, bad, missing, or just discharged; the HP and LSI software managers tell you whether it's OK or not, so just look.)
Not only that, but the cache can be tuned for better write speed or read speed (your choice), with a software lever there: 50%, or to the left or the right. (Mine has 1 GB of space there; I can tune.)
On newer cards the BBU is really a huge 3-farad capacitor (it too must be allowed to fully charge before you go LIVE). When I say battery here, I mean both technologies (battery, or supercap plus flash backup).
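On the LSI side, the battery and cache-policy checks described above can be sketched with the MegaCli utility (assuming it is installed; the binary is often named MegaCli64, and `-aAll`/`-LAll` address all adapters and logical drives):

```shell
# Show the current cache policy of every logical drive.
MegaCli64 -LDGetProp Cache -LAll -aAll

# Enable write-back, but fall back to write-through while the BBU is bad
# or still charging (the safe setting the text argues for).
MegaCli64 -LDSetProp WB -LAll -aAll
MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aAll

# Check BBU state (charge, health) before trusting write-back.
MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll
```

`NoCachedBadBBU` is exactly the "don't run WBC without a good battery" rule, enforced by the card itself.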

The best PDF manual on benchmarks and bottlenecks for RAID.
LSI says:

A caching strategy where write operations result in a completion status being sent to the host operating system as soon as the data is written to the RAID cache.
Data is written to the disk when it is forced out of controller cache memory. (The card's ROC processor does this using no host CPU cycles at all.)

Write-Back is more efficient if the temporal and/or spatial locality of the requests is smaller than the controller cache size. 
(In my case 1GB cache module on card)

Write-Back is more efficient in environments with “bursty” write activity. (less than 1GB)  (Do not forget it can be tuned)

"Battery backed cache can be used to protect against data loss as a result of a power failure or system crash."
(Not having this can be super risky; if you lose power in the middle of a WRITE block, how can that ever be good?)

To all the lucky guys who lost power and lost no data, I say good for you; luck does happen. Sure.
So does texting like an idiot while driving (or walking across the street blindfolded). Why brag?
I say keep write-back cache off if lacking the battery; I see Intel controllers now force this issue (off).
The WBC allows all other modes in the ROC (RAID-on-Chip) to run super fast: when you replace a bad RAID 6 disk, the rebuild does not take days, only hours. (Consider this fact; same with migrations, resyncs, and the other RAID jobs you do with the ROC brain.)
With an HP RAID card you can move all drives from system 1 to system 2; even if you make the bad error of not marking each drive (marker pen: disk 1, bay 2, etc.), the controller sees your error and rebuilds the whole thing back to good. (WBC wins here every time; LSI calls it CacheVault (tm).)
Before switching from a HW to a SW RAID brain, consider all the advantages of HW RAID first; for sure the issue of a virus infecting any SW RAID. (Think long and hard, then go wild.)

The best advice: think about service issues first, not the cost of a silly $100 RAID card. (The one above cost me $40; $100 gets a better card with WBC. The above is just testing on the cheapest card in my spares kit.)
Features like the following (besides vast array types, even RAID 60):
Learn to ask: how can I recover, given all classes of failures or usage modes (1 to 3 drives failing, migrating, or expanding the array)?
  • Online Capacity Expansion (OCE)
    • Online RAID Level Migration (RLM): migrating drives to a new system (even out of order) or to a newer controller; resync speeds (hours, or are DAYS ok?); drive expansion (oops, check that out)
    • Auto resume after loss of system power during array rebuild or reconstruction (RLM)
    • Single controller Multipathing
    • Load Balancing
    • Configurable stripe size up to 1 MB
    • Fast initialization for quick array setup
    • Check Consistency for background data integrity
    • SSD Support with SSD Guard™ technology
    • Patrol read for media scanning and repairing 
    • 64 logical drive support
    • DDF compliant Configuration on Disk (COD)
    • S.M.A.R.T. support
    • Global and dedicated Hot Spare with Revertible Hot Spare support
    – Automatic rebuild (resyncs that don't take DAYS to do)
  • In a recent test, Nytro MegaRAID showed an almost 4:1 improvement in RAID rebuild time over the same workload and configuration without a cache. (Imagine a 6-hour job now taking 24 hours; it can be far, far worse with no cache.)
    – Enclosure affinity
    – Emergency SATA hot spare for SAS arrays
  • more there are lots more features here if you do research on long term service support and (what happens if bla-bla fails? ) seems to me many folks fail to do this job.
  • HW raid cards 2012 can be moved or even changed from Linux to Windows, or reverse, and the array is good to go., (try that with SW RAID !)  The RAID card driver works on all Operating systems; for sure 99%)
  • Last and not the least, how does getting virus work for you , in your Software RAID, that was free and the major reason for using it.
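The 4:1 rebuild-time ratio quoted in the list above is easy to turn around in your head; this little sketch just does the arithmetic (the 24-hour uncached baseline is this article's own example, the ratio is the quoted test result):

```python
# Apply the ~4:1 cached-vs-uncached rebuild-time ratio quoted above.

def cached_rebuild_hours(uncached_hours, improvement_ratio=4.0):
    """Rebuild time with a working cache, given the uncached time and speedup."""
    return uncached_hours / improvement_ratio

print(cached_rebuild_hours(24.0))  # 24 h with no cache -> 6.0 h cached
```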
One of my favorite (2018) guides, not HP or Dell, is the LSI SELECTOR GUIDE seen here; very, very good.
For those who say HW RAID is dead, think again: ever try CacheCade? $250 is not cheap, but it shows you how a small, cheap SSD can add huge gains. (CacheCade is a springboard to future all-SSD arrays.)
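Why one small SSD cache pays off is plain averaging. A minimal sketch, assuming illustrative latencies (~0.1 ms for an SSD hit, ~8 ms for an HDD seek; these are not figures for any specific CacheCade setup):

```python
# Average read latency with an SSD read cache in front of an HDD array,
# CacheCade-style. Latency figures are illustrative assumptions.

def effective_latency_ms(hit_rate, ssd_ms=0.1, hdd_ms=8.0):
    """Weighted-average read latency for a given cache hit rate."""
    return hit_rate * ssd_ms + (1.0 - hit_rate) * hdd_ms

for hr in (0.0, 0.5, 0.9):
    print(f"hit rate {hr:.0%}: {effective_latency_ms(hr):.2f} ms average read")
```

Even a 50% hit rate cuts the average latency roughly in half, which is why one cheap SSD can feel like magic on a hot working set.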

The Pissers and the Moaners: (as seen endlessly on all social networks)
They cry their eyes out when the ARRAY goes dead. (Wow, zero planning for what will happen, for sure!) No? Yes! (As they say, sheet happens.)
  • So they kill power to (or reboot) the server while it is rebuilding the array. Wow. Then they ask, how could I have known? Well, did you lose your MSM (MegaRAID Storage Manager)? You run RAID blind? (Worse yet, MSM is free; why wait, get it.)
  • Not backed up. (OMG, oh my golly, why not?) If you were, you'd not be crying now, unless slow production is the complaint...
  • Not running RAID 6 or 60.
  • Using 5-year-old HDDs or older. Really, why? (If one failed at 5 years, why not expect #2 to fail NOW?)
  • Not using WBC (write-back cache) or CacheVault, then wondering why the rebuild or resync is slow. It will be. As will production, for the same reasons.
  • Not using 15k drives, which rebuild way faster. (Never use SATA anything; the bit-error rates on SATA suck.) 15k means 15,000 RPM drives, OK? (Read the datasheet now, this time more carefully, then build arrays.)
  • Not using real SAS drives, which last way longer (not hype, cold hard facts), and not any drive marked NEAR-anything. Near good? E.g. "NearLine".
  • Not using 2.5" drives, with smaller, lighter (less mass) head arms that run vastly faster due to pure physics. (The three lines above all lower rebuild times vastly, a FACT.)
  • No backup server RAID to fail over to... Oops, plan for failures much? Seems not at all.
  • Now if you were smarter, you'd build a new array, even in the same server, with new 15k 2.5" Enterprise SAS drives; build the array, copy the backup data onto it, and run this array, not the dead one. (Why make this so hard?)
  • Or rob a bank and build a new SSD array of whatever RAID class floats your boat (try R60), or use an Avago CacheCade setup with one cheap SSD + RAID. (Magic, no? Study it.)
  • And last: "I can't afford real hardware like that." I say, why try at all? Why build crap? Why not just use the CLOUD, or any of 10,000 web hosting services, or even use them as FTP servers?
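The SATA bit-error-rate jab above is backed by simple math. Consumer SATA drives are typically specced at one unrecoverable read error (URE) per 10^14 bits read, enterprise SAS at 10^15 or better. This sketch uses those datasheet-style figures (assumptions, not measurements of any particular drive) to show the odds of hitting a URE while re-reading an array during a rebuild:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# reading `tb_read` terabytes during a rebuild, given a datasheet spec
# of one URE per 10**ber_exponent bits read.

def p_ure(tb_read, ber_exponent):
    bits = tb_read * 1e12 * 8            # terabytes -> bits
    p_bit = 10.0 ** -ber_exponent        # per-bit URE probability
    # 1 - (1 - p)^n, computed stably for tiny p and huge n:
    return -math.expm1(bits * math.log1p(-p_bit))

tb = 10  # data re-read to rebuild a modest array
print(f"SATA class (1 per 1e14 bits): {p_ure(tb, 14):.0%}")  # ~55%
print(f"SAS  class (1 per 1e15 bits): {p_ure(tb, 15):.0%}")  # ~8%
```

Roughly a coin flip on the SATA spec versus a long shot on the SAS spec; that is why a URE mid-rebuild is a real planning item, not bad luck.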

Alternate IDEAS on PSUs. (Light duty here; all I care about is lots of air flow to keep my bundles of 4 SAS drives running cool.)
On my $10 HP 300 W PSU, I removed all the extra power wires and drilled extra mounting holes. (Do not bend the PSU; the below is just to make the photos more clear.)
This allows extra air flow (fewer wires) and a faster fan too, if need be. (Note the lack of heat sinks on a low-powered PSU; it relies on free-air rules and air flow.)
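For sizing the spare PSU, the arithmetic is simple. This sketch uses the ~1.5 A dynamic current per SAS drive quoted in the parts list; the 12 V rail assumption is mine, and staggered spin-up (set in the controller) keeps startup surges out of the math:

```python
# Rough 12 V power budget for a bundle of SAS drives on a spare PSU.
# 1.5 A per drive is the dynamic figure from the parts list; sequenced
# spin-up means startup current is not the sizing case.

def psu_12v_watts(drives, amps_per_drive=1.5, volts=12.0):
    """Steady-state 12 V rail load for a given drive count."""
    return drives * amps_per_drive * volts

load_w = psu_12v_watts(8)  # 8 drives -> 144 W on the 12 V rail
print(load_w, "W on 12 V; a 300 W PSU has plenty of headroom")
```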

This is not mine; I used an HP 300 W PSU. The below is for illustrative purposes only. DO NOT EVER GO INSIDE HERE WITH THE POWER CABLE CONNECTED TO THE WALL POWER JACK.

The above comments do not apply to huge-wattage or brand-new PSUs at all, just to older and very small ones. (In fact a new 300 W PSU has even fewer parts and is a best buy for sure.)
Warning: some junk cloned no-name China PSUs use 20-year-old designs with huge parts counts inside, pure junk in our modern, better world. (They run hotter too.)

version 1.  5-12-2018 (success day)