I’m new to Linux and am trying to set up a video editing workflow. I have a new, capable PC build, am running Leap 42.2, and have an older RAID enclosure with 5 separate terabyte drives in 5 bays (Firmtek SeriTek 5PM). The enclosure has a single cable in/out.
I am under the impression that I have two options: either control and set up the RAID in software (although I may need a dedicated PCI card installed), or, and I gather this is the better option, buy and install a PCI hardware RAID controller. I’m looking for loose recommendations, and have the following questions:
-Will I be able to use Linux on a daily basis WITHOUT powering up and booting the RAID? I would very much like the option of powering up the RAID only when needed, once the OS is already up and running. My understanding is that some systems actually need the RAID powered up before the PC boots, and shut down after the PC shuts down. Is this correct?
-Specific recommendations are welcome, but if they aren’t allowed by forum policy I understand. I haven’t started researching this yet (still learning this new system), so links are welcome.
-A rough idea of costs would be welcome. I figure my budget is around $300-400. I may need to try to set up a software controller…
First off, it all depends on the exact hardware; there are no general rules. Maybe the enclosure has a built-in RAID controller, maybe not. If so, the RAID array would just look like a single drive to the OS, and there would be no need to do anything special to use it.
Also, again depending on the hardware, you can configure the mount so the filesystem is only mounted when it is actually accessed. That way the RAID can be off at boot; once it is powered on, the first access will mount it. Again, the details are important.
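For example, on a systemd-based release like Leap 42.2, an /etc/fstab entry along these lines defers mounting until first access (the UUID, mount point and filesystem here are placeholders, not your actual values):

    # Don't mount at boot; let systemd mount it on first access
    # noauto + x-systemd.automount = on-demand mounting
    UUID=xxxx-xxxx  /mnt/raid  xfs  noauto,x-systemd.automount,x-systemd.device-timeout=10  0  0

With that in place the enclosure can stay powered off at boot, and nothing happens until something actually touches /mnt/raid.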
I had an interest in units like the Firmtek some years ago. My understanding is that they offer SATA expansion rather than built-in RAID. I saw claims that Linux supports this, but no specific details. If you already had the type of card these units need fitted into the PC, the kernel boot-up log would show you whether Linux supports it. I didn’t have the card or the expansion box, so I decided not to risk it. It may be possible to find out more on kernel-related sites, but I didn’t have any luck. In other areas, knowing which chip is used can be a big help when searching for information.
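If you do get hold of the card, a couple of standard commands will show what chip it uses and whether the kernel picked it up (the grep patterns here are just a starting point):

    # Identify the SATA controller and its PCI vendor:device IDs
    lspci -nn | grep -iE 'sata|ahci|raid'

    # Check whether a kernel driver bound to it and found the ports
    dmesg | grep -iE 'ahci|sata|pmp'

The vendor:device ID in square brackets in the lspci output is usually the best search term for checking driver support.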
In terms of having a unit plugged in or not, I do suspect that is possible. Some part of the system remembers mounts and doesn’t care if they come and go. This came as a bit of a surprise to me when using disks in a USB dock with KDE’s hot-plug automounting. KDE assumes that whatever is plugged in will be opened in an application, offering a file manager or a photo app; normally it would be used that way, and then the pop-up used to allow safe removal. If the icon is clicked before opening it in an application, though, it functions as a mount instead, and some part of the system remembers that mount. I expected it to allow safe removal as it usually does. I eventually removed these mounts using SpaceFM, an unusual file manager: it shows everything, including other bootable systems on the machine, and mounts those to /media.
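If you end up with one of these leftover mounts, it can also be inspected and cleared from a terminal; a sketch, with /dev/sdb1 standing in for whatever the disk actually is:

    # See whether and where the partition is mounted
    findmnt /dev/sdb1

    # Unmount it via udisks (the same machinery the desktop uses)
    udisksctl unmount -b /dev/sdb1

    # Optionally spin the drive down so it's safe to unplug
    udisksctl power-off -b /dev/sdb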
In the end I fitted a NAS instead. At the time I needed to make it available to Windows users as well, so I set it up to run just CIFS. It would run NFS concurrently too, but that slowed it down, as the manufacturer hadn’t included much processing power. They all seem to use soft RAID via a stripped-down version of Linux. The other reason I used CIFS was that, long ago, it was one method of running diskless Linux PCs connected to a server; it doesn’t need Samba or anything else. I wanted to keep applications on the NAS as well, and be able to use them on data on the NAS or my workstation. The CIFS utility source file needed to be edited and recompiled to allow this, but I understand the same thing can be done with NFS. Security concerns may have changed that, though, as they did with CIFS. I had to pester Novell’s support people to fix the problem with CIFS. The changes needed may still be shown in the source file.
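For reference, a basic CIFS mount from a Linux client looks something like this; the server name, share and credentials file are made-up placeholders:

    # Needs the cifs-utils package installed
    sudo mount -t cifs //nas.local/video /mnt/nas \
        -o credentials=/etc/samba/nas.cred,uid=$(id -u),gid=$(id -g)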
My recollection is that on openSUSE 12.3 the system didn’t care whether the NAS was available unless I logged into it.
The NAS proved to be a bad idea. Disk life, even with RAID-rated disks, was appalling. I’d mostly put this down to inadequate cooling and a lack of air filtering. Also, from many years of PC use at work, I really do wonder about the wisdom of not installing disks horizontally. Choose carefully if you buy a NAS or even a SATA expansion box. I do have a dog-hair and dust problem, though. One option may be to build a PC and install FreeNAS, but I’m not sure what state that is in now. Another is a file server. I do have a small low-powered HP server box, but decided not to use it because of data rates over Wi-Fi. I may do so if I switch entirely to flash drives in my PC, or look at cloud storage, but my main interest is redundancy. Currently I run a mirrored RAID in my PC: 1 TB, but it could be changed to 2 or 4.
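Setting up that kind of in-PC mirror with Linux soft RAID is simple enough; a minimal mdadm sketch, assuming two empty partitions (the device names and mount point are placeholders):

    # Create a two-disk RAID1 (mirror) array
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # Put a filesystem on it and mount it
    sudo mkfs.xfs /dev/md0
    sudo mount /dev/md0 /mnt/mirror

    # Watch the initial sync and check array health
    cat /proc/mdstat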
Over the years I have run various RAID setups, including hard and soft. To be honest, I don’t think there is that much difference in practice, even when moderately high-end hardware RAID controllers are used; not from a single-user setup point of view, anyway. The PC bus rates etc. are more important.
If you do find out any further information about SATA expansion boxes, it would be worth adding it to this thread. There is very little information around on the subject.
A bit OT, but my experience was exactly the same with a cheap D-Link 2-bay unit. One bay stopped working in 13 months (the warranty was 12); it was not too loud initially, but the two small coolers (more or less 5 or 6 cm in diameter) soon developed an annoying whine. It was a relief to get rid of it; now I have a full-fat 6-bay desktop working as a server, larger power draw notwithstanding.
No more NAS for me; it’s the same as with any other appliance: sooner or later the vendor will stop supporting the firmware, and then you are in a dinghy without a paddle. A desktop server has a much longer useful life and can be repaired and upgraded very easily.
(and I just noticed this may very well apply to IoT :/)
Recently I started trying disk-expansion (no RAID) “docking stations” for my personal use and have been really pleased.
You can Google for images; you’re looking for ones where bare drives are inserted vertically.
You can’t beat that for air circulation and ventilation, which have always been a big issue with any kind of enclosure, so removing the “enclosure” is a really interesting idea.
And you can’t beat the access: just reach over, pull a drive out, and insert a replacement. The docking station detects when a drive is removed and automatically disconnects, then reconnects… so you have “hot-pluggable” support.
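On the Linux side this generally just works with AHCI controllers, but you can also tell the kernel explicitly; a sketch, with /dev/sdc and host2 as placeholders for your actual drive and controller port:

    # Flush and detach a drive before pulling it
    echo 1 | sudo tee /sys/block/sdc/device/delete

    # After inserting a new drive, force a rescan of the port
    echo '- - -' | sudo tee /sys/class/scsi_host/host2/scan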
In general, these devices are relatively simple internally, so I hope for, and expect, long problem-free use.
As for going beyond mere expansion, I’ve had mixed results… My primary experience has been with inexpensive Mediasonic products. The models that support RAID employ their own hardware bridging device, so unlike simple expansion units they don’t rely on the main PC for disk management. My main complaint about Mediasonic has been their drivers, which are only barely good enough to work and aren’t updated or improved. One big problem for me was that, whether or not you configured RAID, every disk had to be the same size (or you’d waste unrecognized disk capacity). So, although you can get these for low prices, you won’t always get performance and reliability.
As for true NAS (commercial products which are sold as NAS) for small/medium business (not the really high-end gear like $24,000 NetApp appliances), I’ve never really had any complaints, but in general I’ve always used relatively high-end units, e.g. Buffalo, which cost much more than re-purposing a Linux box. I justify the price of these commercial units by their compactness, “ready to use off the shelf” convenience, and reputation.
When you buy the higher end like what I describe, it’s like buying a commercial server instead of a white box: you’re paying for better components, better design, and proven reliability.
The relatively few times I’ve deployed or used these, the NAS itself has always exceeded the lifetime of the disks, and I’ve never had the disks fail prematurely.
IMO you won’t likely get a truly reliable, no-problem commercial NAS for less than about $600 (list, 4 bays, no drives); probably $600-1000 list fully loaded.
I guess you get what you pay for, at least with this kind of device…