BTRFS and ZFS file systems.

As I am about to put together some form of file server, mostly for secure storage, I have looked at two free NAS setups, both of which use ZFS. The features in ZFS sound rather attractive. I then remembered noticing that 12.3 offers the facility to install BTRFS on everything. From what I could find, it offers the same sort of advantages as ZFS, so it is attractive even on a desktop machine. Info is in some ways scant, so I have a few questions that maybe someone can answer.

Is there a walkthrough on installing and using ZFS on openSUSE?

I’m not entirely clear on what BTRFS facilities are currently available in openSUSE.

Are there any walkthroughs on installing BTRFS on some or all of a current installation?

Both of these systems self-heal. That makes sense, but the mechanisms mentioned work at the point when data is read. So I assume both have some method of checking the entire disc storage set-up as and when required, e.g. weekly, monthly, etc., as some of the data on them might not be read for a long, long time. Maybe they check continuously? I’m not sure I like the sound of that, as it might encourage disk wear.
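For what it’s worth, both filesystems do offer an on-demand check of the whole storage set-up (a “scrub”) rather than relying only on reads. A sketch of the commands, assuming a ZFS pool named `tank` and a BTRFS volume mounted at `/data` (both names are just examples):

```shell
# ZFS: walk every block in the pool, verify checksums, and repair
# from redundancy where possible; runs in the background.
zpool scrub tank
zpool status tank        # shows scrub progress and any errors found

# BTRFS: same idea, also a background operation.
btrfs scrub start /data
btrfs scrub status /data # progress and corrected/uncorrectable counts
```

Both back off in favour of normal I/O, so the usual approach is a weekly or monthly cron job rather than continuous checking, which should also limit disk wear.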

Interested in any comments on this general subject, but maybe not on the absence of repair software. Trouble is, given the mechanisms these systems use, it is doubtful that a breakage due to, say, a power outage could ever be repaired. The other point is that this cannot happen in practice due to the way new data is written. Bugs are something else. I lost a disk once due to a mod to ReiserFS: someone decided to check unused disc space for integrity, got it completely wrong on my SCSI disk, and trashed it.


You can install BTRFS just by selecting that format. But BTRFS is still considered beta, and I for one would not put it on a production system.

ZFS is not directly supported. But I believe there may be a kernel module that supports it. I think the problem is the licence ZFS is under, which does not mesh with the GPL.

There is a sticky in this very forum, so I guess you noticed it. But to be sure:

The problem is slightly different: licensing. Some of these aspects have been discussed on here previously, but one of the ZFS links given seems to be dead. BTRFS info is sketchy, but I understand it is in SUSE Linux Enterprise. Not sure how these things stand now, but my first intro to SUSE was a paid copy from PC World, a long time ago; the only difference was support. lol! When I used the support before installing I did get some pointers, but was told that it only covered a machine with one hard drive, one floppy and one CD drive at most. I haven’t run a machine like that since 386s and compressed data partitions.

I had my tongue in cheek re power outages. I wrote journalled storage software for financial systems a long time ago; there is always a point where, if the power goes, it doesn’t work out. My main interest is the added data protection these methods offer.

The more I look at these techniques, the more I wonder if ext4 is a better bet, as at least it checks the journal and so has a fair chance of detecting that it’s corrupt. That leaves me wondering whether, say, Linux soft RAID 5 has a utility to check and repair data that could be run periodically. That would get round the absence of “self-healing” to some extent.


There is no magic bullet. Unexpected power/hardware failure in any system can lead to corrupted data. Even RAID can leave things undone and in an unknown state. Best practice says, in order of importance: back up regularly; have a backup power supply and a method of graceful shutdown; use RAID level 1 or 10, preferably real hardware, not fake or software RAID.
That should protect data about as well as can be done, unless a meteorite hits the machine :)

It looks like ZFS failed to make it into 12.3 and is being worked on again in Factory. BTRFS only supports RAID 0, 1 and 10, so not much interest. It also looks like BTRFS has some way to go to match ZFS, but there may be more to it than I have managed to find.
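For illustration, those BTRFS RAID profiles are chosen at mkfs time. A minimal sketch, assuming two spare devices /dev/sdb and /dev/sdc (example names only, and this destroys their contents):

```shell
# Create a two-device BTRFS filesystem with metadata (-m) and data (-d)
# both mirrored (RAID 1). The raid0 and raid10 profiles work the same way.
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mount either member device; BTRFS assembles the whole filesystem.
mount /dev/sdb /mnt
btrfs filesystem show /mnt
```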

I’ve used all sorts of RAID for a long time. I much prefer a set-up where a single disc failure can simply be replaced and rebuilt, along with parity checks. I also believe it’s important that people realise that all RAID makes use of software, be it partly or wholly in the main CPU. There is no real reason why the same checks can’t be built into so-called fake RAID as into those used at the expensive end of the card market. The only real difference is speed.

Thanks I had read the sticky but it didn’t answer my questions.


Just in case someone else is interested in this area: ext4 seems to be the best option at the moment, along with periodic mdadm scrubbing to detect and fix errors that would be self-healed in ZFS, for instance. It seems scrubbing can be done while the disks are in use, as it will back off if needed. Google brings up plenty of info on starting and stopping scrubbing.

mdadm doesn’t parity-check on reads. Some RAID cards don’t either, and some add it as an option.
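A sketch of how an md scrub is driven, assuming the array is md0 (substitute your own array name; needs root):

```shell
# Kick off a check pass: md reads every stripe and verifies parity,
# backing off when the array is busy with normal I/O.
echo check > /sys/block/md0/md/sync_action

# Watch progress, then see how many mismatched stripes were found.
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt

# "repair" rewrites parity where it disagrees with the data;
# "idle" aborts a running pass.
echo repair > /sys/block/md0/md/sync_action
```

For the periodic part, a cron entry along these lines (schedule is just an example) does the weekly run:

```shell
# /etc/cron.d/mdadm-scrub — check md0 every Sunday at 02:00
0 2 * * 0  root  echo check > /sys/block/md0/md/sync_action
```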


On 2013-07-05 23:56, John 82 wrote:
> Just in case some one else is interested in this area EXT4 seems to be
> the best option at the moment along with periodic mdadm scrubbing to

If you people are interested in btrfs, one of its developers is waiting
for comments on a thread at the beta forum here.

Cheers / Saludos,

Carlos E. R.
(from 12.3 x86_64 “Dartmouth” at Telcontar)