Filesystem Hierarchy Standard - Arrggghhh

I have not bothered to search the forum for similar posts - this is the soap box, and I want to stand on it for a minute!

First, I am fairly indifferent to operating systems. As long as I can achieve what I have to then I remain a happy teddy, if not - then not.

So, to get to the point and/or the gripe. This is where I bite the bullet.

The most (absolute big time number one) incredibly annoying aspect of ‘Suse’ (all versions, since forever) has to be the way ‘they’ (by that I mean both the OS developers and the packagers) persist in virtually (or physically – but more of that later) exploding applications across the file system structure.

So, before you (the reader) rip me to pieces, I have to say that I am happy (and I mean really happy) with every other aspect of Suse – which is why I use it! In fact I use it on 11 fully operational server systems. Please read on.

So why the gripe? I often have to access Suse systems installed, configured and administered by other users in order to configure our JSP-based applications on JBoss, Tomcat, etc. This is where it all goes pear-shaped!

These server applications (JBoss, Tomcat, etc.) come as standard in a very concise, very usable, very navigable and very easy to configure file structure. Install them on Suse using a package and this perfectly sane structure is utterly destroyed – files are physically moved to various locations across the file system as defined (no, recommended) by FHS-2.3 (http://www.pathname.com/fhs/pub/fhs-2.3.html). In fact, to go further, Suse will do this to just about every package you ever try to install using the standard mechanisms.

It is utterly beyond me (and others I am sure) as to what this is supposed to achieve! Now hang on, to be fair, some packagers even go as far as to create this ‘sane’ application structure as an array of symbolic links in /usr/local or some other reasonable location. I am guessing this is to actually give the user an ‘impression’ of a very concise, very usable, very navigable and very easy to configure file structure. However, try accessing that ‘impression’ with something like WinSCP, Putty or similar and you will no doubt get as frustrated (er, I actually mean as angry!) as me.

Now here is ‘my’ (possibly naive) solution. On my machines I ‘always’ obtain the latest tar.gz. I then unpack it where ‘I’ want it. Now here’s the magic – I then create several symbolic links ‘to’ the application files ‘at’ locations ‘suggested’ by FHS-2.3. Accessing the application files remotely is a doddle, copying files remotely is a piece of cake, maintaining the code base is child’s play. Anyone else who accesses my systems can find files (links) at the ‘usual’ Suse locations, but they also quickly realize that they only need to look under ‘one’ directory to find ‘everything’ and not all over the file system!
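
For what it’s worth, a minimal sketch of that routine, using Tomcat as the example; the /srv paths, the version number and the link targets are only my illustration, not a prescription:

    # Unpack the upstream tarball in one place of your own choosing
    # (the /srv location and the Tomcat version here are hypothetical):
    tar -xzf apache-tomcat-6.0.18.tar.gz -C /srv
    ln -s /srv/apache-tomcat-6.0.18 /srv/tomcat

    # Then scatter nothing; just point the FHS-style locations back at it:
    ln -s /srv/tomcat/conf /etc/tomcat
    ln -s /srv/tomcat/logs /var/log/tomcat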

I have been doing this for years with absolutely no ill effects. I have griped about it before. And I will continue to gripe about it until Suse realizes this is possibly ‘the’ number one reason Linux administrators opt for ‘other’ distributions that simply do not persist in ‘sticking to their guns’ on this matter. In fact the ‘others’ understand that virtualization or abstraction of an application is actually a hindrance to the wider acceptance of Linux based systems which is why they ceased to do it many years ago.

Now see me jump off the soap box. Thanks for your time.

I currently administer a few hundred SuSE servers (that’s real servers, not workstations) and I’m having no issues with the layout.

But then again, I know where to look…

Hmm, did you read past the ‘Please read on’?

I guess I have to ‘bump my gums’ somewhat now? I personally run 11 servers (you know, like real servers), hosting several thousand instances of our applications. I have administered well over a thousand servers (also real, non-fairyland servers) running our applications at municipalities across Europe. I think I’ve maybe picked up one or two of the ‘common’ locations for files along the way.

I don’t have a problem? I have a gripe. There is a difference.

The gripe, to put it more simply, is this: why ‘explode’ and then link when you can just link in the first place?

This ‘is’ the way other distributions have been working for a long time now due to that simple fact.

The virtual ‘locations’ are fine - but the physical ‘layout’ is abysmal!

Actually, some of the “directories” in SUSE’s Java packages are really symlinks. SUSE did you the favour of preserving the old layout with symlinks, so you can still find things in the same place if you want, e.g. ~tomcat. There is a good reason for some of those relocations: logs should go into /var/log, editable config files into /etc, jars into /usr/share. I don’t have issues with the layout. Directory assignments are not meant to be set in stone by the upstream source; in fact the packages allow you to edit these locations.
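
If you want to see exactly where a package has put things, and which of those “directories” are really links, the package manager will tell you; the package name below is only an example and may differ per release:

    # List every file the package installed, wherever FHS placed it:
    rpm -ql tomcat6

    # Show which entries under the old-style home are actually symlinks:
    ls -l /usr/share/tomcat6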

Personally I think the *nix directory structure needs to be revamped for the 21st century.

The file structure was designed around the concept of multiple users running dumb terminals off a server, which still exists in some environments. That’s fine.

But why have /opt and /bin and then /sbin?

Does /media need to be separate from /dev? Where does /mnt fit in there? Every device should be in /dev, and temporary mounts should go in either /media or /mnt, but we don’t need both.

Could /lib live under /bin?

/var/tmp should just be a part of /tmp

Most of /var could go into /usr or /home really.

/usr and /home are an interesting case, in that /home often exists as a separate partition and is good for backing up data. But most of the stuff under /usr fits in other areas. For instance, binaries should be in /bin. /usr/bin is extraneous. /usr/local/bin is extraneous. I understand you may want multiple versions with different paths, but this isn’t the solution.

How about something crazy like this?

/
/boot
/bin /bin/lib /bin/local /bin/user1 /bin/user1
/dev
/home /home/user1 /home/user1/mail /home/root
/share /share/logs /share/spool
/media
/tmp

Doesn’t that make more sense? GoboLinux, the alternative Linux distribution, is also interesting to look at.

/opt is for third-party packages, /bin is for binaries normal users need, and /sbin for system administration binaries. /opt is not a directory of executables, so you are comparing apples and oranges.

Does /media need to be separate from /dev? Where does /mnt fit in there? Every device should be in /dev, and temporary mounts should go in either /media or /mnt, but we don’t need both.

Apples and oranges again. /media contains mount points, /dev contains devices. I don’t see much use for /mnt; it seems to be a hangover.

Could /lib live under /bin?

No, /bin contains binaries, /lib contains libraries.

/var/tmp should just be a part of /tmp

Sometimes it’s the other way around: /tmp is sometimes a symlink to /var/tmp.
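
Which way round it is on any particular box is easy to check; the output obviously varies between distributions and releases:

    # Show whether /tmp or /var/tmp is a symlink on this system:
    ls -ld /tmp /var/tmp
    readlink /tmp || echo "/tmp is a real directory"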

Most of /var could go into /usr or /home really.

Wrong. /var contains stuff specific to a machine, like log files and data files. /home contains user directories, which could be exported and shared.

/usr and /home are an interesting case, in that /home often exists as a separate partition and is good for backing up data. But most of the stuff under /usr fits in other areas. For instance, binaries should be in /bin. /usr/bin is extraneous. /usr/local/bin is extraneous. I understand you may want multiple versions with different paths, but this isn’t the solution.

You forget /usr/share, /usr/lib and /usr/include, to name some.

/usr/local is for local site enhancements. There is some overlap with /opt, but there it is.

There is a small case for merging /bin and /usr/bin; however, on some systems you want a small /bin until you can mount /usr (and with it /usr/bin), sometimes over the network.
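
In practice that split looks something like the following /etc/fstab line; the server and export names are invented for illustration:

    # A minimal root filesystem carries /bin, /sbin and /lib for early boot;
    # the bulk of the binaries arrive once /usr is mounted from a file server:
    fileserver:/export/usr   /usr   nfs   ro,nolock   0 0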

It seems you haven’t read the rationales behind the FHS.

Just reducing the top-level entries is no great simplification. And what’s the big deal anyway? The typical / contains around a dozen entries, and most of the time the user doesn’t see it.

However, these are things I have done in the past to reduce the number of partitions.

Moved /usr/local to /home/local and made a compatibility symlink. Assessment: shrug; it might make it easier to back up changes to a machine, but then I back up /usr/local anyway.

Moved /usr/local to /opt/local. Similar to above.

Moved /tftpboot to /var/tftpboot and later /srv/tftpboot and made a compatibility symlink. I believe this is the way it should be done, and in fact I think Kiwi-LTSP does this.

Moved /var/www to /home/www, or was it /usr/local/httpd to /home/httpd? This was in earlier versions of SUSE. Not needed now, since the creation of the /srv subtree.
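
The pattern behind each of those moves is the same; using the /var/www example (and assuming the services using that path are stopped first):

    # Relocate the data, then leave a compatibility symlink at the old path
    # so existing configuration and habits keep working:
    mv /var/www /home/www
    ln -s /home/www /var/www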

In short, the current hierarchy, while not ideal, has evolved over many years and is not a big irritation for me. Just look at what a package contains and you will see much more complex directory trees.

If you are into alternate arrangements, I’m told the BSD ports system is another approach, where packages are unpacked into one subtree and then symlinks are made into that subtree. Sort of what the OP was looking for. Maybe he’d prefer BSD. :-)
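
Whatever the exact tool, the underlying idea is nothing more than a loop of symlinks out of one package subtree; a hand-rolled sketch, with the /srv/pkg path and the package name made up:

    # Everything for the package lives under one subtree, and only symlinks
    # are placed into the shared locations:
    for f in /srv/pkg/foo-1.0/bin/*; do
        ln -s "$f" /usr/local/bin/
    done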

enderandrew wrote:
> Personally I think the *nix directory structure needs to be revamped for
> the 21st century.
[…]
> /
> /boot
> /bin /bin/lib /bin/local /bin/user1 /bin/user1
> /dev
> /home /home/user1 /home/user1/mail /home/root
> /share /share/logs /share/spool
> /media
> /tmp

“Those who don’t understand UNIX are condemned to reinvent it, poorly.”
– Henry Spencer

See http://en.wikipedia.org/wiki/Unix_philosophy for more smart quotes
about why things are the way they are.

Personally, the FHS is logical to me; I have no idea what other
people’s problems are with this scheme.
If you (or OP) need some documentation about the directory structure:
see hier(7).

/opt is unnecessary. Binaries can go in /bin just fine. The “third-party” thing is silly, in that back in the Unix days most of your binaries were either part of the system or something you coded yourself (/usr/local/bin, anyone?). On a modern Linux system, you could argue almost all the binaries are either part of the distro or all third party.

Some people put KDE in /usr and some in /opt, so RPMs aren’t compatible when they otherwise could be.

/mnt and /media could easily be combined.

I’ve read various documents on the history and concepts behind these folders, but the folders are still redundant.

And I didn’t say libraries go directly in /bin, but rather that /lib could be a subfolder of /bin, given that libraries are technically binaries.

/mnt is what /media is now, per se… What this split does is differentiate things: external HDDs, CD burners, thumb drives, cameras, etc. get mounted under /media, while you can still use /mnt to mount network file systems. See, they did something right, I think…
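
In day-to-day use that split looks something like this; the device name and the NFS export are hypothetical:

    # Removable media gets (auto)mounted under /media ...
    mkdir -p /media/usbstick
    mount /dev/sdb1 /media/usbstick

    # ... while /mnt stays free for ad-hoc mounts such as network shares:
    mount -t nfs fileserver:/export/data /mnt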

Why can’t we just call / C:\

Given that most new systems don’t have a single floppy drive, let alone two, do hard drives really need to start at c:\ anymore?

Now I’m getting all nostalgic for installing games from 7 floppies and singing the praises of Norton Commander on DOS 3.3

Not everything in /opt is a binary. Vendors usually give you lots of supporting files.

Some people put KDE in /usr and some in /opt, so RPMs aren’t compatible when they otherwise could be.

Now KDE is in the main directories; it used to be in /opt. Why would you want to mix RPMs from different distros? That’s a recipe for trouble. (Now that’s a more serious compatibility issue, but a different soapbox.)

And I didn’t say libraries go directly in /bin, but rather that /lib could be a subfolder of /bin, given that libraries are technically binaries.

Technically no, not everything in /lib is a library. There’s /lib/modules for example, which contains kernel modules, which are not libraries, no matter how you contort your definition.

And so what if you have managed to reduce the entries in / down by two or three. BIG DEAL.

If you really want to make a difference, propose something really radical. E.g. Mac OS X renamed things to /Library, /Library/WebServer/Documents, and so forth. Sort of what GoboLinux does too. It’s a testament to Unix’s flexibility that you can rebuild the software to accommodate this.

/opt is like C:\Programs. It is for ‘big’, complete packages: things like Firefox, OpenOffice, the Schily Tool Suite, etc. Unfortunately most people just don’t get it and throw everything, split up, somewhere into /usr.

Actually the Tomcat directory structure is somewhat problematic. You can see what the intention of the package maintainers was, but if they change a working structure it should be done in such a way that it still works afterwards!
When using the SUSE Tomcat and Java packages, you get the feeling that it’s less work to manage the dependencies yourself by just installing the binary packages from the Apache/Sun pages, which in a way defeats the purpose of using a packaging system…