Giving my Optane something to do.

Heya Lizardheads,

Due to software restrictions I haven’t run a Linux desktop in a few years, but this changed a while back and I can do what I want now.
Multi-boot is wasted on me so I’m giving my system a nice openSUSE makeover.

But I can’t decide what the ‘best’ use of my 32GB Optane NVMe module would be and I’m hitting a bit of a wall on reading technical information because of my dyslexia.
So I was hoping you guys could help me out a bit here.

The storage I have for the OS, software, and /home is a 1TB HDD, 2 SSDs (180GB + 120GB), and the Optane module (27GB in total) with 16GB DDR4.
As far as Linux is concerned the Optane is like any NVMe drive, but it reads, writes, and erases in 4K blocks, and it only marks data as written when it has actually been written.

The ideas I had are:

  1. Install the base system on it.
  2. As a bcache against my HDD.
  3. Swap.

Installing the base system on it would be pretty nice and everything on it will respond as if it’s permanently lubricated and set on fire.
But I would have to exclude more than just /home and swap from it, since it’ll fill up quickly with installed software, Btrfs snapshots and whatnot.
And I don’t really have a clue about what I should mount on a different drive to keep it from constantly bursting at the seams.

Using it as a bcache for my HDD is the first thing I thought of.
But is caching against a couple hundred gigabytes really useful for the OS + software?
If I cache against 200GB or less, I might as well use an SSD.
Plus hibernation and all that stuff becomes difficult with bcache.
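For reference, the bcache setup being considered here would look roughly like this. The device names are assumptions (/dev/nvme0n1 for the Optane, /dev/sda for the HDD), so adjust them before trying anything:

```shell
# Assumed device names (adjust!): /dev/nvme0n1 = Optane, /dev/sda = HDD.
# WARNING: make-bcache reformats both devices.
sudo make-bcache -C /dev/nvme0n1 -B /dev/sda

# The combined device shows up as /dev/bcache0; format and mount that:
sudo mkfs.btrfs /dev/bcache0
sudo mount /dev/bcache0 /mnt

# Writeback caching is faster than the default writethrough, but riskier
# if the cache device dies:
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
```

The hibernation complication mentioned above comes from the resume image having to be found on the composite device at an early boot stage.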

And the last one, swap.
It would make swapping amazing but do I need 27GB of the stuff?
And doesn’t that sound like a bit of a waste of the fastest storage I have?

Nowadays, the system is read from disk during boot but thereafter runs almost entirely in RAM, so I doubt the best use would be for your system files.

Swap would be useful only if you run applications which put pressure on your RAM.
As for caching your personal files (/home): if you don’t do a lot of active reads/writes, it won’t have much effect.

So, I guess your fast storage is a waste…
Kidding!

If you do anything that involves heavy data processing where the data can’t all be stored in memory, your fast disk storage would be fantastic.
Caching implies reading the same data multiple times; if you run an application that does this <and> writes to disk, you’d see a big difference.

One usage I might look into is recent chess engines (the world champions Stockfish and Lc0 are both available on openSUSE). They are often configured to do Tablebase lookups, which are exhaustive analyses of endgame positions; an example is when there are only 6 pieces left on the board. If a Tablebase lookup finds a certain result (win, lose or draw), then the players can agree to that result instead of playing the game out to the bitter end. Although the main engines typically run entirely in RAM, special options like opening books and Tablebases are typically read from disk, and both can be quite large.

But, running chess engines is mainly for chess wonks.
Unless you might be interested in simply getting some practical exposure to how a futuristic AI application is set up, its parts and how it runs (Lc0).

TSU

Scratch space is kind of up there with using it for general swapping, but it could be useful in some cases.
I took a look at Krita and it does use scratch space under the term ‘swap directory’, but I won’t be needing that much space for that.
Interestingly enough, I’m actually at the start of learning basic machine learning and data analytics with R, but I don’t have a clue how much temporary space that will require yet.
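If R ends up being the scratch-space user, pointing its temporary directory at the Optane is a small job. The mount point /mnt/optane below is just an example path:

```shell
# Assuming the Optane is mounted at /mnt/optane (example path):
mkdir -p /mnt/optane/tmp

# R picks up TMPDIR at startup for tempdir() and thus most scratch I/O;
# add this line to ~/.Renviron to make it permanent:
echo 'TMPDIR=/mnt/optane/tmp' >> ~/.Renviron
```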

It’s just too bad I can’t put the base system, Xorg, KDE, and my browsers on the Optane and make zypper install the rest to another drive.
At least not without a huge messy hack like symlinking a bunch of directories all over the place, or finding a way to reliably install only my non-core software with AppImages or something, is there?

I used the 27GiB Optane as system drive for a bit
The restrictions made me kind of nervous and I was constantly checking free space (the wrong way, apparently).
But it wasn’t too bad; compsize (a tool for Btrfs) was showing ~14GiB to 18GiB of used space, excluding swap, /home and /boot.
I did forget about Btrfs snapshots, though, and found myself in a system that had a hard time booting because it had no free space; luckily it was easy to fix.
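For anyone hitting the same wall: plain df is the misleading way to check free space on Btrfs, since snapshots and metadata aren’t visible there. A sketch of the checks that show the real picture (the snapshot number is an example):

```shell
# Btrfs' own tools show what data, metadata and snapshots actually occupy:
sudo btrfs filesystem usage /
sudo compsize /            # actual vs. compressed usage (compsize package)

# Old snapshots are a common culprit; list them and delete by number:
sudo snapper list
sudo snapper delete 42     # example snapshot number
```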

I also tried using the Optane as a caching device with an HDD as a backing device using bcache.
This worked pretty great overall, except that applications kept freezing for a couple of seconds, like when I right-clicked somewhere, opened a menu, or changed tabs, and I constantly had to wait longer for larger programs like Firefox to respond.
My guess at the time was that the system was waiting for the HDD to spin up and/or find certain information on it.
I heard my HDD spinning up sometimes when this happened, but I was getting a little too annoyed and didn’t go into any research about what the actual culprit was.

So I’m running my system on a regular SATA SSD for the time being, and the Optane is on swap duty for now.
Which never gets used so far.
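For completeness, putting the Optane on swap duty amounts to this (device name assumed, adjust to your system):

```shell
# Assumed device name for the Optane module.
# WARNING: mkswap wipes the device.
sudo mkswap /dev/nvme0n1
sudo swapon --priority 100 /dev/nvme0n1   # prefer it over any slower swap

# /etc/fstab entry to make it stick across reboots:
# /dev/nvme0n1  none  swap  sw,pri=100  0  0
```

The priority matters if other swap areas exist: the kernel fills higher-priority swap first, so slower devices only get used once the Optane is full.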

I could technically try to bcache a SATA SSD with the Optane, but that’ll give me speed increases I don’t really need, at best.

With a small amount of disk space it is better to use ext4 or XFS instead of Btrfs or ZFS.
It is possible to distribute system folders across more than one drive, so you can put some files and folders on the Optane and the rest on other disks.

True, you could re-mount some of the highest-level directories in your root partition (e.g. /srv, /opt) to a different drive, depending on your objectives. The directories I listed are normally used for User-mode apps… i.e. applications that are installed and not part of your core system. You can reverse my recommendation if you intend instead to re-deploy parts of your core system. Although re-mounting can be done manually, your YaST Partitioner module should be able to do this, too.
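As a sketch of what such a re-mount looks like done manually, assuming /opt is being moved to a partition on one of the SSDs (/dev/sdb1 is an example name):

```shell
# Example target partition; formatting destroys its contents.
sudo mkfs.ext4 /dev/sdb1
sudo mount /dev/sdb1 /mnt
sudo cp -a /opt/. /mnt/          # copy the existing contents over first
sudo umount /mnt

# Find the UUID and add a matching line to /etc/fstab:
sudo blkid -s UUID -o value /dev/sdb1
#   UUID=<the-uuid>  /opt  ext4  defaults  0  2
sudo mount /opt
```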

I’d have to think further about whether there is any real difference between using ext4 or XFS over Btrfs for this purpose… Both deploy within a partition similarly, but Btrfs deploys a “volume” layer to manage structure where ext4 and XFS won’t by default… unless you also deploy LVM. My initial thought is that it won’t likely make any difference in reads/writes and I/O.

But, consider what I said in my first post… Nowadays, the disk is read heavily on boot, but not often thereafter by a modern running system… While a User application <might> read/write the disk heavily depending on what the app is and does. It’s ironic that disk capacity has gotten so cheap but at the same time ways have been found to minimize the effects of disk I/O to improve system performance.

TSU