Partitions on SSD and HDD

Hi,

I’m currently planning my new desktop computer. It will have a 250 GiB SSD and a 1 TiB HDD.
I want to set up a Windows 7 Pro / openSUSE 13.1 (or 13.2) dual boot (no UEFI boot). Both systems should be installed on the SSD.

My plan for Windows is to create a 100-120 GiB C: partition on the SSD and a 300-400 GiB D: partition on the HDD.

But even after reading lots of threads (here and elsewhere on the internet), I have not reached a final conclusion for the Linux partitions.

So far, my current ideas/plans are:

  • Place the swap partition on the SSD. With 16 GiB of RAM it should rarely be used, if at all.
  • Use the rest of the SSD for the Linux /. No separate /boot partition.
  • Use the rest of the HDD for the Linux /home (a sketch of the resulting layout follows below).
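
To make this concrete, the resulting /etc/fstab could look roughly like the sketch below; the device names (/dev/sda for the SSD, /dev/sdb for the HDD) and partition numbers are only assumptions:

    # /etc/fstab sketch -- device names and partition numbers are assumptions
    /dev/sda2   swap    swap   defaults   0 0   # swap on the SSD
    /dev/sda3   /       ext4   defaults   0 1   # rest of the SSD for /
    /dev/sdb2   /home   ext4   defaults   0 2   # rest of the HDD for /home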

I found different “hints” in different places regarding Linux and SSDs.

One was to move /tmp and /var from the SSD to the HDD to reduce the number of writes to the SSD (be it as one partition for each, or as a single partition with bind mounts).
On the other hand, there are posts concluding that this should no longer be required on modern SSDs (like the Samsung 840 EVO series).
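
For reference, the single-partition-with-bind-mounts variant could look like this in /etc/fstab; the HDD partition (/dev/sdb1) and the mount point /mnt/hdd-sys are assumptions, and the tmp/ and var/ directories must already exist on that partition:

    # /etc/fstab -- one HDD partition, bind-mounted into /tmp and /var
    /dev/sdb1          /mnt/hdd-sys   ext4   defaults   0 2
    /mnt/hdd-sys/tmp   /tmp           none   bind       0 0
    /mnt/hdd-sys/var   /var           none   bind       0 0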
Any comments?

Another hint to improve overall performance is to use a partition on the SSD for /home, with larger directories symlinked out to a partition on the HDD. The benefit would be the higher speed for the many small (config) files in the user home directories.
On the other hand, this setup requires manually moving any directory that might become large …
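
The symlink-out step itself is simple; assuming the HDD partition is mounted at /mnt/hdd-home (an assumed mount point), it would be something like:

    # move a large directory off the SSD /home and symlink it back
    mv ~/Videos /mnt/hdd-home/Videos
    ln -s /mnt/hdd-home/Videos ~/Videos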

I was also thinking of the reverse of the above setup:
Use an HDD partition for /home, and shrink the / partition to create an additional data partition on the SSD.
Then move small, often-read directories (like ~/.config) to that data partition on the SSD and symlink them back.
That way I am not forced to think about the setup for each new directory in a user’s home directory, but I can still (manually) move out small directories that will benefit from the higher speed of the SSD. The downside is that this cannot be used to place the small config files sitting directly in the user’s home directory (e.g. .bashrc) on the SSD.
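
In that reverse setup the per-directory moves go in the other direction; the SSD mount point /mnt/ssd-data is again an assumption:

    # /home lives on the HDD; pull a small, often-read directory onto the SSD
    mkdir -p /mnt/ssd-data/$USER
    mv ~/.config /mnt/ssd-data/$USER/.config
    ln -s /mnt/ssd-data/$USER/.config ~/.config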
What do you think of that idea?
Is it worth the setup overhead, or will it not bring any significant performance improvement?

Thanks in advance,
Holger

I’ve been using /home on SSD for ages now, and only recently a 7-year-old SSD died, but not due to the maximum number of writes. What I have done in the past is keep /home on the SSD but move ~/Music, ~/Video, ~/Pictures etc. to the HDD to save space. To be honest: I’ve stopped worrying about read and write cycles on SSDs where it concerns home computers and laptops.

Whether you need a separate /boot depends on the file system you plan to use and on whether you use LVM containers. If you stick to ext4, then there is no need. Note that 13.2, which is due out in about a week, will default to the BTRFS file system instead of ext4, so be aware of that if you install 13.2. If you do use BTRFS, you need to allow a bit more space, because it has a feature called snapshots that eats up the partition making periodic backups of the files. So you need to allow about 50% more space than you think you will need, or turn snapshots off. It is on by default.

New SSDs seem to have pretty good lives, measured in hundreds of terabytes written. What kills them is small writes, like updating the access time on each file open… See here:

http://techreport.com/review/24841/introducing-the-ssd-endurance-experiment/5

But this test did large file transfers, not small pecking at the files. So follow the recommendation and at least use the noatime mount option.
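
For an ext4 partition that is just a mount option in /etc/fstab; the device name below is an assumption:

    # /etc/fstab -- stop access-time updates on every file read
    /dev/sda3   /   ext4   noatime   0 1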

On Sun 26 Oct 2014 04:16:02 PM CDT, gogalthorp wrote:

Whether you need a separate /boot depends on the file system you plan to
use and on whether you use LVM containers. If you stick to ext4, then
there is no need. Note that 13.2, which is due out in about a week, will
default to the BTRFS file system instead of ext4, so be aware of that if
you install 13.2. If you do use BTRFS, you need to allow a bit more
space, because it has a feature called snapshots that eats up the
partition making periodic backups of the files. So you need to allow
about 50% more space than you think you will need, or turn snapshots
off. It is on by default.

Hi
I refute that claim… :wink: It’s the snapper tool, not the file
system…

I have a 40 GB btrfs / with 13.1, using about 50% of the space
allocated to btrfs. You just need to ensure that you tweak the snapper
config (note this has nothing to do with the btrfs file system!) in
13.1, or, if you don’t want snapshots, turn snapper off…

I suspect (going by my SLE 12 installs) that timeline snapshots are off
by default in 13.2(?); those are the ones that chew up space.
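
A minimal sketch of such a tweak, assuming the default root config at /etc/snapper/configs/root:

    # /etc/snapper/configs/root -- limit what snapper keeps around
    TIMELINE_CREATE="no"    # no hourly timeline snapshots
    NUMBER_LIMIT="5"        # keep at most 5 number-based snapshots
    NUMBER_CLEANUP="yes"    # let the cleanup algorithm prune old ones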

As for SSDs, the OCZ in this system is rated at 20 GB of writes per
day; I run at about 6.5 GB a day, and the notebook with a Crucial SSD
at about 2.5 GB a day. Don’t even worry about it… I do run the
elevator at noop, but when mixing SSD and rotating disks there are
probably some benefits to tweaking the I/O scheduler.
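
One way to do that per device is a udev rule keyed on the rotational flag; the rule file name below is an assumption:

    # /etc/udev/rules.d/60-ssd-scheduler.rules
    # noop on non-rotating devices; spinning disks keep the default scheduler
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"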


Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.1 Kernel 3.11.10-21-desktop
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

I’ve read that btrfs is the new default file system. But my plan is to keep using ext4 for the new machine.

Well, snapper and BTRFS seem to be joined at the hip. Can I use it on ext4?

According to “Snapper, The ultimate Snapshot Tool for Linux” you can:

Works with btrfs, ext4 and thin-provisioned LVM volumes

But I cannot tell you more.
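
If the ext4 backend works as that page suggests, setting it up would presumably look something like this (untested on my side, so treat it as an assumption):

    # create a snapper config for an ext4 /home -- experimental per snapper.io
    snapper -c home create-config --fs-type ext4 /home
    snapper -c home create --description "before cleanup"
    snapper -c home list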

On Sun 26 Oct 2014 05:06:02 PM CDT, gogalthorp wrote:

Well, snapper and BTRFS seem to be joined at the hip. Can I use it on
ext4?

Hi
Yes, but it’s experimental…
http://snapper.io/faq.html


Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.1 Kernel 3.11.10-21-desktop
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

On 2014-10-26 16:56, holgerschlegel wrote:
>
> Hi,
>
> I’m currently planning my new desktop computer. It will have a 250 GiB
> SSD and a 1 TiB HDD.
> I want to set up a Windows 7 Pro / openSUSE 13.1 (or 13.2) dual boot
> (no UEFI boot). Both systems should be installed on the SSD.
>
> My plan for Windows is to create a 100-120 GiB C: partition on the SSD
> and a 300-400 GiB D: partition on the HDD.

I’m considering bcache instead. But I want to know if someone has used
it with openSUSE and written about it.

http://en.wikipedia.org/wiki/Bcache
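
For the record, the basic bcache-tools setup would look roughly like the sketch below; the device names are assumptions, and the make-bcache commands wipe those partitions:

    # /dev/sdb1 = HDD backing device, /dev/sda4 = SSD cache (assumptions!)
    make-bcache -B /dev/sdb1              # format the backing device
    make-bcache -C /dev/sda4              # format the cache device
    # attach the cache set (UUID comes from the make-bcache -C output)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    mkfs.ext4 /dev/bcache0                # then use /dev/bcache0 like a disk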


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

On 2014-10-26 18:06, wolfi323 wrote:
>
> gogalthorp;2671333 Wrote:
>> Well snapper and BTRFS seem to be wedded at the hip. Can I use it in
>> ext4??
> According to http://snapper.io/overview.html you can:

Wow. :open_mouth:


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

So, regarding this, is the conclusion just to keep /tmp and /var on the SSD?

Will there be a large performance benefit if parts of my home directory are placed on the SSD (versus placing them on the HDD)? For example, subdirectories like .config.

To retain control over what is stored on the SSD and what is stored on the HDD, I do not like the idea of placing the complete /home directory on the SSD and just symlinking out some larger directories…

Holger

Probably not humanly noticeable. The key to extending SSD life is to keep often-written data away from it. Reading data is not an issue; it is the write/erase cycle that limits the life.

Every time you write even one byte, a block of memory must be moved to a staging area and the whole block must be rewritten at a new location taken from a free memory area. At a later time the SSD will erase the block in the staging area and return it to the free pool. This cycle is the write/erase cycle in an SSD. Flash memory can do only a limited number of these cycles. When a block tests bad (i.e. too many write/erases), it is moved to a bad area and no longer used. To extend the life, extra memory is provided as a pool to replace bad blocks, and special algorithms do wear leveling to spread the cycles across as much of the drive as possible, thus increasing the overall life.

If you do nothing, the life of a drive may be only a few years; taking basic care, it can be 6-7; taking extraordinary care and adding a spinning-rust drive to hold frequently changed data, it can be much, much longer. Note that most tests I’ve seen seem to measure the moving of large amounts of data, but in real life it is lots and lots of little writes that happen.
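
A back-of-envelope estimate shows why; every figure here is an assumption (drive size, P/E cycle rating, write amplification, daily writes), not a spec:

    # rough endurance estimate -- all numbers are assumptions
    # 250 GiB drive * 3000 P/E cycles / write amplification of 3
    echo $(( 250 * 3000 / 3 )) GiB        # ~250,000 GiB of host writes
    # at an assumed 20 GiB written per day:
    echo $(( 250000 / 20 / 365 )) years   # ~34 years before wear-out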