HDD Storage Size Discrepancy

I recently upgraded from a 1 TB HDD to a 10 TB HDD. I did an rsync to copy over /home.

% fdisk -l

Disk /dev/sda: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: WDC WD102FZBX-00
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD1003FZEX-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

du -h reports 845 GB used for /home on both drives, but df -h reports 845 GB used for the 1 TB drive and 1020 GB used for the 10 TB drive.

Why is there a 175 GB discrepancy?

tl;dr: du and df don’t count the same thing.

https://www.cyberciti.biz/tips/freebsd-why-command-df-and-du-reports-different-output.html

BTW, ignore the FreeBSD part; the reasons are the same on Linux.
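To see the two views side by side on your own system, you can compare df’s block-level accounting with du’s file-tree walk (a minimal sketch, assuming /home is its own mount point; the exact numbers will of course differ per machine):

```shell
#!/bin/sh
# df counts allocated filesystem blocks, including metadata, log/journal
# space, and blocks still held by deleted-but-open files.
df -h --output=size,used,avail,target /home

# du walks the file tree and sums per-file disk usage.
du -shx /home                    # -x: stay on this one filesystem

# Apparent (byte) sizes, ignoring how blocks are actually allocated:
du -shx --apparent-size /home
```

If the df number is much larger than the du number, the difference usually lives in filesystem metadata/overhead or in files that are deleted but still open (lsof can reveal the latter).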

And on top of that, you moved from one filesystem to another; files are not going to be packed exactly the same way. You also didn’t say whether they’re even the same filesystem type.


Without knowing both filesystem types, the filesystems’ parameters (how they were created), and the options used during the rsync invocation, all you can expect is wild, wild guesses.

And in any case, this question has absolutely nothing to do with Hardware.


Both filesystems are XFS, created using the Leap installers, with the 1 TB done a few years ago (don’t recall Leap version), and the 10 TB done with Leap 15.6 installer.

I did edit my initial post with the rsync flags, which gave an error on save, but when I checked immediately afterward, the edit was there. (In editing this message, it seems using “—” to make a line causes the editor to think the post is being edited in another tab. This may be the cause of the edit save error yesterday.)

Here are the rsync flags I used:

-vrlptgo

-v   verbose output
-r   recurse into directories
-l   copy symlinks as symlinks
-p   preserve permissions
-t   preserve times
-g   preserve group
-o   preserve owner

I did not use -H, which I later read is needed to preserve hard links (without it, each hard link is copied as a separate, independent file).

Please move this to the appropriate forum section.

I am not intimately familiar with XFS, but the Internet suggests that metadata overhead may differ greatly (by several orders of magnitude) depending on the features enabled during filesystem creation. It is quite possible that the newer Leap release enabled more (or different) features than the older one.

I am more surprised that du shows the same size in both cases, as these rsync options (no -S/--sparse) should cause hole expansion in sparse files. Maybe you do not have many (or large) files that are affected. I have several VM images that consume less than half of their nominal size (many gigabytes).
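A quick, self-contained way to see the sparse-file effect (using a throwaway file, not anything from /home):

```shell
#!/bin/sh
# A sparse file has a large apparent size but few allocated blocks.
# rsync without -S (--sparse) writes the holes out as real zeros, so the
# copy allocates the full apparent size on the destination disk.
f=$(mktemp)
truncate -s 1G "$f"            # 1 GiB apparent size, (almost) no blocks
du -h "$f"                     # allocated size: near zero
du -h --apparent-size "$f"     # apparent size: 1.0G
rm -f "$f"
```

If /home contains files like this, du on the source and du on a non-sparse-aware copy will disagree by roughly the total hole size.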

Yes, it will result in multiple copies, but for that you would have had to use hard links in the first place. As this is /home, I presume you know whether you used them or not.
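If you would rather verify than presume, files with more than one link can be listed directly (a sketch; point it at your own /home):

```shell
#!/bin/sh
# List regular files with more than one hard link, staying on one
# filesystem (-xdev); sorting by inode number groups the linked sets.
find /home -xdev -type f -links +1 -printf '%i %n %p\n' | sort -n
```

An empty result means no hard links were at risk and -H would have made no difference.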

Without knowing which Leap version I started with on the 1 TB drive, I guess that will remain an unknown.

I guess I don’t have many large files. No VMs.

Yeah, I doubt I’ve used hardlinks in /home.

Thanks for answering!