System partition space gone, but where?

After upgrading to Leap 15 I noticed that my system partition was filling up. Today I had many warnings about low space. Space allocated is 43 GB, the maximum. Reported used space is 9.3 GB. Have I somehow lost 30 GB? So I pruned the /tmp directory and removed a number of largish files (MB, not GB), but made little progress. I also removed snapshots. I booted with a different system and ran btrfs check --repair on the partition: all clean. A file count shows that some directories are unreadable; not sure what this means. Any suggestions where this 30 GB can be accounted for? I’m stumped.

Please show some facts, like

df -h

> df -h /dev/sdb3
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb3        41G   37G  3.2G  93% /run/media/colin/e4407aee-9d33-4bad-a972-7…
I used the disk space analyzer on the ‘/’ root partition and it reports 9.3 GB used.

Please, please: use the CODE tags around copied/pasted computer text. It is the # button in the toolbar of the post editor. As posted, it is hard to read and understand.

How did you upgrade? If live then maybe snapper is using the space.

Retrying…

> df -h /dev/sdb3 
Filesystem      Size  Used Avail Use% Mounted on 
/dev/sdb3        41G   37G  3.2G  93% /run/media/colin/e4407aee-9d33-4bad-a972-7...  

Upgrade was with a DVD using the upgrade option. All I could see were the options of completely wiping the disk with Installation or letting upgrade just move me to a set of different repositories. My usual choice of wiping the system partition clean and leaving data intact did not appear to be available.
Perhaps I am mistaken.

How can I tell if snapper is using space?

There is some misunderstanding here. I asked for

df -h

We now have some information on a file system that is mounted for user colin as a dynamically added device. So please provide the things people ask for, or explain why you will do something different (or not at all). Just doing something different without any explanation is a bit frustrating and will not encourage people to help you.

And when you choose Upgrade from the main DVD installation menu, it will not later offer to re-create the / file system, because that would require a complete installation, not an upgrade.

The reason for the dynamic reference is that, because the space on my original drive was running critically low, I purchased a new disk drive and installed Leap 15, and now have my system booting from a clean drive with no worries about the system running out of space to work with. The original drive is now secondary, allowing me to run an integrity check on an unmounted partition. As I reported, the check runs clean. I don’t think data on my current boot drive would be relevant to why I am getting different reports from the system partition on the original drive. I hope this is a satisfactory explanation.

The complete installation option on the old drive insisted on formatting the entire drive, which was exactly not what was required since it, apparently, wanted to format my data partition. The sweet spot which would leave data intact but format the system partition did not seem to be available. From my reading this could be because the whole focus is on upgrade building on a previous snapshot which would be destroyed on formatting.

It is. But please take into account that we cannot look over your shoulder, thus we depend completely on what you show and explain. And also do not forget that we have to deal with all levels of Linux knowledge and experience, without knowing what those levels are. Thus we are always very suspicious about what is told and even about what is shown.

I did not use the ISO to upgrade but the online method (changing the repos and zypper dup), thus I cannot give first-hand information about using the 15.0 DVD. But from the forums here I understand that there is a new partitioner there, and some problems have been posted about not being able to let things go as wanted. Thus you might have run into such a case.

Back to what you showed.
/dev/sdb3 has a file system that is 93% full. When you explain that it is a btrfs file system that you normally use mounted on /, then that is too much, and the absolute figure of 37 GB used is also too much. The default 40 GB for a btrfs root (with separate /home and other data partitions) should be OK.

Now about the suggestion that there are too many snapshots: note that snapshots do occupy space that df reports as used, but they are invisible to du, which only walks the mounted file tree. So besides checking the snapshots, you should also search for files that should not be there or that have grown out of size (log files?).

This is not trivial. One mostly goes to the root of such a file system (with cd) and then uses

du -sh *

When one of the directories there shows suspiciously high usage, cd into it and repeat the du -sh *. And so on. Do not hesitate to ask here for confirmation of what you conclude before you remove anything. And do not forget that removing alone is not enough: one has to know why something has run out of hand, to avoid the problem coming back in the future.
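To illustrate the drill-down, here is a throwaway demo (the directory names and sizes are invented for illustration) that builds a small tree and shows how du -sh * points at the biggest branch:

```shell
#!/bin/sh
# Demo of the iterative "du -sh *" hunt, on an invented throwaway tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/var/log" "$tmp/etc"
dd if=/dev/zero of="$tmp/var/log/huge.log" bs=1024 count=2048 2>/dev/null
dd if=/dev/zero of="$tmp/etc/small.conf" bs=1024 count=4 2>/dev/null

cd "$tmp"
du -sh -- * | sort -rh   # "var" floats to the top
cd var
du -sh -- * | sort -rh   # descend and repeat: "log" is the culprit
cd /
rm -rf "$tmp"
```

Piping through sort -rh (GNU sort’s human-numeric sort) saves eyeballing when a directory has many entries.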

Thanks. Here is the output from du, which is the same as the output from the Gnome disk usage utility:

# du -sh
9.6G    .

So the partition is showing almost full from one angle, and only 9 GB of 40 from the other. I guess I am looking for the can opener that will show where these invisible files are and what they are up to. It could be snapshots:

# du -sh .snapshots/
0    .snapshots/

I know there are snapshots there, so space used of 0 is not helpful, the snapshots were quite visible using snapper when I was booting from that drive.
From more reading it seems that btrfs “reserves” blocks but not in file format so they can’t be reported as files.
Now that I have a clean clear boot drive as backup I think the solution is to boot from the old drive and use snapper to delete many more snapshots.
If there were a way to manage snapshots on a non-boot partition from my booted drive that would be helpful.
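For what it’s worth, once the old partition is mounted, snapper is not strictly needed: snapper snapshots are plain btrfs subvolumes at .snapshots/N/snapshot under the root and can be removed with btrfs subvolume delete. A hedged sketch (the mount point and snapshot numbers here are assumptions; it is written as a dry run that only prints the commands):

```shell
#!/bin/sh
# Dry run: print the delete commands for old snapper snapshots on a
# mounted secondary btrfs partition. MNT and the numbers are assumptions.
MNT=/run/media/colin/olddrive
for n in 120 121 122; do
    # snapper stores each snapshot as a subvolume under .snapshots/<N>/
    echo btrfs subvolume delete "$MNT/.snapshots/$n/snapshot"
done
```

Dropping the echo would execute the deletions for real; doing that only after double-checking the list with btrfs subvolume list "$MNT" seems prudent.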

Nope, that doesn’t work properly with btrfs. On btrfs


btrfs filesystem df -h <mountpoint>

should be used.

EDIT: same goes for the ‘du’ command, see

man btrfs-filesystem
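Concretely, the btrfs-aware counterparts look like this (the mount point is an assumption; this sketch only prints the commands rather than running them):

```shell
#!/bin/sh
# Cheat-sheet of btrfs-aware space commands; printed here, not executed.
# The mount point is an assumption for illustration.
MNT=/run/media/colin/olddrive
cat <<EOF
btrfs filesystem df -h $MNT      # usage per chunk type (Data/Metadata/System)
btrfs filesystem du -s $MNT      # file usage, snapshot-aware (btrfs-progs >= 4.3)
btrfs filesystem usage $MNT      # overall allocated vs. free overview
EOF
```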

Again, why do you do something different from what I suggested, without any explanation of why you think your version is better???

I said: cd to the directory. When /dev/sdb3 is still mounted as in your earlier post that would be

cd /run/media/colin/e4407aee-9d33-4bad-a972-7...


I see nothing of the kind!

Then I suggested:

du -sh *

and that is not what you did.

I do not know what to say without the chance that I breach the T&C of these forums. :frowning: