Anyone else stuffed to the gills? Should there be a cron job? There is nothing in cron.weekly/daily/monthly.
I had to restore from my monthly December backup and then reinstall some 900 updates. There should usually be about 11 GB free on my 40 GB root partition. What is the proper procedure if root fills up?
First step, delete snapshots you do not need; see them via ‘snapper’:
> cd /
> sudo snapper list
The system should auto-clean some of those, but you also have a pretty
small disk overall, so I'd make that bigger, or else disable snapshots
entirely if you do not want them. Snapshots are meant to let you go back
to a previous point in time, but to do that they must hold on to files
that you have deleted, so they are more likely than you expect to fill a
disk that you think, or are told via 'df', is not yet full.
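For example, a cleanup might look like this (a sketch; the snapshot numbers are placeholders, substitute the ones 'snapper list' shows on your system):

```shell
# Show existing snapshots with their numbers and descriptions
sudo snapper list

# Delete a range of snapshots you no longer need
# (42-45 is a placeholder range; use your own numbers)
sudo snapper delete 42-45

# Btrfs frees the space asynchronously; this waits for the cleanup
sudo btrfs subvolume sync /
```

The freed space may take a little while to show up in 'df' even after the sync.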
–
Good luck.
If you find this post helpful and are logged into the web interface,
show your appreciation and click on the star below.
If you want to send me a private message, please let me know in the
forum as I do not use the web interface often.
As far as I know this is still true. A long time back I was in a similar position. As long as your /home partition is not too large, you could copy it to another drive or to the cloud, delete the partition, expand the root partition (personally I would consider going to 100 GB), then create a new /home partition (at this step I went with ext4, since it can be resized) and copy the files back.
This is more simply done if you have a second drive that you can copy /home to and then edit fstab to point to the copy until you are done.
The cron jobs which used to clean Btrfs partitions have been moved to systemd timers …
Please check the systemd Btrfs services: "systemctl list-unit-files | grep -i 'btrfs'"
Only the following Btrfs services should be marked as being "static": btrfs-balance.service, btrfs-defrag.service, btrfs-scrub.service, btrfs-trim.service.
The rest should be "enabled" …
Please be aware that, at least on this Leap 15.0 system, the Btrfs Balance and Scrub timers are set to "monthly" …
You can override the timers with "systemctl start «Btrfs service»" …
You may have to execute the following with the user “root” in systemd “Rescue” mode:
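For example, to inspect the timers and kick off a maintenance run by hand (a sketch; unit names as shipped by the btrfsmaintenance package, and the timer schedule may differ between releases):

```shell
# List the Btrfs maintenance units and their state
systemctl list-unit-files | grep -i 'btrfs'

# Show when the timers last ran and will next fire
systemctl list-timers | grep -i 'btrfs'

# Manually trigger one maintenance pass (needs root)
sudo systemctl start btrfs-balance.service
```

Starting the .service directly runs the job once without waiting for its timer.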
On 01/19/2019 05:36 AM, doscott wrote:
>
> fleamour;2891916 Wrote:
>> The root partition is 40GB default /home is on the same 500GB SSD but
>> XFS cannot be shrunk right?
>
> This is more simply done if you have a second drive that you can copy
> /home to and then edit fstab to point to the copy until you are done.
These days, unless you have actually filled that /home space with data, it
is likely you could fit everything there onto a big USB stick temporarily.
I was not notified of your post; I discovered as much through my own Googling. I reduced the default snapshot limits as suggested on these forums (sudo nano /etc/snapper/configs/root), then ran:
sudo sh /usr/share/btrfsmaintenance/btrfs-balance.sh
sudo sh /usr/share/btrfsmaintenance/btrfs-scrub.sh
sudo sh /usr/share/btrfsmaintenance/btrfs-trim.sh
This has made no difference as of yet.
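To see where the space is actually going, plain 'df' is not the whole story on Btrfs. A sketch of a more useful check (the btrfs commands need root; the second one is guarded so it fails quietly where the tools are absent):

```shell
# Generic view; on Btrfs, plain 'df' can mislead because snapshot
# space is not attributed the way you might expect
df -h /

# Btrfs-aware breakdown of data vs. metadata allocation
# (needs root; silently skipped if btrfs tools are missing)
sudo btrfs filesystem df / 2>/dev/null || true
```

If metadata is nearly full while data is not, a balance helps; if data is full, deleting snapshots is the fix.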
The recommended root size is now 50 GB vs my 43 GB, with 9.3 GB free (78.5% full). Basically, if / runs out of space, I cannot even roll back; I'm screwed and need to use my monthly Clonezilla image to restore!
sudo zypper in -f btrfsmaintenance
has worked in the past, but by keeping an eye on disk space I am seeing free space decrease and volume usage creep up with every zypper dup. I would resize /home as @doscott and @ab suggest, but do I use Dolphin on a live system? dd offline? Or some other method? Please elaborate…
Hi
Looks like you never ran the snapper maintenance to clean up (recover space) after your config changes to remove old snapshots… There are two different tools to run; check down in the cron directories for the snapper ones to run manually…
From a command shell
sudo cp -ra /home/your_home /destination/your_home
should do it.
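Alternatively, rsync preserves the same attributes and lets you re-run the copy later to pick up changes (a sketch; both paths are placeholders):

```shell
# -a: archive mode (permissions, times, owners, symlinks)
# -H: preserve hard links, -A: ACLs, -X: extended attributes
# Trailing slash on the source copies its contents, not the directory itself
sudo rsync -aHAX /home/your_home/ /destination/your_home/
```

Re-running the same command before the final switch-over copies only files that changed in the meantime.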
Have you used the YaST snapshots tool to delete old snapshots? In my experience, if deleting an old snapshot fails, the automatic timeline cleanup quits, which eventually causes space issues. Try deleting the oldest timeline snapshot and see whether that succeeds.
A rollback doesn’t require a lot of space as the snapshots are not full copies of the system.
If you execute "systemctl list-unit-files | grep -i 'snapper'" you'll notice 3 systemd "timer" services: "boot", "cleanup" and "timeline"; if they aren't enabled, enable them …
Leave the "static" systemd units as they are – they'll be automagically executed as needs be …
"systemctl start snapper-cleanup.service" will manually execute snapper's housekeeping …
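A sketch of checking, enabling, and running them (unit names as shipped with snapper):

```shell
# Check the snapper units and their state
systemctl list-unit-files | grep -i 'snapper'

# Enable and immediately start the three timers if they are disabled
sudo systemctl enable --now snapper-boot.timer snapper-cleanup.timer snapper-timeline.timer

# Run the cleanup once by hand instead of waiting for the timer
sudo systemctl start snapper-cleanup.service
```

Note the timers are enabled, while the matching .service units stay static and are run by the timers.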
Can I delete /home, repartition, and copy everything back across from a secondary HDD on a live LVM system? I know certain tasks, like resizing swap or the root and /home partitions, can be carried out without a separate GParted offline session. However, what I am suggesting sounds kamikaze and may well be the stupidest suggestion ever, eligible for a Darwin Award and a no-confidence FAIL?
I’ve never dealt with LVM, so wait for expert advice on doing this live. At some point you will be resizing the btrfs partition and I don’t think you will want to do that live.
I see the sequence being:
1 back up the entire /home
2 boot from a USB stick or CD with partitioning software on it
3 delete the /home partition
4 expand the btrfs partition
5 create a new partition for /home using ext4 or xfs
6 mount the new /home and the backup location
7 copy the backup to /home
8 unmount the items from step 6
9 get the UUID of the new /home partition
10 mount the btrfs partition
11 edit /etc/fstab on the btrfs partition (not the one of the USB/CD) and change the UUID for /home
12 unmount the btrfs partition
13 cross fingers and reboot
Please wait for others to comment on what I may have missed.
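Steps 9 through 12 above might look like this (a sketch; the device names and the /mnt mount point are placeholders for your own layout):

```shell
# 9: find the UUID of the new /home partition (sda3 is a placeholder)
sudo blkid /dev/sda3

# 10: mount the installed system's btrfs root (sda2 is a placeholder)
sudo mount /dev/sda2 /mnt

# 11: update the /home entry in the installed system's fstab,
# replacing the old UUID with the one blkid just printed
sudo nano /mnt/etc/fstab

# 12: unmount before rebooting
sudo umount /mnt
```

Editing /mnt/etc/fstab, not /etc/fstab, is the point: you want the fstab of the installed system, not the live medium's.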
If the filesystem is mounted, it can be used to expand
the size of the mounted filesystem, assuming the kernel and the file
system supports on-line resizing. (Modern Linux 2.6 kernels will
support on-line resize for file systems mounted using ext3 and ext4;
ext3 file systems will require the use of file systems with the
resize_inode feature enabled.)
A guide for LVM with xfs:
You should be able to resize a live btrfs file system without LVM though I never did this myself. Here’s a guide:
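Assuming the underlying partition has already been grown (e.g. with parted or YaST), the live Btrfs grow step itself is a one-liner (a sketch; the path is the mount point, not the device):

```shell
# Grow the mounted Btrfs filesystem to fill its enlarged partition
sudo btrfs filesystem resize max /
```

Growing can be done mounted; shrinking the partition underneath it is the operation you would not want to attempt live.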
I currently have a 220 GB root btrfs partition and have my /home and /var/lib/docker mounted on other partitions. At the moment I have 55% of the space used, containing 12 important snapshots and 4 unimportant snapshots; I don't use timeline snapshots.
This is my root.conf
ALLOW_USERS=""
ALLOW_GROUPS=""
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"
# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="4-20"
NUMBER_LIMIT_IMPORTANT="10-20"
# create hourly snapshots
TIMELINE_CREATE="no"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="no"
# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
With approx. 20% of the space available you might try
The non-important snapshots normally don't take much space. I've noticed lately that there are fewer updates per week, but larger ones. You can use the YaST Filesystem Snapshots utility to keep an eye on things for a while. I also use logwatch, and the daily report includes a section that looks like: