run fsck on root

I found two methods from an internet search. One is to use a recovery-mode boot option, but I don’t have that option in my grub2-efi menu. The other is “touch /forcefsck”. Easy enough, except fsck doesn’t run when I boot, although the forcefsck file does get deleted.
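As an aside: newer systemd prefers a kernel command-line parameter over the /forcefsck flag file. A sketch of forcing the check via GRUB2 on openSUSE; the GRUB_CMDLINE_LINUX_DEFAULT edit is an assumption about the /etc/default/grub layout, and the parameter should be removed again after the check:

```shell
# One-off: at the GRUB menu, press 'e' and append to the 'linux' line:
#     fsck.mode=force
# Persistent variant (remember to revert it after the check):
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&fsck.mode=force /' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```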

The reason I want to run fsck is that Clonezilla produces an error when it tries to clone the root partition. This wasn’t always the case; I have cloned this hard drive on the same hardware without issue before. The system is a Lenovo T495, which needs a recent kernel revision, so I am reluctant to use the fsck built into Clonezilla. The error message is “bitmap free count error, partclone get free:19900642 but extfs get 19900750”. This looks like the kind of thing fsck could clean up.

Linux 5.4.0-rc1-1.g6fce476-default #1 SMP Wed Oct 2 05:04:04 UTC 2019 (6fce476) x86_64 x86_64 x86_64 GNU/Linux

I suspect that, with this hardware being so new, I will ultimately have to run Tumbleweed, but I want to make a hard-drive image first.

The sure way is to boot from live media and run “fsck” from there.

It is just possible, however, that kernel “5.4.0-rc1-1.g6fce476-default” is causing this problem.
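If it helps to rehearse that first, here is a minimal sketch of a forced e2fsck on a throwaway loopback ext4 image (file name and size are arbitrary); the same `e2fsck -f` invocation is what you would run from live media against the real, unmounted root partition:

```shell
# Build a small throwaway ext4 filesystem in a regular file
# (no root needed; -F lets mkfs work on a non-block-device file).
dd if=/dev/zero of=/tmp/fscktest.img bs=1M count=16 status=none
mkfs.ext4 -q -F /tmp/fscktest.img

# Force a full check even though the filesystem is marked clean,
# answering yes to any fixes -- this mirrors booting live media and
# running, e.g.:  e2fsck -f /dev/nvme0n1p6  (partition unmounted!)
e2fsck -f -y /tmp/fscktest.img
```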

I just tried the »touch« method with my openSUSE Leap 15.0 installation on my standard ext4 root partition:

sudo touch /forcefsck

After reboot, the /forcefsck file has vanished, and my Journal contains the following:


rig:~ ▶ journalctl -b --no-hostname --output=short-precise | grep fsck
Nov 05 14:52:19.037126 systemd-fsck[321]: RIG: clean, 617671/15269888 files, 36373170/61049344 blocks
rig:~ ▶ _

According to systemd-analyze, the check took a whopping 11 milliseconds. :slight_smile:

rig:~ ▶ systemd-analyze blame
           106ms systemd-journal-flush.service
           103ms display-manager.service
            94ms upower.service
            86ms udisks2.service
            78ms systemd-udevd.service
            59ms polkit.service
            59ms systemd-logind.service
            46ms systemd-user-sessions.service
            44ms klog.service
            30ms systemd-udev-trigger.service
            24ms systemd-update-utmp.service
            23ms systemd-tmpfiles-setup.service
            21ms user@1000.service
            19ms systemd-networkd.service
            17ms systemd-remount-fs.service
            16ms dev-hugepages.mount
            16ms systemd-tmpfiles-setup-dev.service
            14ms systemd-modules-load.service
            13ms dev-mqueue.mount
            13ms sys-kernel-debug.mount
            11ms systemd-journald.service
            **11ms systemd-fsck-root.service**
            10ms systemd-sysctl.service
             6ms systemd-random-seed.service
             5ms kmod-static-nodes.service
             4ms systemd-update-utmp-runlevel.service
             3ms dracut-shutdown.service
             2ms rtkit-daemon.service
             1ms systemd-vconsole-setup.service
rig:~ ▶ _

Cheers!

I don’t see that the root partition’s filesystem has been described.

Remember:
If you’re running Btrfs, there are other things you should do before running an fsck, and if you must run one, use the Btrfs-specific check, not the traditional fsck command that applies to other file systems.

https://btrfs.wiki.kernel.org/index.php/Btrfsck
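Roughly, the Btrfs equivalents look like this (the device name is a placeholder; `btrfs check` runs read-only by default and belongs on an unmounted filesystem, with repairs reserved for expert advice):

```shell
# Online integrity pass on a mounted Btrfs filesystem:
sudo btrfs scrub start -B /      # -B: stay in the foreground, print a summary

# Offline check from live media (read-only by default):
sudo btrfs check /dev/sdXn       # placeholder device; do NOT add --repair casually
```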

TSU

Thanks for the tip on systemd-analyze. I did a fresh forcefsck run:


systemd-analyze blame | grep fsck
          1.493s systemd-fsck@dev-disk-by\x2duuid-092af4f3\x2d2a54\x2d4de4\x2d851c\x2d706a6e635897.service
           163ms systemd-fsck-root.service

I could swear I saw the first fsck run during shutdown (really a reboot). Doing some grepping, I found these fsck messages. Note the timestamps: only the last two entries reflect the current state of the notebook.


cat messages | grep fsck
2019-11-04T00:02:05.081972-08:00 linux-yxjo systemd-fsck[744]: /dev/nvme0n1p6: clean, 68876/50331648 files, 12726830/201326592 blocks
2019-11-04T00:12:58.247216-08:00 linux-yxjo systemd-fsck[726]: Please pass 'fsck.mode=force' on the kernel command line rather than creating /forcefsck on the root file system.
2019-11-04T00:12:58.247290-08:00 linux-yxjo systemd-fsck[726]: /dev/nvme0n1p6: 68901/50331648 files (3.4% non-contiguous), 12726711/201326592 blocks
2019-11-04T00:17:26.151305-08:00 linux-yxjo systemd-fsck[751]: Please pass 'fsck.mode=force' on the kernel command line rather than creating /forcefsck on the root file system.
2019-11-04T00:17:26.151444-08:00 linux-yxjo systemd-fsck[751]: /dev/nvme0n1p6: 68911/50331648 files (3.4% non-contiguous), 12726710/201326592 blocks
2019-11-04T00:26:39.419289-08:00 linux-yxjo kernel: [  560.864896] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
2019-11-04T00:32:49.799285-08:00 linux-yxjo kernel: [  931.242398] FAT-fs (sdb1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
2019-11-04T00:37:32.787298-08:00 linux-yxjo kernel: [ 1214.233419] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
2019-11-04T00:47:19.285263-08:00 linux-yxjo systemd-fsck[791]: /dev/nvme0n1p6: clean, 69009/50331648 files, 12726919/201326592 blocks
2019-11-04T15:39:50.198001-08:00 linux-yxjo systemd-fsck[853]: /dev/nvme0n1p6: clean, 69009/50331648 files, 12726917/201326592 blocks
2019-11-04T18:38:08.857913-08:00 linux-yxjo systemd-fsck[724]: /dev/nvme0n1p6: clean, 69248/50331648 files, 12709174/201326592 blocks
2019-11-05T00:50:50.106216-08:00 linux-yxjo systemd-fsck[747]: /dev/nvme0n1p6: clean, 69086/50331648 files, 12716873/201326592 blocks
2019-11-05T02:12:44.767822-08:00 linux-yxjo systemd-fsck[769]: /dev/nvme0n1p6: clean, 68010/50331648 files, 12718325/201326592 blocks
2019-11-05T18:19:11.925845-08:00 linux-yxjo systemd-fsck[666]: /dev/nvme0n1p6: clean, 67962/50331648 files, 12717916/201326592 blocks
2019-11-05T18:50:55.833732-08:00 linux-yxjo systemd-fsck[821]: Please pass 'fsck.mode=force' on the kernel command line rather than creating /forcefsck on the root file system.
2019-11-05T18:50:55.833843-08:00 linux-yxjo systemd-fsck[821]: /dev/nvme0n1p6: 67759/50331648 files (3.3% non-contiguous), 12717820/201326592 blocks
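Side note: rather than grepping /var/log/messages, the same lines can be pulled from the journal per boot; a sketch (assuming persistent journalling is enabled so older boots are retained):

```shell
journalctl --list-boots                          # enumerate retained boots
journalctl -b -1 --no-hostname | grep -i fsck    # fsck lines from the previous boot
```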

I had no external drives plugged in this time, so those sda and sdb messages must be from thumb drives that were attached earlier. I will try Clonezilla again and see whether the fsck cleaned up the problem.

Not mentioned previously, but the file systems are ext4.

For completeness, here is the start of the bootlog:


------------ Tue Nov 05 18:50:55 PST 2019 ------------
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started Forward Password Requests to Plymouth Directory Watch.
[  OK  ] Reached target Paths.
[  OK  ] Found device /dev/disk/by-uuid/611b9e33-764b-4935-b0d6-1e2da8df74eb.
[  OK  ] Found device /dev/disk/by-uuid/ebaab11a-7a1a-43b6-b860-c4409ee032fd.
[  OK  ] Reached target Initrd Root Device.
         Starting Resume from hibernation using device /dev/disk/by-uuid/611b9e33-764b-4935-b0d6-1e2da8df74eb...
[  OK  ] Started Resume from hibernation using device /dev/disk/by-uuid/611b9e33-764b-4935-b0d6-1e2da8df74eb.
[  OK  ] Reached target Local File Systems (Pre).
         Starting File System Check on /dev/disk/by-uuid/ebaab11a-7a1a-43b6-b860-c4409ee032fd...
[  OK  ] Reached target Local File Systems.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Create Volatile Files and Directories.
[  OK  ] Reached target System Initialization.
[  OK  ] Reached target Basic System.
[  OK  ] Started File System Check on /dev/disk/by-uuid/ebaab11a-7a1a-43b6-b860-c4409ee032fd.
         Mounting /sysroot...
[  OK  ] Mounted /sysroot.
[  OK  ] Reached target Initrd Root File System.
         Starting Reload Configuration from the Real Root...
[  OK  ] Started Reload Configuration from the Real Root.
[  OK  ] Reached target Initrd File Systems.
[  OK  ] Reached target Initrd Default Target.

Clonezilla worked. forcefsck looks like the easiest way to fsck root.

Conclusion: well, don’t jump to conclusions. Just because I didn’t see fsck run doesn’t mean it didn’t run. Pilot error, lesson learned. Thanks, all, for the help.