For ages now I have been hibernating my desktop system rather than rebooting each day, but since I updated it at the weekend it now fails to restore from hibernation and reboots instead. Obviously I rebooted after the update. I have made no changes to the system or grub2 settings myself, only the last very large set of updates done on the 28th of May. I have had a look at the journal but nothing jumps out at me as to why the restore from hibernation fails, or perhaps simply does not happen, although I do see an initial message about loading the RAM disk before it then starts to show the boot-up messages. I have checked both the EFI setup and the grub2 settings but nothing seems to have changed.
Has anyone any suggestions as to how to find out why it is not working as it should please?
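One thing worth checking first (just a sketch, not from the original posts): whether the `resume=` kernel parameter still points at the swap partition that is actually in use. A mismatch after a large update is one common reason resume silently falls back to a normal boot.

```shell
# Helper: pull the resume= value out of a kernel command line string
get_resume_param() {
    printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^resume=//p'
}

echo "resume parameter: $(get_resume_param "$(cat /proc/cmdline)")"

# Swap devices actually in use; compare the device/UUID with resume= above
swapon --show 2>/dev/null || cat /proc/swaps
```

If `resume=` names a device or UUID that no longer matches the active swap partition, the kernel has no image to load and simply boots normally.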
Just did a test swapping the EFI boot options. I have two options set up, one for shim.efi and one for grubx64.efi; the latter allows me to hibernate and restore, but if I use shim it seems to hibernate but not restore. However, prior to these recent updates I was using the same EFI boot option number, although the updates could have changed its contents; as I have no record of the options from before the update, I guess this is a possibility. Does anyone know if this has changed in any way?
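To keep a record of the boot entries for next time (a sketch; run as root on an EFI system, and note the entry contents will differ per machine):

```shell
# List the UEFI boot entries so the shim.efi and grubx64.efi options can be
# compared before and after an update
if command -v efibootmgr >/dev/null 2>&1; then
    efibootmgr -v 2>/dev/null || echo "no EFI variables visible (legacy boot?)"
else
    echo "efibootmgr not installed"
fi
```

Saving that output somewhere before applying a large update makes it easy to see whether the update rewrote an entry's loader path.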
OK, spoke too soon. Changing the EFI option to use grubx64 instead of shim does not work, at least if the machine is left powered off for a long time. The initial test was a restart followed by a hibernate and an immediate power-up, and it resumed OK; however, today it had been off some 7 hours and it rebooted rather than resuming from hibernation. This is very annoying, as prior to the last set of updates last weekend it worked perfectly for absolutely months.
So has anyone any ideas as to how to debug this or ideas as to why?
I think I now understand why I was seeing this. In the YaST Boot Loader module I had “Enable secure boot support” ticked. Now what puzzled me is that all 3 laptops, running 42.1 (on one) and 42.2 (on two), have that flag turned on and they hibernate and restore with no issues at all. I’m pretty sure it was always set on my TW system as well. So I suspect some update actually made a change which now makes it work as it probably should have all along, and therefore now prevents the system from restoring from hibernation and forces a reboot. As I said, all 3 laptops using 42.n still work OK with that flag set on.
Anyway on TW with it turned off I can hibernate and restore OK.
Well, I spoke too soon. Sometimes it still fails to restore from hibernation and just reboots instead. Now what I’d like to know is how I can determine, from any logging, why it failed to come out of hibernation. As I said, sometimes it resumes and other times it reboots. Should there be any messages stored in any logs anywhere as to why this happens?
I guess it will depend on the cause(s). Have you investigated the systemd journal for errors? (Some filtering may be needed to isolate the messages of interest.)
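As a starting point for that filtering (my own sketch, not a definitive recipe), the kernel's power-management messages from the boot that failed to resume are usually the most telling:

```shell
# Keep only power-management / hibernation lines from journal text on stdin
pm_lines() { grep -E 'PM:|hibernat|resume' ; }

# Previous boot's kernel messages, filtered; after a failed resume, the
# previous boot (-1) is the one that hibernated
journalctl -k -b -1 --no-pager 2>/dev/null | pm_lines || true
```

Adding `-p warning` to the `journalctl` invocation narrows things further if the output is still noisy.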
Not sure if this is describing the same issue
The best course of action may be to submit a bug report
Of course, this morning it resumed OK from last night’s hibernation. Having looked at the journal, I think I’ll wait now until it fails again (as is likely) so I have a definite time frame to look at.
It might be useful to check for resume-related messages
journalctl -b | grep resume
or from the previous boot perhaps (if it failed to resume correctly)
journalctl -b -1 | grep resume
You could also test whether hibernating works consistently using
echo disk > /sys/power/state
Some useful (kernel-based) hibernation debugging steps are described here
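Roughly, those kernel debugging steps revolve around the `pm_test` facility, which exercises parts of the hibernate path without a full power-off (a sketch; needs root, and the file only exists if the kernel was built with CONFIG_PM_DEBUG):

```shell
# Current pm_test mode; the bracketed entry is the active one
cat /sys/power/pm_test 2>/dev/null || echo "pm_test not available"
# echo devices > /sys/power/pm_test   # exercise device suspend/resume only
# echo disk    > /sys/power/state     # then attempt a test hibernate
# echo none    > /sys/power/pm_test   # restore normal hibernation afterwards
```

Working through the modes one at a time can narrow an intermittent failure down to a particular stage (freezer, devices, platform, and so on).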
Thanks for the information, it looks very useful. I’ll see what happens.
Well last night I hibernated this system and this morning it failed to resume.
Jun 15 22:06:32 Tumbleweed.crowhill systemd-sleep: INFO: running /usr/lib/systemd/system-sleep/grub2.sleep for hibernate
Jun 16 07:43:17 Tumbleweed kernel: PM: Checking hibernation image partition /dev/sdb3
Jun 16 07:43:17 Tumbleweed kernel: PM: Looking for hibernation image.
Jun 16 07:43:17 Tumbleweed kernel: PM: Loading hibernation image.
Jun 16 07:43:17 Tumbleweed kernel: PM: Failed to load hibernation image, recovering.
Jun 16 07:43:18 Tumbleweed systemd: Starting Resume from hibernation using device /dev/sdb3...
Jun 16 07:43:18 Tumbleweed kernel: PM: Looking for hibernation image.
Jun 16 07:43:18 Tumbleweed systemd-hibernate-resume: Could not resume from '/dev/sdb3' (8:19).
Jun 16 07:43:18 Tumbleweed systemd: Started Resume from hibernation using device /dev/sdb3.
As I have said, the previous 3 or so resumes worked but this one failed, so this is intermittent.
The intermittent nature of this issue is concerning.
For a more complete picture, please show us the following details…
FWIW, I note something similar reported here (not sure if it’s related though)…
While researching online I stumbled across this archlinux thread. An interesting discussion, but it may or may not be relevant to your situation.
As requested (and I must point out following a successful resume from hibernate):-
total used free shared buff/cache available
Mem: 8167748 1711552 4152500 41580 2303696 6108708
Swap: 10482684 0 10482684
/dev/sda1: UUID="F619-0DF5" TYPE="vfat" PARTUUID="e26d7a71-7d17-4d76-8cf5-66d3d57bce5b"
/dev/sda2: UUID="3319c45e-3c31-4c59-ad85-481805a0328f" TYPE="ext4" PARTUUID="dd81d16e-a335-43fd-bb8b-0ea22371163a"
/dev/sda3: UUID="e0278fe2-a12a-4fce-b9b7-d83a8d02193a" TYPE="swap" PARTUUID="f2bfe9dc-8284-48a7-8515-77f3aa1297fd"
/dev/sda4: UUID="aceabd8e-87fb-4e7c-8be2-ed31a684dbc8" TYPE="ext4" PARTUUID="41bcc635-f750-4ecb-9971-3c909b3b5506"
/dev/sdb1: SEC_TYPE="msdos" UUID="2527-1397" TYPE="vfat" PARTLABEL="primary" PARTUUID="2eca2750-fed6-4e3f-887f-27f183020fda"
/dev/sdb2: UUID="17e8a259-e43e-46c3-89ba-7c97536998ee" UUID_SUB="d497bd8e-379f-48b8-80af-a577dc4c6305" TYPE="btrfs" PARTLABEL="primary" PARTUUID="01e10395-f0e9-4fbf-ae27-7e9afa3d230a"
/dev/sdb3: UUID="33bc4e7f-d872-4d01-8179-2effa922e8ce" TYPE="swap" PARTLABEL="primary" PARTUUID="4ce923f6-746f-4a06-9851-92964067d659"
/dev/sdb4: UUID="624a1eab-8690-4830-ae83-40ed57451586" TYPE="ext4" PARTLABEL="primary" PARTUUID="b6153db7-b82a-46d7-94a3-e34f50da3c08"
/dev/sdb5: LABEL="Work" UUID="5ea8f98f-8568-45cd-ab98-ed436efa2b4a" TYPE="ext4" PARTUUID="9cb344e3-0302-4aa5-87f9-fc8307431025"
/dev/sdc1: UUID="1C485D82485D5C18" TYPE="ntfs" PARTUUID="cf70cf70-01"
/dev/sdd1: LABEL="Graphics" UUID="7961FAA524B47015" TYPE="ntfs" PARTUUID="6da16da1-01"
/dev/sde1: UUID="7b749ea6-dd07-4984-aa6b-325b5761011e" TYPE="ext3" PARTUUID="1f1e1f1d-01"
/dev/sde5: UUID="5d31f76b-bfd9-4f11-a88d-bbb723df8f1a" TYPE="ext3" PARTUUID="1f1e1f1d-05"
/dev/sde6: UUID="6c21a856-6303-46e2-ae8b-770c97ad0e24" TYPE="ext3" PARTUUID="1f1e1f1d-06"
/dev/sdf1: LABEL="Kdenlive1" UUID="6d2611f7-c1bb-49ba-8a1d-084f26b2b1cb" TYPE="ext4" PARTUUID="d4c1d4c1-01"
/dev/sdf2: UUID="e3954a91-421b-436a-ace5-f7e1c374625f" TYPE="ext4" PARTUUID="d4c1d4c1-02"
/dev/sdf3: UUID="cc20ba60-b8a7-4f25-bdf3-bd2d546f83ea" TYPE="ext4" PARTUUID="d4c1d4c1-03"
NAME TYPE SIZE USED PRIO
/dev/sdb3 partition 10G 0B -1
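For anyone wanting to gather the same details on their own machine, my assumption is that the three listings above came from commands along these lines (`blkid` normally needs root to see everything):

```shell
free                          # memory and swap totals, in KiB
blkid 2>/dev/null || true     # partition UUIDs, labels and filesystem types
swapon --show 2>/dev/null || cat /proc/swaps   # active swap devices with size, usage, priority
```

The interesting cross-check is that the swap partition `swapon` reports matches the one the resume messages name (`/dev/sdb3` here).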
I’ve looked at those other reports and I’m not sure they apply. This system normally only boots TW; it does have a Leap 42.2 partition as well, but that is not normally used and certainly was not booted between the hibernate and the resume failure. I don’t see any of the quoted error messages either.
The computer UEFI/BIOS firmware has to ensure that the physical memory layout is kept consistent (size and location) across the S4 hibernation transition. From reading some accounts, it’s clear that this isn’t always the case for some reason, leading to the hibernation image being discarded and the system rebooting instead. Ultimately, a bug report will need to be submitted to help progress this, I think.
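One way to check that theory (a sketch of my own, not a definitive test): the firmware-provided memory map is logged as e820 lines at every boot, so the map from the boot that hibernated can be compared with the one from the boot that failed to resume.

```shell
# Firmware memory-map (e820) lines from a kernel log on stdin
e820_lines() { grep -i 'e820' ; }

# Compare the map from the previous boot (-1) with the current one; a changed
# layout would invalidate the hibernation image
journalctl -k -b -1 --no-pager 2>/dev/null | e820_lines | head || true
journalctl -k -b --no-pager 2>/dev/null | e820_lines | head || true
```

If the two maps differ between a failing and a working boot, that would point squarely at the firmware.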
Do I have enough so far to open a bug or is there anything else I should obtain prior to doing this?
I think so. Your UEFI/BIOS details may also be relevant here (assuming hardware-specific issue), so dmidecode output could be useful…
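Something like the following would capture the firmware details for the bug report (needs root; `-t bios -t system` are standard dmidecode type keywords):

```shell
# BIOS vendor/version/date and system make/model for the bug report
if command -v dmidecode >/dev/null 2>&1; then
    dmidecode -t bios -t system 2>/dev/null || echo "dmidecode needs root"
else
    echo "dmidecode not installed"
fi
```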
Be prepared for requests for further information or tests. Post the link to the bug report here so that others can easily find it.
I have not yet raised a bug, the reason being that since I enabled the IOMMU in the BIOS it has resumed OK each time. I don’t know how the IOMMU got disabled, as I thought it was turned on… Now I have no idea whether or not the IOMMU is the reason, but I think I’ll wait and see if it starts happening again, maybe less frequently, before raising a bug. Right now I would be unable to test anything or gather diagnostics on failure.
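For the record, a quick way to see how the kernel views the IOMMU (a sketch; the exact messages vary between Intel's DMAR and AMD's AMD-Vi implementations):

```shell
# Helper: pull any iommu-related flags out of a kernel command line string
iommu_flags() { printf '%s\n' "$1" | grep -oE '[^ ]*iommu[^ ]*' || true; }

iommu_flags "$(cat /proc/cmdline)"

# What the kernel itself reported about the IOMMU at boot
journalctl -k -b --no-pager 2>/dev/null | grep -iE 'iommu|DMAR|AMD-Vi' | head || true
```

If the firmware setting really is the variable, those boot messages should differ between the working and failing configurations, which would be useful evidence for the eventual bug report.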