btrfs error read only

opensuse 13.2 gnome

Appeared to be working fine until I ran updates, then restarted.
Notes below typed on my mobile :-0

df -h shows the OS on sda3 with Size=41G, Used=17G, Available=24G, and /home with Size=424G, Used=91G, Available=333G.

Previous failures were believed due to lack of free space. IF df is accurate, that suggests this time it is not for lack of space??
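(Aside: plain df can be misleading on btrfs, because snapshots and metadata are accounted differently. The btrfs-specific commands give a truer picture; a generic sketch, run as root on the mounted filesystem:)

```shell
# Per-profile allocation (data / metadata / system) on the mounted filesystem
btrfs filesystem df /

# Devices, and bytes allocated vs total, per filesystem
btrfs filesystem show
```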

gdisk /dev/sda3
Shows the partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

with partition 3 attribute flags = 4 and partition name = '' (no name)

Some messages found, possibly related or not:
dir -altr shows the last updated log is the rkhunter log at Jun 27 08:03, which appears normal.

Errors in boot.log (up to Jun 27 07:32); the stars are in red/pink:

OK ] Started Console System Startup Logging
***] <=red pink red
*** ] (1 of <=red pink red
** ] (1 of 3) A start job is run <=pink red
Starting Locale Service…
OK ] Started SuSEfirewall2 phase 1.

Otherwise all OK

/var/log/snapper.log (up to Jun 27 04:28)
shows, a number of times:
ERR libsnapper … loading grub-snapshot.cfg failed

First, welcome to the openSUSE Technical Help Forums.
If you skim other existing Forum threads, you’ll notice that whenever you post a command and its results, or any data (e.g. a log file snippet), you should enclose it within CODE tags, which are created easily by clicking on the hash (#) button of the Forum post editor.

So, in your situation you should know that you can roll back your system to previous BTRFS snapshots using snapper. Snapshots are created by default every time you boot your system and again when your system is shut down. Snapshots are also created by default every time you run zypper, if you used it to update your system (I don’t know today whether a snapshot is also created before and after updates installed by apper).

The following assumes you can boot either normally or to emergency mode using the Grub menu option.

Your first step should be to list your stored snapshots with the following command in a root console:

snapper list

You can then roll back your system to a previous snapshot by specifying the snapshot number. The command rolls back by creating another new snapshot entry, so your action can be undone if you wish (you can verify by running the above command to list your snapshots again afterwards):

snapper rollback <number>

I expect that after you roll back your system you can try updating again, maybe using zypper (if you didn’t before) with the following command:

zypper up
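To sketch the whole sequence in one place (the snapshot number 42 is a placeholder; run as root):

```shell
# 1. List the stored snapshots and pick a number from before the bad update
snapper list

# 2. Roll back; this itself creates new snapshot entries, so it can be undone
snapper rollback 42    # 42 is a placeholder snapshot number

# 3. Verify the new entries, then reboot and retry the update
snapper list
zypper up
```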

You can read more about this by viewing the snapper help with the following command:

snapper --help

Or, you can read about this and plenty more in detail in the man pages:

man snapper

HTH,
TSU

The boot (Grub) menu allows starting from these entries:

1 openSUSE with Linux 3.16.7.21-desktop
2 Or (recovery mode)
3 Or with Linux 3.16.7.13-desktop
4 Or recovery mode

Tried 3; got a problem loading the kernel certificate …, then was offered a login, which failed?

systemd-journald Failed to truncate file to its own size: Read-only file system

Similar messages repeat, with different numbers in the left column (timestamps??)

Ignoring that text, I then type:

journalctl -xd <Enter>

Then I can log in again as root.

Able to type here on my mobile.

In the terminal, type and view the results of:

snapper list

Type: snapper rollback 602

Result: Creating read-only snapshot of current system. IO Error.

See again: systemd-journald Failed to truncate file to its own size: Read-only file system…

Nothing else seems to happen… will try a restart.

On 2015-06-27 05:36, paulparker wrote:

> systemd-journald Failed to truncate file to its own size: Read-only
> file system

You need to boot a rescue CD of the same release and repair the filesystem.
Probably with fsck, but I’m not that familiar with btrfs.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

I was required elsewhere :frowning:

Back now; I located my openSUSE Live media and booted it so I could read and post here a bit more easily.

At least twice previously I have had similar problems. On those occasions I thought snapper records had filled the partition, so I re-installed; from what I see now it appears not to be that, and I would also like to find a better solution.

The start of boot seems normal, until I select either the latest or previous kernel/boot entry; then I end up in a failing terminal, so I use journalctl -xd <Enter>. This appears to enable READ-only mode, with other text appearing on screen every now and then, as well as what I typed.

Can mount the old /home no problem, and see the 455 GB partition in Files.

Q: How do I mount sda3, the root /, to view the log files etc. there?
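For what it’s worth, from a live system a btrfs root can normally be mounted read-only to browse it (a sketch; the mount point is a placeholder, and this only helps if the logs were actually written):

```shell
# Mount the btrfs root partition read-only from the live system
mkdir -p /mnt/sysroot          # placeholder mount point
mount -o ro /dev/sda3 /mnt/sysroot

# Browse the logs, then clean up
ls -altr /mnt/sysroot/var/log
umount /mnt/sysroot
```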

Otherwise I will try a re-boot to see if the rollback attempt worked.

In BOLD, the root / partition:


linux:~ # gdisk  /dev/sda
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sda: 976764911 sectors, 465.8 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 3A6AAD46-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 976764877
Partitions will be aligned on 2048-sector boundaries
Total free space is 4012 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  primary
   2            4096         4225023   2.0 GiB     0700  primary
**   3         4225024        88117247   40.0 GiB    EF00 ** 
   4        88117248       976762879   423.7 GiB   0700  primary

Command (? for help): q
linux:~ #

Used e2fsck to check sda3, the /, for errors; is this result another problem?


linux:~ # e2fsck -v  /dev/sda3
e2fsck 1.42.12 (29-Aug-2014)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sda3

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

linux:~ #

Used e2fsck to check sda4, the /home, for errors; is this result another problem?


linux:~ # e2fsck -v  /dev/sda4
e2fsck 1.42.12 (29-Aug-2014)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sda4

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

linux:~ # 



Thanks; just re-read what I posted a few minutes ago, and this option looks easier :slight_smile:

On 2015-06-27 11:46, paulparker wrote:

> Start of boot seems normal, until select either latest or previous
> kernel/boot settings, then end up in terminal, failing, so use
> *journalctl -xd *<Enter> this appears enable READ only mode, with other
> text every now and then appearing on screen as well as what typed.

Initially, on boot, the filesystem is mounted read-only, the operating
system starts loading some things, and runs a quick fsck. Some things are
done directly from the initial ram image; I’m unsure at which precise
instant the root filesystem is mounted r/o and used.

If the fsck succeeds, then the root filesystem is remounted r/w and boot
continues. For some reason, in your case fsck or a later test failed, so “/”
stays r/o, but the system attempts to start, and fails because it
cannot write to disk.

This situation should be better detected and reported to the user, with
a message on how to solve it.
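The current state is easy to confirm from the failing system itself (a generic check, not specific to btrfs):

```shell
# Print the mount options for "/"; the options string starts with "ro" or "rw"
findmnt -no OPTIONS /

# If it reports "ro", a manual remount can be attempted; this needs root and
# will fail again if the kernel has flagged the filesystem as errored
mount -o remount,rw / 2>/dev/null || true
```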

> Q: How to mount sda3 root / to view log files etc there ?

Pointless. No logs were written; it was r/o.

> Else try re-boot to see if attempt to rollback worked.

Impossible. r/o.

> Use e2fsck to check sda3 the / for partition errors, is result
> another problem ?

Obviously. You are trying an ext2/3/4 fsck on a btrfs partition… that can’t work.
Just try plain fsck; let it figure out the type.
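A sketch of that generic approach (the device name is from this thread):

```shell
# Confirm the filesystem type instead of assuming ext2/3/4
blkid /dev/sda3            # should report TYPE="btrfs" here

# Plain fsck dispatches to the matching fsck.<type> helper automatically;
# -N is a dry run that only shows what would be executed
fsck -N /dev/sda3
```

Note that for btrfs the fsck.btrfs helper intentionally does almost nothing; the real checking tool is btrfs check.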


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

This is the procedure I used successfully on a read-only root partition failure with btrfs, and a problem kernel (3.17.1):

  1. Download and burn appropriate rescue CD.
  2. Boot with Rescue CD.
  3. Install [the latest] btrfsprogs (if it’s not on the CD). IIRC you need 3.17 or later for Step 4. to work. Older btrfs repair tools were incomplete, I think.
  4. Run "btrfs check --repair /dev/yourpartitionid" on the unmounted partition; requires superuser privileges. If repair successful, [cross everything], and restart with a previous working kernel.
  5. Remove the bad kernel (the 3.17.1 kernel).

That’s the best I can come up with, but YMMV. :slight_smile:

With corrections:

This is the procedure I used successfully on a read-only root partition failure with btrfs, and a problem (3.17.1) kernel:

  1. Download and burn appropriate rescue CD.
  2. Boot with Rescue CD.
  3. Install [the latest] btrfsprogs (if it’s not on the CD). IIRC you need 3.17 or later for Step 4. to work. Older btrfs repair tools were incomplete, I think.
  4. Run "btrfs check --repair /dev/yourpartitionid" on the unmounted partition; requires superuser privileges. If repair successful, [cross everything], and restart Tumbleweed with the last working kernel.
  5. Remove the bad kernel.

That’s the best I can come up with, but YMMV. :slight_smile:
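The corrected steps, as a command sketch (using /dev/sda3 from this thread; run as root from the rescue system, with the partition unmounted):

```shell
# Step 3: make sure a recent enough btrfs-progs is installed
zypper up btrfsprogs
btrfs --version

# Step 4: repair the unmounted partition; --repair modifies the filesystem,
# so treat it as a last resort once backups/snapshots have been considered
btrfs check --repair /dev/sda3

# Re-run without --repair to confirm a clean result before rebooting
btrfs check /dev/sda3
```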

On 2015-06-27 19:16, consused wrote:

> - Download and burn appropriate rescue CD.
> - Boot with Rescue CD.
> - Install [the latest] btrfsprogs (if it’s not on the CD). IIRC you
> need 3.17 or later for Step 4. to work. Older btrfs repair tools
> were incomplete, I think.

Then it would be better, perhaps, to download the Tumbleweed rescue CD,
XFCE, instead of the 13.2 rescue CD. It surely will have the most recent
version of the tools.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

Perhaps; I’m not totally sure. Thinking about it, my repair was done using a Factory repair CD, with btrfsprogs 3.17 from the repo, on a Tumbleweed root partition with kernel 3.17.1 installed, which IIRC caused the corruption. After the repair, I could boot with a 3.16 kernel, and remove 3.17.1. Have the fixes in 3.17.1 been backported to 13.2’s kernel? I don’t know, but I am pretty sure btrfsprogs 3.17 or greater functionality is required for the repair.

Tumbleweed’s kernel and btrfsprogs are now at 4.0, a greater gap. The read-only state is a result of corruption (or of rolling back from a read-only snapshot), and if it is corruption and the repair is successful (that’s the big if), I guess the 3.16 kernel should boot ok. On balance, I would say a Tumbleweed rescue CD is the better risk. :wink:

On 2015-06-27 20:36, consused wrote:
> Have the fixes in 3.17.1 been backported to 13.2’s kernel(?). Don’t
> know, but pretty sure 3.17 or greater btrfsprogs functionality is
> required for repair.

The 13.2 rescue CD will not have those patches; it is frozen at release
time. That’s unfortunate for a rescue CD.


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

My attempt to burn the 13.2 rescue CD did not seem to work; when booted, it was not recognized. How do I check that it contains the Rescue system?

With the advice that the 13.2 rescue CD is apparently without the patches, what about the Live CD?

My 8GB USB with openSUSE-13.2-GNOME-Live-x86_64.iso was prepared; rather than update everything, I chose only to update btrfsprogs.

Ran btrfs check --repair /dev/sda3;
the results below show some changes.

Ran btrfs check --repair /dev/sda3 again;
the results below show no further changes.

Now to see what happens when I try to boot into it…


linux:~ # zypper info --recommends  btrfsprogs
Loading repository data...
Reading installed packages...


Information for package btrfsprogs:
-----------------------------------
Repository: openSUSE-13.2-Update
Name: btrfsprogs
Version: 4.0-7.1
Arch: x86_64
Vendor: openSUSE
Installed: Yes
Status: out-of-date (version 3.16.2-1.1 installed)
Installed Size: 2.7 MiB
Summary: Utilities for the Btrfs filesystem
Description: 
  Utilities needed to create and maintain btrfs file systems under Linux.
Recommends:
linux:~ #

linux:~ # zypper up   btrfsprogs
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following package is going to be upgraded:
  btrfsprogs 

1 package to upgrade.
Overall download size: 461.5 KiB. Already cached: 0 B  After the operation, additional 537.9 KiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package btrfsprogs-4.0-7.1.x86_64                                                                                                                                  (1/1), 461.5 KiB (  2.7 MiB unpacked)
Retrieving: btrfsprogs-4.0-7.1.x86_64.rpm ...................................................................................................................................................................[done]
Checking for file conflicts: ................................................................................................................................................................................[done]
(1/1) Installing: btrfsprogs-4.0-7.1 ........................................................................................................................................................................[done]
linux:~ # zypper info  btrfsprogs
Loading repository data...
Reading installed packages...


Information for package btrfsprogs:
-----------------------------------
Repository: openSUSE-13.2-Update
Name: btrfsprogs
Version: 4.0-7.1
Arch: x86_64
Vendor: openSUSE
Installed: Yes
Status: up-to-date
Installed Size: 2.7 MiB
Summary: Utilities for the Btrfs filesystem
Description: 
  Utilities needed to create and maintain btrfs file systems under Linux.
linux:~ # 


linux:~ # 
linux:~ # btrfs check --repair /dev/sda3
enabling repair mode
Checking filesystem on /dev/sda3
UUID: d5f082d3-xxxx-xxxx-xxxx-xxxxxxxxxxxx
checking extents
parent transid verify failed on 284917760 wanted 549755848687 found 34799
parent transid verify failed on 284917760 wanted 549755848687 found 34799
parent transid verify failed on 284917760 wanted 549755848687 found 34799
parent transid verify failed on 284917760 wanted 549755848687 found 34799
Ignoring transid failure
parent transid verify failed on 284917760 wanted 549755848687 found 34799
Ignoring transid failure
parent transid verify failed on 284917760 wanted 549755848687 found 34799
Ignoring transid failure
Fixed 0 roots.
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
Moving file 'user-484.journal' to 'lost+found' dir since it has no valid backref
Fixed the nlink of inode 578
Moving file 'system.journal' to 'lost+found' dir since it has no valid backref
Fixed the nlink of inode 634
Moving file 'system.journal.649' to 'lost+found' dir since it has no valid backref
Fixed the nlink of inode 649
Moving file 'user-484.journal.650' to 'lost+found' dir since it has no valid backref
Fixed the nlink of inode 650
warning line 3541
checking csums
parent transid verify failed on 284917760 wanted 549755848687 found 34799
Ignoring transid failure
checking root refs
Recowing metadata block 284917760
parent transid verify failed on 284917760 wanted 549755848687 found 34799
Ignoring transid failure
found 16231108627 bytes used err is 0
total csum bytes: 14384752
total tree bytes: 964100096
total fs tree bytes: 920109056
total extent tree bytes: 25870336
btree space waste bytes: 183005643
file data blocks allocated: 125426171904
 referenced 57030426624
btrfs-progs v4.0+20150429
extent buffer leak: start 284917760 len 16384
linux:~ # 
linux:~ # 
linux:~ # 
linux:~ # btrfs check --repair /dev/sda3
enabling repair mode
Checking filesystem on /dev/sda3
UUID: d5f082d3-xxxx-xxxx-xxxx-xxxxxxxxxxxx
checking extents
Fixed 0 roots.
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
checking csums
checking root refs
found 16231108627 bytes used err is 0
total csum bytes: 14384752
total tree bytes: 964100096
total fs tree bytes: 920109056
total extent tree bytes: 25870336
btree space waste bytes: 183004979
file data blocks allocated: 125426171904
 referenced 57030426624
btrfs-progs v4.0+20150429
linux:~ # 
linux:~ # 


When I tried to boot… everything started as usual, and all seems to be working.

My thanks to all!

On 2015-06-28 04:16, paulparker wrote:
>
> My attempt to burn 13.2 rescue CD did not seem work, when boot it was
> not recognized; How check contains the Rescue ?

You check it the same way as the gnome live cd.

> With advise the 13.2 rescue CD apparently without the patches, what
> about the Live ?

If it has the needed tools, fine.

> My 8GB USB with -openSUSE-13.2-GNOME-Live-x86_64.iso - was prepared
> to update entire package , chose only to update btrfsprogs
>
>
>
> Ran -btrfs check --repair /dev/sda3-
> results below some changes.
>
> Again, ran -btrfs check --repair /dev/sda3-
> results below no further changes.
>
> Now to see happen when try boot into…

Good! :slight_smile:


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

That was a good decision you made. I now see that 13.2’s btrfsprogs, available from the standard Oss Updates repo, is at a 4.0 build from 1 June, well above the 3.17 minimum requirement I mentioned.

> Ran btrfs check --repair /dev/sda3
> results below some changes.

The results (not re-quoting here) look positive for a successful reboot; they fit the errors you experienced and explain the defensive read-only condition.

Glad to see it working. :slight_smile: