read-only filesystem...RPM failed: error: can't create transaction lock on /var/lib/rpm/.rpm.lock

I have a fresh install of openSUSE 13.2 with GNOME that I am trying to update. There were about 8 GB of packages to install/update (some I selected to install manually), and it seems as though /var is suddenly a read-only file system.
I was towards the end of the install/update when I started getting this error:

Subprocess failed. Error: RPM failed: error: can't create transaction lock on /var/lib/rpm/.rpm.lock (Read-only file system)


My / partition is 55GB formatted as ext4. My /home partition is 402GB formatted as XFS (I got tired of Btrfs filling up my / partition with snapshots, so I reinstalled with ext4). According to GParted my / partition is only 32% used.

If I try to make a directory in root's home I get this:

linux:~ # mkdir Desktop
mkdir: cannot create directory ‘Desktop’: Read-only file system
linux:~ # 

and when I try to start Nautilus from a root console I get an error window popping up:


Unable to create required folders. Please create the following folders, or set permissions such that they can be created:
/root/Desktop, /root/.config/nautilus

I would appreciate any help on this.

Hi

… and it seems as though /var is suddenly a read-only file system

Would you verify that the /var directory is read-only by posting the output of these:

ls -l /var
ls -l /

You could also include the output from these two:

cat /proc/mounts
df -h

Cheers,
Olav

On 2015-05-10 22:16, marinegundoctor wrote:
>
> I have a fresh install of openSUSE 13.2 with GNOME that I am trying to
> update. There were about 8 GB of packages to install/update (some I
> selected to install manually), and it seems as though /var is suddenly a
> read-only file system.

Sometimes it happens. The output of the command “mount” would tell you. It typically happens because something went very wrong, and the kernel protects the filesystem from further damage by suddenly remounting it read-only.
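
For example, something like this would show it (the device name here is just an illustration; what matters is whether “ro” appears among the mount options of /):

mount | grep ' / '
# e.g.: /dev/sda2 on / type ext4 (ro,relatime)   <- “ro” means remounted read-only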

What I would do is halt the machine (which will probably crash or fail; if so, unmount /home and hit the power button), then reboot with another system or a live rescue system, and run fsck on the root partition.
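
From the rescue system, assuming the root partition is /dev/sda2 (substitute your actual one), something like:

fsck -f /dev/sda2   # force a full check; the partition must not be mounted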

Then, after reboot, I would check the log. You can try to do it now as
well, if it works. Try to find out what happened.
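
For example, if the journal from the previous boot was kept (these are just two ways to look; the exact messages vary):

journalctl -b -1 -p err                              # errors from the previous boot
grep -i 'remounting.*read-only' /var/log/messages    # if a syslog daemon writes there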


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

Thank you both for responding so quickly. Carlos, you were correct in that “something went very wrong.” I tried to reboot after starting this thread but ended up in emergency mode with nothing mounted. I ran fsck with the -y option on the root partition, and it took about 8 minutes to go through it all. The system would boot after that but immediately crashed when GNOME tried to bring up its login screen. I’m not sure what happened other than some kind of catastrophic drive failure; however, the drive also holds my /home and it is working fine.

I booted into the openSUSE live USB, deleted everything on /dev/sdb except the ‘/home’ partition, and reinstalled with ‘/’ on /dev/sda (a solid-state drive). It’s only 65GB, so I’ll have to get my hands on another drive to use for ‘/home’ before /dev/sdb goes out completely.

Run smartctl to check the drive. You can have bad or even weak sectors that only directly affect one area, but any bad sectors indicate the drive is failing. These things will show in the SMART report.
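
For example (assuming the suspect drive is /dev/sdb, as above):

smartctl -a /dev/sdb   # print the full SMART report
# watch attributes such as Reallocated_Sector_Ct and Current_Pending_Sector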

On 2015-05-11 02:26, gogalthorp wrote:
>
> Run smartctl to check the drive. You can have bad or even weak sectors
> that only directly affect one area, but any bad sectors indicate the
> drive is failing. These things will show in the SMART report.

Agreed.

I would then run the short and long tests.
Ask again if you need details. :)
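
For example (again assuming /dev/sdb):

smartctl -t short /dev/sdb     # quick self-test, takes a couple of minutes
smartctl -t long /dev/sdb      # full surface scan, can take hours
smartctl -l selftest /dev/sdb  # show the results once the tests finish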


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))