Testing of 13.2 daily snapshots.

(from factory mail message)

The development plan has changed. So this is a place where you can discuss your testing of one of the daily snapshots.

Yesterday, I installed “openSUSE-FTT-DVD-x86_64-Snapshot20140528-Media.iso”

That’s the file name of the iso that I downloaded via the OpenQA site.

As you can see from that name, it is a snapshot from May 28, 2014 and is for 64-bit architecture. It is the DVD image.

I wrote that to a USB stick (using “dd_rescue”), booted in UEFI mode, and installed.
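For reference, writing the ISO to a USB stick with “dd_rescue” looks roughly like this sketch; “/dev/sdX” is a placeholder for the actual stick, which you should double-check with “lsblk” first, since the write destroys whatever is on it:

```shell
# Identify the USB stick first -- writing to the wrong device destroys its data.
lsblk

# /dev/sdX is a placeholder; use the whole device, not a partition.
dd_rescue openSUSE-FTT-DVD-x86_64-Snapshot20140528-Media.iso /dev/sdX

# Flush buffers before unplugging the stick.
sync
```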

I have not yet done much testing of the software. So this is a report on installation.

Installer

The installer looks different. This is probably just a change of theme. I like the change.

Partitioner

The partitioner suggested “btrfs”. Apart from my reservations about “btrfs”, the proposal is ridiculously long with a list of “btrfs” subvolumes. IMO, they need to make that shorter and simpler (or go back to “ext4”).

Not wanting that, I looked for the “Import partitioning” option. It was not there. I’m sorry to see that, as I have often found that useful.

The alternative was to select “Create Partitioning” and then, on the next screen, “Custom (expert mode)”. That allowed me to set up the partitioning as I wanted it.

On exiting partitioning, I received a warning message:

  • Warning: With your current setup, your openSUSE installation will encounter
    problems when booting, because you have no FAT partition mounted on
    “/boot/efi”.

The message is bogus. I had assigned a mount for “/boot/efi”. This is reported as Bug 869716. Fortunately, I was able to tell it to ignore the “problem” and continue.

Booting

The system booted into emergency mode every time. This turns out to be Bug 878473, a problem with LVM handling. The workaround suggested in the bug report solved the problem.

Yast

Yast Software Management crashes. This is the same problem reported for M0. The fix is to delete “libproxy1-config-kde4”. And you also have to blacklist that package, or it will be reinstalled. The curses version of Yast works, and can be used to delete that package. Or you can start Yast as:

yast2 --gtk

to get the “gtk” version, which also works.
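From the command line, the delete-and-blacklist workaround can presumably be done with zypper as root (package name taken from the report above; “addlock” is zypper's mechanism for blacklisting a package):

```shell
# Remove the package that makes YaST software management crash.
zypper remove libproxy1-config-kde4

# Lock it so the next update does not pull it back in.
zypper addlock libproxy1-config-kde4

# Verify the lock is in place.
zypper locks
```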

If you install with Yast software manager (at least with the “qt” version), then it stays active after the install, allowing you to review the install logs. I like this change.

Icewm

Rebooting from the icewm logout menu does not work (it does nothing). Reported as Bug 880774.

ssh

During the install, I clicked the option to start the ssh service and open the firewall. However, the installed system did not start “sshd”, though it had opened the firewall. I have not reported this as a bug. I would first like to see if others have the same problem.
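If others see the same thing, the usual manual workaround on a systemd-based system would be something like this (assuming the service name is “sshd”, as it is on openSUSE; run as root):

```shell
# Start sshd now and enable it at boot.
systemctl start sshd.service
systemctl enable sshd.service

# Confirm the service is running.
systemctl status sshd.service
```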

On Fri 30 May 2014 07:26:02 PM CDT, nrickert wrote:

PARTITIONER
The partitioner suggested “btrfs”. Apart from my reservations about
“btrfs”, the proposal is ridiculously long with a list of “btrfs”
subvolumes. IMO, they need to make that shorter and simpler (or go back
to “ext4”).

Not wanting that, I looked for the “Import partitioning” option. It was
not there. I’m sorry to see that, as I have often found that useful.

Hi
You should be able to just rescan devices, and it will show the
existing partitions with no selections, that’s what I always do…

The additional subvolumes exist in the fstab as well; it may be tied in
with bootable snapshots (but that’s just an uneducated guess from
me…).


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
openSUSE 13.1 (Bottle) (x86_64) GNOME 3.10.1 Kernel 3.11.10-11-desktop
If you find this post helpful and are logged into the web interface,
please show your appreciation and click on the star below… Thanks!

Hi

On the Factory mailing list, Stephan Kulow posted about the design of the installer: http://lists.opensuse.org/opensuse-factory/2014-05/msg00222.html - maybe useful in case of further questions about why it looks different at the moment. However, I have only had time to watch the testing videos; SuSE 12 will look very fresh (it would be great to test the 12 beta).

I agree about the suggested default filesystem choice and the partitioner. Not for human (casual user) consumption ;).

Regards

Needed for excluding some system directories from snapshots.

On Sat 31 May 2014 03:16:01 PM CDT, consused wrote:

malcolmlewis;2646404 Wrote:
> Hi
> The additional subvolumes exist in the fstab as well, it maybe tied in
> with bootable snapshots (but that’s just an uneducated guess from
> me…).
Needed for excluding some system directories from snapshots.

Hi
Yes, but you would think this could be handled differently than
increasing the fstab entries, maybe in the future?



Hmm, didn’t notice them recently in my Tumbleweed fstab. Maybe 'cause I arrived by zypper dup from 12.3, or it’s new in 13.2 M0 as your post implies?

In my untouched 13.2 M0, “mount” gives:


 
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=499280k,nr_inodes=124820,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/sda2 on / type btrfs (rw,relatime,space_cache)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
tmpfs on /var/run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /var/lock type tmpfs (rw,nosuid,nodev,mode=755)
/dev/sda2 on /var/tmp type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/spool type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/opt type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/lib/pgqsl type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/log type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/lib/named type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/lib/mailman type btrfs (rw,relatime,space_cache)
/dev/sda2 on /usr/local type btrfs (rw,relatime,space_cache)
/dev/sda2 on /var/crash type btrfs (rw,relatime,space_cache)
/dev/sda2 on /tmp type btrfs (rw,relatime,space_cache)
/dev/sda2 on /srv type btrfs (rw,relatime,space_cache)
/dev/sda2 on /opt type btrfs (rw,relatime,space_cache)
/dev/sda2 on /boot/grub2/x86_64-efi type btrfs (rw,relatime,space_cache)
/dev/sda2 on /boot/grub2/i386-pc type btrfs (rw,relatime,space_cache)
/dev/sda3 on /home type btrfs (rw,relatime,space_cache)
192.168.0.1:/home on /home/***/ss type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.0.1)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)


Does that give us another picture of the complexity?

Regards

How? A subvolume can exist in only one place in the filesystem tree. As soon as you create a clone (with the intention of mounting it as an alternate root), the clone loses access to the subvolume. So you need to associate the “old” subvolume with a “new” mount point. I.e., you have a main rootvol (which is mounted on /) and rootvol/var (which is implicitly accessible as /var in this case). Now clone rootvol into rootclone. If you then mount rootclone as /, /var will be empty, unless you explicitly mount rootvol/var on rootclone/var.

Whether so many subvolumes (by default) are really needed … I do not know. Personally I think having /var would be more than enough.
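The point above can be sketched with btrfs commands; this is a hypothetical illustration (the pool mount point “/mnt/pool” and device “/dev/sdXn” are placeholders, and it would need root on a btrfs filesystem):

```shell
# Create a root subvolume with a nested subvolume for /var.
btrfs subvolume create /mnt/pool/rootvol
btrfs subvolume create /mnt/pool/rootvol/var

# Snapshot ("clone") the root subvolume. Nested subvolumes are NOT
# cloned; they show up as empty directories inside the snapshot.
btrfs subvolume snapshot /mnt/pool/rootvol /mnt/pool/rootclone

# If the clone is mounted as /, its var is empty, so the original
# var subvolume has to be mounted there explicitly:
mount -o subvol=rootclone /dev/sdXn /
mount -o subvol=rootvol/var /dev/sdXn /var
```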

On Fri, 30 May 2014 18:46:01 GMT
nrickert <nrickert@no-mx.forums.opensuse.org> wrote:

> The development plan is changed. So this is a place where you can
> discuss your testing of one of the daily snapshots.

Tried installing from the KDE-live snapshot-20140530, but got a black
screen of death when trying ‘installation’, so I tried ‘KDE-live’ and
installing from there, but got a message ‘base product not found’. So,
that was that. I usually use the DVD install, or occasionally NET, so I’m not
sure what is supposed to happen with ‘live’.

The GNOME-live, KDE-live, and rescue ISOs are all that are currently available on
http://download.opensuse.org/factory/iso/.


Graham P Davis, Bracknell, Berks.
openSUSE 13.2-m0 (64-bit); KDE 4.13.1; AMD Phenom II X2 550 Processor;
Kernel: 3.15.0-rc6; Video: nVidia GeForce 210 (using nouveau driver);
Sound: ATI SBx00 Azalia (Intel HDA)

That sounds like the problem I had with the KDE-live from Milestone 0. The live installer still depended on the second stage of install following the reboot. But that second stage no longer exists.

For the recent snapshot, I avoided live installers, due to this factory mailing list thread. That’s why I went to the openQA site, and downloaded the DVD iso from there. If you can find a line listed as DVD with a recent build, click on the magnifying glass to get more details. And those details include a download link (with no md5 or similar checksum).

An interesting experience today.

On my install from that factory snapshot, I changed the repos to factory, and updated with “zypper dup”. That was about 3 days ago, and went fine.

Today, I tried another update. Again:


# zypper dup

There were a bunch of updates, so I told it to go ahead. I let it run unattended.

When I returned, maybe 15 minutes later, there was a login screen. Oops. Apparently the desktop had crashed during the update.

Then I ran “zypper dup” again, to finish the incomplete job. Oops again. It complained of a missing library.

OK. Lesson learned. Never do “zypper dup” from within the GUI. Instead, do it from a console where there is less that can break.

The fix: I downloaded the live rescue image (again, a recent factory snapshot).

Booting the rescue system, I mounted the root of the failed system to “/mnt”. Then I mounted other file systems. I’m not sure it was needed but, just in case, I did bind mounts of “/dev”, “/proc” and “/sys”.
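The mount sequence from the rescue system would look roughly like this (the device name “/dev/sdXn” is a placeholder for the actual root partition; run as root):

```shell
# Mount the root filesystem of the broken system.
mount /dev/sdXn /mnt

# Mount any other filesystems the repair may need, e.g.:
# mount /dev/sdXm /mnt/boot/efi

# Bind-mount the virtual filesystems, just in case.
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
```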

I knew that I could not chroot, because zypper would still fail on the same missing library. So, instead, I ran


# zypper -R /mnt dup -D

The output looked good, so I repeated without the “-D”. It completed the earlier failed update. I think it did not download anything; it used the packages that had already been downloaded.

And my factory system now seems fine again. In particular, “zypper” now runs without a missing library problem, and reports that I am fully up to date.

On 2014-06-04 03:56 (GMT) nrickert composed:

> An interesting experience today.

> On my install from that factory snapshot, I changed the repos to
> factory, and updated with “zypper dup”. That was about 3 days ago, and
> went fine.

> Today, I tried another update. Again

> Code:
> --------------------
> # zypper dup
> --------------------

> There were a bunch of updates, so I told it to go ahead. I let it run
> unattended.

> When I returned, maybe 15 minutes later, there was a login screen.
> Oops. Apparently the desktop had crashed during the update.

> Then I ran “zypper dup” again, to finish the incomplete job. Oops
> again. It complained of a missing library.

> OK. Lesson learned. Never do “zypper dup” from within the GUI.
> Instead, do it from a console where there is less that can break.

http://lists.opensuse.org/opensuse-factory/2014-06/msg00009.html

Long ago, in Factory, I got in the habit of doing zypper dup from tty3, without
X running, and doing it only after:
zypper -v in zypper libzypp libsolv-tools glibc udev systemd dracut
perl-Bootloader yast2-bootloader

and usually only after removing all but the running kernel and after ‘zypper al
kernel*’; then, after everything else is done: ‘zypper rl kernel*; zypper -v dup’

Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata *** http://fm.no-ip.com/

On 2014-06-04 02:04 (GMT-0400) Felix Miata composed:

> On 2014-06-04 03:56 (GMT) nrickert composed:

>> An interesting experience today.

>> On my install from that factory snapshot, I changed the repos to
>> factory, and updated with “zypper dup”. That was about 3 days ago, and
>> went fine.

>> Today, I tried another update. Again

>> Code:
>> --------------------
>> # zypper dup
>> --------------------

>> There were a bunch of updates, so I told it to go ahead. I let it run
>> unattended.

>> When I returned, maybe 15 minutes later, there was a login screen.
>> Oops. Apparently the desktop had crashed during the update.

>> Then I ran “zypper dup” again, to finish the incomplete job. Oops
>> again. It complained of a missing library.

>> OK. Lesson learned. Never do “zypper dup” from within the GUI.
>> Instead, do it from a console where there is less that can break.

> http://lists.opensuse.org/opensuse-factory/2014-06/msg00009.html

> I long ago in Factory got in the habit of doing zypper dup from tty3, without
> X running, and doing it only after:
> zypper -v in zypper libzypp libsolv-tools glibc udev systemd dracut
> perl-Bootloader yast2-bootloader

Oops, forgot to include rpm.

> and usually only after removing all but running kernel and after ‘zypper al
> kernel*’, then after everything else is done: ‘zypper rl kernel*; zypper -v dup’


The live ISOs, even from openQA, have persistence enabled, so even though the live installer doesn’t work, they are good for testing…

The live installer seems to fail with the boot loader proposal. Any ruby programmers here?

Hmmmm…
That looks like a dupe of a bug I submitted ages ago on 13.1, but I referred to the environment as “Minimal X” and not “IceWM”.

Actually, “Minimal X” has been a buggy desktop option and a stepchild for the longest time. Up until and including 12.3 it was something else, but it became IceWM in 13.1. Will it be fixed for 13.2? Well, “Minimal X” hasn’t worked completely right going at least as far back as 12.2 (and I didn’t start to use Minimal X until then).

As I noted in my bug report, my solution was to simply not shutdown or reboot using the graphical menu option, I do it from a root console (and it works fine).

If anyone wants to work on this: since my workaround works, IMO it provides a clue as to what is wrong with IceWM and possibly how to address it.

TSU

I obliquely referenced the large number of mount points in my “Decision on BTRFS?” thread which went largely unnoticed.

I assume that these mount points are recommended for partitioning support for BTRFS but that’s all it is… an assumption. I guess as I use that machine more I expect to see those mount points populated which may provide some justification for the design.

But, as I also noted, those mount points seem to be largely virtual. Despite large partitions allocated for each mount point, the real physical resources are far less (i.e. many mount points are 999 MB each, whereas total RAM is 2 GB and physical disk space is 8 GB). It will be interesting if I run into resource starvation on such a tiny system.

TSU

Are these live installers related to any specific install image?
I didn’t have any problem with the network install into a new, empty VMware guest.
Since there was no prior upgradeable system, I didn’t notice anything unusual, and assumed that there had to be a 2nd-stage install (the first stage is always disk partitioning; the 2nd stage actually copies files to the disk).

TSU

Traditionally, opensuse has installed “icewm-lite”. I usually install “icewm-default”, which results in “icewm-lite” being uninstalled. The difference is that “icewm-default” has slightly better panel support. For example, on a laptop, you can actually run “nm-applet” and configure WiFi connections in Icewm.

13.2, as installed from the factory snapshot, shows up as including both “icewm-default” and “icewm-lite”. So I guess that they are no longer seen as conflicting.

As far as I know, the reason “Icewm” is always installed, is that the Yast installer runs on top of it. But, now that they have eliminated the final stage of install after the reboot, they probably only need “Icewm” on the DVD and don’t need to have it installed. So maybe it will stop being an automatic install at some future time.

As I noted in my bug report, my solution was to simply not shutdown or reboot using the graphical menu option, I do it from a root console (and it works fine).

I agree that it is not a serious problem. One can logout from “Icewm”, and then shutdown from the login screen (“kdm” or “gdm” or “lightdm”).