I am running OBS Appliance 2.8.2 based on openSUSE 42.2. When the appliance boots I get the error "Failed to start LSB: Finds the storage device to be used for OBS server and/or worker." I followed the setup instructions and created an LVM volume group named "OBS", and as far as I can tell the volume group is healthy. I have included the output of "systemctl status obsstoragesetup.service" below. In addition, when I try to run "zypper up" I get "Error Code: Write Error"; I checked the root drive and it is at 100% utilization. Before the update process starts I also get a notification that my repository is outdated and that I should consider an update. Should I be looking at changing repositories? And given that my OBS LVM is on a separate storage drive, should I just reinstall with the latest appliance image?
● obsstoragesetup.service - LSB: Finds the storage device to be used for OBS server and/or worker
Loaded: loaded (/etc/init.d/obsstoragesetup; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2018-03-06 18:44:59 UTC; 16min ago
Docs: man:systemd-sysv-generator(8)
Process: 1365 ExecStart=/etc/init.d/obsstoragesetup start (code=exited, status=1/FAILURE)
Mar 06 18:44:59 linux obsstoragesetup[1365]: OBS_SERVER_SIZE=38912
Mar 06 18:44:59 linux obsstoragesetup[1365]: TOTAL_SWAP_SIZE=1024
Mar 06 18:44:59 linux obsstoragesetup[1365]: FINAL_VG_SIZE = 12284 - 38912 - 25600 - 1024
Mar 06 18:44:59 linux obsstoragesetup[1365]: FINAL_VG_SIZE=-53252
Mar 06 18:44:59 linux obsstoragesetup[1365]: OBS_WORKER_ROOT_SIZE=-26626
Mar 06 18:44:59 linux obsstoragesetup[1365]: MIN_WORKER_ROOT_SIZE=4096
Mar 06 18:44:59 linux systemd[1]: obsstoragesetup.service: Control process exited, code=exited status=1
Mar 06 18:44:59 linux systemd[1]: Failed to start LSB: Finds the storage device to be used for OBS server and/or worker.
Mar 06 18:44:59 linux systemd[1]: obsstoragesetup.service: Unit entered failed state.
Mar 06 18:44:59 linux systemd[1]: obsstoragesetup.service: Failed with result 'exit-code'.
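For what it is worth, the log arithmetic shows why the service exits with status 1: the setup script only sees 12284 MB in the volume group, but subtracts the server size (38912 MB), a 25600 MB reservation, and swap (1024 MB), leaving a negative remainder; half of that (-26626 MB) is then below MIN_WORKER_ROOT_SIZE (4096 MB). A minimal sketch of that calculation, with variable names mirroring the log (what the 25600 MB reservation is for is my assumption, possibly the worker cache):

```shell
#!/bin/sh
# Reproduce the size arithmetic from the obsstoragesetup log (all values in MB).
VG_SIZE=12284           # size the script detected for the "OBS" volume group
OBS_SERVER_SIZE=38912   # from the log: space wanted for the OBS-server volume
RESERVED_SIZE=25600     # from the log; assumption: reserved for the worker cache
TOTAL_SWAP_SIZE=1024    # from the log
MIN_WORKER_ROOT_SIZE=4096

FINAL_VG_SIZE=$(( VG_SIZE - OBS_SERVER_SIZE - RESERVED_SIZE - TOTAL_SWAP_SIZE ))
OBS_WORKER_ROOT_SIZE=$(( FINAL_VG_SIZE / 2 ))

echo "FINAL_VG_SIZE=$FINAL_VG_SIZE"                 # -53252, matching the log
echo "OBS_WORKER_ROOT_SIZE=$OBS_WORKER_ROOT_SIZE"   # -26626, matching the log
if [ "$OBS_WORKER_ROOT_SIZE" -lt "$MIN_WORKER_ROOT_SIZE" ]; then
    echo "worker root would be below the minimum; this is why the service fails"
fi
```

So the script is not failing to find the "OBS" volume group; it is concluding there is not enough room left in it for the worker root.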
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 9.0M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 2.2G 2.2G 0 100% /
/dev/sda2 18G 1.4G 16G 9% /var/cache/obs
/dev/mapper/OBS-server 38G 1.5G 34G 5% /srv/obs
tmpfs 396M 0 396M 0% /run/user/0
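Regarding the full root filesystem: before deciding on a reinstall, it may be worth checking whether zypper's package caches or old journal data can be cleared from /, since the 2.2G root partition is what "zypper up" writes to. A small sketch of that check (the cleanup commands are printed as suggestions, since they need to be run as root; they are standard zypper/journalctl invocations, not something specific to the OBS appliance):

```shell
#!/bin/sh
# Report how full the root filesystem is and suggest common cleanup steps.
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
echo "root usage: ${usage}%"
if [ "$usage" -ge 95 ]; then
    echo "root is nearly full; typical cleanup steps (run as root):"
    echo "  zypper clean -a                 # drop downloaded package caches"
    echo "  journalctl --vacuum-size=50M    # trim the systemd journal"
fi
```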