13.2 crazy install behaviour

Hello everybody,

Today I experienced the craziest install behaviour:

  • I upgraded the system from 12.2 to 13.2 - a clean install, with a backup of /etc & my home folder.
  • After the install I logged in as a normal user on the console and copied/moved most of my previous settings from the backup. Without rebooting I logged in in graphical mode, did more configuration tuning, and also compiled & installed some programs. After that I ran a system update and rebooted.
  • After the reboot I logged in and all my restored backups & configurations were gone! It was like a freshly installed system - no restored folders in my home directory, but the previously installed programs were still present - WTF!
  • After this spooky episode I copied the folders from the previous home directory again, rebooted the system, and the settings survived the reboot (so far).

What should I expect from this new **** version?

My config:

  • root on an SSD, mounted by id with noatime/acl/user_xattr/discard (roughly as in the fstab sketch below)
  • home on RAID 1, mounted by id with acl/user_xattr
  • tmpfs mounted in RAM
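
For reference, the relevant /etc/fstab lines look roughly like this - the by-id names and the tmpfs mount point are only placeholders here, not my exact ones; only the options are the ones listed above:

/dev/disk/by-id/<ssd-id>-part2    /      ext4   noatime,acl,user_xattr,discard  1 1
/dev/disk/by-id/<home-id>-part1   /home  ext4   acl,user_xattr                  1 2
tmpfs                             /tmp   tmpfs  defaults                        0 0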

I see now that I don’t have the /dev/mapper device anymore! It was displayed during the install and now it is gone… again WTF!

On 2014-12-13 20:56, pixecs wrote:
>
> I see now that I don’t have the /dev/mapper device anymore! It was
> displayed during the install and now it is gone… again WTF!

which means that you copied your files to a destination that is not
mounted, so it “disappears”.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

So if it was not mounted, how do you explain that I was able to log in in graphical mode? Is only one HDD from the hardware RAID active?
The /dev/mapper device, which was displayed during the install, is not available anymore; instead the partitioner displays the component HDDs. So how can I add the hardware RAID back to the Linux devices?

On 2014-12-13 22:06, pixecs wrote:
>
> robin_listas;2682706 Wrote:
>> On 2014-12-13 20:56, pixecs wrote:
>>>
>>> I see now that I don’t have the /dev/mapper device anymore! It was
>>> displayed during the install and now it is gone… again WTF!
>>
>> which means that you copied your files to a destination that is not
>> mounted, so it “disappears”.

> So if it was not mounted, how do you explain that I was able to log in
> in graphical mode?

Because the /home directory did exist, but nothing was mounted there.
You copied your files to that place. If /home is mounted on top of it
later, the previously copied contents are no longer visible.

Or the other way round; in any case, contents “disappear” from sight.

You have to find out why your raid is not always mounted.
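
A quick way to see what is really behind /home at any given moment is something like:

df -h /home            # which device (or the root filesystem) is actually serving /home
grep home /etc/fstab   # what is supposed to be mounted there

If df shows the root device instead of the raid device, then the files were copied onto the unmounted mount point.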

> Is only one HDD from the hardware RAID active?

That’s another possibility.

> The /dev/mapper device, which was displayed during the install, is not
> available anymore; instead the partitioner displays the component HDDs.
> So how can I add the hardware RAID back to the Linux devices?

Wouldn’t it be /dev/md0, etc., for raid?
I think the mapper devices are used for encrypted devices.

Wait, you said hardware raid… real hardware, or fake raid?
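
A rough way to tell, assuming the usual tools are installed, is something like:

lspci | grep -i -E 'raid|sata'   # a true hardware raid card shows up as its own RAID controller
cat /proc/mdstat                 # Linux software (md) raid arrays, if any
dmraid -r                        # disks carrying BIOS/fake raid metadata, if any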

No, I don’t know how to solve this raid part. Sorry.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

Yes, it is a real hardware RAID - the motherboard is a Crosshair IV Formula.
I found out why the RAID was not found: during the install I made my own partitioning scheme, so I mounted the home partition using the UUID of the first partition of the first HDD in the RAID. Yeah, I know, a stupid move :frowning:
I reinstalled the system using the whole first RAID partition (by device name) for home, and now it refuses to boot because lvm2 is waiting for a partition named differently from the one shown during setup.
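
In other words, the fstab entry pointed at a raw member disk instead of the assembled set; it should reference the dmraid device under /dev/mapper, roughly like this (the set name and partition number here are from my system and may differ elsewhere):

# wrong: UUID of a partition on one raid member disk
# UUID=<member-partition-uuid>        /home  ext4  acl,user_xattr  1 2
# right: the assembled fake raid device created by dmraid
/dev/mapper/pdc_diibbhefei-part5      /home  ext4  acl,user_xattr  1 2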

And in the end I installed 13.1, because the latest version is way too weird for my taste - I’m thinking of switching to some other decent KDE distribution, because for now I’m disappointed by how the new version handles RAIDs. Thanks for your support.

Looking at the specs for the board you reference, it looks to me like this is a FAKE RAID, also called BIOS-assisted RAID. It is not true hardware RAID. Thus it may require drivers. Have you checked?

Fake or not, it is supposed to work once it was detected during the install, right?
I used some files from /etc/udev from the previous install (70-kpartx.rules & 71-kpartx-compat.rules) and I managed to convince udev to detect the RAID, but not the RAID partitions, so I threw in the towel.
dmraid & lvm were installed, and during the install all the RAID partitions were shown.
The error after install & reboot was: lvm2 was searching for RAID partitions whose names contained \x2d (like /dev/mapper/pdc_diibbhefei\x2dpart5).
I compiled & installed the latest lvm package (2-2.02.114-160.1), which was supposed to solve the issue, but it didn’t.
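
For completeness, the manual equivalent of what those kpartx udev rules are supposed to do is roughly this (the set name is the one from the error above; it may differ after a fresh install):

dmraid -ay                               # activate all fake raid sets dmraid can find
kpartx -a -v /dev/mapper/pdc_diibbhefei  # create the partition mappings for that set
ls -l /dev/mapper/                       # the partition devices for the set should appear here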

Some FAKE RAIDs work and some don’t without some form of driver.

The trouble is that there is no true standard for FAKE RAID; each chipset can be a bit different. The chipset and BIOS only provide help at boot, then it turns into software RAID.

The RAID is AMD, without any entry in the BIOS. It presents itself only at boot (CTRL+F) with its own BIOS, so I suppose it is not fake RAID.

Also, in the previous version it worked out of the box, so I expect it to work in the newer version too.
I don’t have any problems compiling the kernel (I’m also a C/C++ programmer), but a warning or some info should be displayed if I need to do that, or if I should compile or search for some drivers - not be left in the dark after an upgrade.

So from my point of view the latest version is cr*p - and I’m looking for something better, preferably without this systemd mess.

No, it is not “real hardware raid”.

Thanks for your opinion. I’m not starting a flame war, but my point is that even though in previous OS incarnations the RAID worked out of the box, now it is not working, and no hint on how to solve this issue is given.

Without knowing the exact hardware specifications (which you haven’t posted yet), it’s impossible to say whether it’s a specific chipset issue or a kernel issue.

Here is my system: MB: Asus Crosshair IV Formula (CROSSHAIR IV FORMULA - Support), CPU: AMD Phenom II X6 1090T, RAM: 16GB Corsair CL9 Dominator, HDD: / on an OCZ-AGILITY3, home on RAID 1: WDC WD1002FAEX + ST31000524AS; the RAID is created with the AMD RAID utility (it has its own BIOS, accessible after boot with CTRL+F).

RAID 1 uses SATA ports 1 & 3 (the first 4 SATA ports are in RAID mode), the OCZ is on SATA 5, the DVD-RW on SATA 6, another WDC WD10EZEX is on the JMicron controller, and, if it has any importance, an R9 290 (http://www.asus.com/Graphics_Cards/R9290DC24GD5/).

On 2014-12-14 19:26, pixecs wrote:
>
> The RAID is AMD, without any entry in the BIOS. It presents itself only
> at boot (CTRL+F) with its own BIOS, so I suppose it is not fake RAID.

No. True hardware raid is pretty expensive and relatively high end. All
the motherboards I read about are fake raid, but it is not easy to know
for certain, because they don’t tell you so.

Fake raid only supports reading in hardware, not writing, so that the
system can boot. Once booted, the operating system loads a driver
specific to that chipset, and the processing is then done by the CPU,
the same as for normal software raid.

The manufacturers write the driver for Windows, seldom for Linux. If you
need a driver, and it looks like you do, you have to, well, do things to
get it and load it. Maybe you are lucky and the kernel has support for
your raid internally.

(and if the driver is from the manufacturer, you need them to redo it and
publish it for every kernel version)

<http://en.wikipedia.org/wiki/RAID#FAKE>
<http://en.wikipedia.org/wiki/Mdadm#EXTERNAL-METADATA>
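
Whether the dmraid tool at least knows your chipset’s metadata can be checked with something like this (the pdc_ prefix in the device name you quoted is the format dmraid assigned to it):

dmraid -l   # lists the metadata formats dmraid understands (pdc, isw, nvidia, etc.)
dmraid -s   # shows the raid sets it actually found on the disks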

You have to learn what you really have and how to make it work in Linux,
and be aware of it on installs/updates, because it causes problems.

IMO, I would not use it.

I would use plain Linux native software raid, unless I were double
booting with Windows.


Cheers / Saludos,

Carlos E. R.
(from 13.1 x86_64 “Bottle” at Telcontar)

Your only question was “What should I expect from this new **** version?”. It did not look like something that required an answer. And if you have hardware RAID, I have no hint until you tell us what controller you have. But if we accept the theoretical possibility that you may have fake RAID, enabling dmraid-activation.service may help.

systemctl status dmraid-activation.service
systemctl enable dmraid-activation.service
systemctl start dmraid-activation.service

If it was already enabled, the first command may give some hint about what it did during startup.
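
If the service does assemble the set, then after a reboot something like this should show the mapper devices and the mount again (just a generic sanity check, not specific to your controller):

ls -l /dev/mapper/   # the assembled raid set and its partition devices should be listed
findmnt /home        # confirms which device actually ended up mounted on /home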