It is so because you told the system to. A system that simply ignores errors would be ridiculous. It would quickly bring your number of posts from 48 to 48000, or keep it stuck at 48 for the rest of our days. Your RAID needs to be OK first.
> The problem is the machine doesn’t have a monitor and in single mode
> there is no chance to connect via ssh…
That is a common RAID issue. If any of the array components is found with
errors, the array is broken (degraded) and must be repaired.
In a RAID 5 setup, you can lose your data if 2 of the disks fail, so
recovery is a delicate operation that has to be managed by a system
administrator and not randomly by the operating system itself.
> Can I override “It is so, because you told the system to”?
Well, if you configured your /etc/fstab to detect and mount a RAID volume
that actually has some problem, I find it quite normal that the system is
reluctant to boot “normally”.
Just review the errors you are getting in the logs and check the status of
the array.
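If it is a software RAID handled by mdadm, something like this shows whether the array is degraded (just a sketch; /dev/md0 is only an example name, your array may be called something else):
cat /proc/mdstat
mdadm --detail /dev/md0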
Found this old thread… I also don’t like the boot failing just because some less important disks fail.
I was looking for a better solution, but until now I put ’noauto’ in fstab:
/dev/sde1 /bigdisk ext3 noauto 0 0
then in an init script such as /etc/init.d/boot.local:
mount /bigdisk
> Found this old thread… I also don’t like the boot failing just because
> some less important disks fail.
> I was looking for a better solution, but until now I put ’noauto’ in
> fstab:
> /dev/sde1 /bigdisk ext3 noauto 0 0
>
> then in an init script such as /etc/init.d/boot.local:
> mount /bigdisk
>
> not the neatest, but… it works.
You can use “nofail” instead.
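That way the filesystem is still mounted automatically when the disk is present, but a missing or failing disk no longer stops the boot. The same fstab line with that option (sketch only, adjust device and mount point to your setup):
/dev/sde1 /bigdisk ext3 nofail 0 0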
--
Cheers / Saludos,
Carlos E. R.
(from 11.2 x86_64 “Emerald” GM (Elessar))