a start job is running for dev-sdc1.device - after unplugging old hard disc

Hi,
Some months ago I had a hard disc defect which caused my system to spam a lot of hard disc errors during boot. It did not prevent the system from booting into openSUSE (13.1) in the end, but hard disc access was accompanied by those errors the whole time. So I bought a new hard disc, put it in next to the old one, and did a full reinstall (openSUSE Tumbleweed), and everything was fine (I ignored the old hard disc from that moment on).
But because I normally do not reboot my system, and when I do, it is usually when I am about to go home (so I do not watch the process), I did not notice a problem that came along with still having the old broken hard disc plugged into the system.
(I also did not notice that during the reinstallation on the new disc, the openSUSE installation on the old disc was recognized and added to the GRUB menu.)
The problem was that during boot I still got those error messages, probably because some hard disc accesses during boot are unavoidable.
After noticing that, I just unplugged the old disc and rebooted. This time I got no error messages, but it seems that systemd is still looking for that old hard disc, which gives the error:
a start job is running for dev-sdc1.device
with a 1:35 minute timeout!
How can I remove those references?
I noticed two things:

  1. /etc/fstab does not list the old hard disc
  2. in the YaST bootloader kernel parameter settings there is this entry: resume=/dev/sdc1
    I do not know why it is there; can I remove it?
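For reference, you can check where that stale resume= entry actually takes effect. A minimal sketch, assuming the standard openSUSE locations (/proc/cmdline for the running kernel, /etc/default/grub for the persistent copy that YaST edits):

```shell
# Show the resume= argument the running kernel was actually booted with
tr ' ' '\n' < /proc/cmdline | grep '^resume='

# The persistent copy that YaST's Kernel Parameters tab edits lives here
grep 'resume=' /etc/default/grub
```

If the first command prints resume=/dev/sdc1, that explains the start job: systemd-hibernate-resume waits for that device to appear before giving up.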

So you can hibernate your system. By default the installer puts a reference to the swap partition there.

can i remove it?

Of course, if it is wrong you should remove it. You may consider replacing /dev/sdc1 with your current swap partition, or removing it completely if you never intend to hibernate.
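A quick way to find what resume= should point to (a sketch; /dev/sdc1 below is just the device from this thread, substitute your own):

```shell
# List the active swap device(s); resume= should match one of these
swapon --show

# Better: look up the partition's UUID so a future drive reshuffle
# (sda/sdb/sdc changing order) cannot break resume again
blkid -s UUID -o value /dev/sdc1
```

You can then set resume=UUID=<that value> in the kernel parameters instead of a /dev/sdX path, which stays valid even if device names change.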

I too got this error. When I installed Gecko Linux, I only had the SSD boot drive connected to the system, and the installer partitioned it as sda1, sda2, sda3.

After the install, I connected a Promise controller with two 2 TB drives in a Linux RAID stripe. These drives became sda and sdb, my SSD boot drive became /dev/sdc, and I get this message at boot, after a 1 min 30 sec timeout:

A start job is running for dev-sda1.device.

Followed by:

[TIME] Timed out waiting for device dev-sda1.device.
[DEPEND] Dependency failed for Resume from hibernation using device /dev/sda1.

I need to edit the file that points to the swap partition as /dev/sda1 and change it to /dev/sdc1, so that not only will the message go away, but the system will also suspend and resume properly.

What file would this information be stored in?

Assuming you can boot, or have a live Linux disk/USB, show us the output of fdisk -l.

Is the Promise card a true hardware RAID or FAKE RAID (BIOS-assisted) card? Promise sells both kinds, I believe, and it is often hard to tell, since they don’t document it well, at least in their ads.

Thanks to the writer of the Gecko Linux TUMBLEWEED distro I am using; he pointed me to the YaST2 Boot Loader, where the information was stored in the Kernel Parameters tab, showing the path the system was looking for:

“resume=/dev/sda1 splash=silent quiet showopts”

Changing this to:

“resume=/dev/sdc1 splash=silent quiet showopts”

Resolved the timeout on boot. I have yet to test whether the system will suspend or hibernate and then resume properly.
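For anyone without YaST handy: the same change can be made from a shell, since the Kernel Parameters line is stored in /etc/default/grub. A sketch, assuming the standard openSUSE paths (back up the file first; the /dev/sdX names are the ones from this thread):

```shell
# Swap the stale resume target for the real one in the bootloader defaults
sudo sed -i 's|resume=/dev/sda1|resume=/dev/sdc1|' /etc/default/grub

# Regenerate the GRUB menu so the change takes effect on the next boot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```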

Yep, this one is fake RAID. I use it as a dumb SATA controller and don’t define any arrays. I used mdadm in Linux Mint 18 to define the RAID 0 stripe, and when I connected the controller in Gecko Linux and ran mdadm, it found the array and mounts it correctly, and all is well. I was VERY HAPPY to be able to migrate the array to a different Linux!

Editing the kernel parameters in the YaST2 Boot Loader fixed the issue with the timeout on boot. Y’all were very helpful, thank you very much!