How to stop systemd from starting a service/application at boot

Hi Folks,
I am new to systemd. I have been reading the man pages but still can’t figure out how to stop systemd from starting a particular service/application at boot time. I am having problems with my md RAID array: I am trying to remove a disk, but mdadm won’t let me because some application still has a hold on it. There are a number of md processes running and ‘kill -9’ isn’t killing them. My plan is to stop systemd from starting md at boot and then reboot. If anyone can suggest a better way to do it, let me know, but I would still like to know how to control systemd. Thanks in advance for any help you are willing to throw my way.
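
To answer the question as asked: with systemd you normally use systemctl to disable (or mask) the unit you don’t want started at boot. A minimal sketch, assuming the md-related unit is called mdmonitor.service (the actual unit names on your system may differ, so check first):

    # Find the md-related units actually present on this system
    systemctl list-units --all | grep -i md
    systemctl status mdmonitor.service

    # Keep the unit from being started at boot
    systemctl disable mdmonitor.service

    # Or mask it, so it cannot be started even as a dependency of something else
    systemctl mask mdmonitor.service

Note that md arrays are usually assembled by the kernel/udev rather than by an ordinary service, so disabling a systemd unit may not be enough by itself to keep the array from being assembled.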

Probably best to make such changes from a live DVD/USB (openSUSE rescue disk). Changing basic stuff on a running system is like changing your tires while driving down the road :stuck_out_tongue:

If you are looking for a good tutorial on how to use systemd, watch this talk by Red Hat:

https://www.youtube.com/watch?v=S9YmaNuvw5U

Is the RAID array mounted? You should probably tell us more about how your RAID array is configured (for example, the output of the commands below) so others can advise further.
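
Something along these lines would show the relevant details; the device names are only placeholders, so adjust them to your setup:

    cat /proc/mdstat
    mdadm --detail /dev/md0                          # replace md0 with your array
    mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1    # the member partitions
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT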

Thanks to those who replied. You are right of course. When I started the post I was focused on what I intended to do to fix the md problem. Maybe I should just post a new thread about the md issue.

The system is on an 80 GB drive. I have a RAID5 made of three 500 GB SATA drives which would normally be mounted on /data; right now the RAID is not mounted, but there are a number of md processes running. I created the array about 5 years ago and it hasn’t given me any problems, so I have forgotten everything I learned about mdadm. It is very possible that if I were giving the correct commands this would all work without trying to stop the md processes.

The issue I am having with the RAID5 is that one of the disks is reading as faulty. I don’t think there is any problem with the disk hardware, because I can run fdisk on the disk and recreate the partition (whole disk). I am trying to re-add the disk so md will rebuild it, but I can’t get the disk back into the array. When I examine the disk it shows as “faulty removed”. I don’t remember all the different mdadm commands I have tried, but nothing has worked, hence this post.

I probably should add that I didn’t see this problem until I upgraded the system from openSUSE 12.3 to Leap 42.1. When Leap took over it changed the names of the drives. The system has two 80 GB drives and the three 500 GB drives for the RAID. Under 12.3 the RAID drives were sda, sdb and sdc, and the 80 GB drives were sdd and sde. Under Leap 42.1 the 80 GB drives are sda (system) and sdb (spare), and the RAID drives are sdc, sdd and sde. The first two look OK; the third is the one mdadm says is “faulty removed”. After Leap started up I couldn’t get /data mounted. I discovered the name changes and had to adjust /etc/mdadm.conf to get it to mount.

Now that I have given (I think) all the details, what is the solution? I have faith in your collective wisdom. :wink:
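
The usual sequence for putting a member back into an array looks roughly like the sketch below. This assumes the array is /dev/md0 and the “faulty removed” member is /dev/sde1 — both names are guesses from your description, so verify them first, and be aware that --zero-superblock wipes the md metadata on that partition:

    # Check what the array currently thinks of its members
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Drop the failed member (it may already be gone), wipe its stale superblock,
    # then add it back so the array rebuilds onto it
    mdadm /dev/md0 --remove /dev/sde1
    mdadm --zero-superblock /dev/sde1
    mdadm /dev/md0 --add /dev/sde1

    # Watch the rebuild progress
    cat /proc/mdstat

It may also be worth regenerating the ARRAY line in /etc/mdadm.conf from the output of “mdadm --detail --scan”, since that identifies the array by UUID and is immune to the sda/sde renaming you saw after the Leap upgrade.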

Before you try to add that disk back into your array,
you should install the SMART tools if needed (in fact, it may be a SMART notification that is telling you the disk is failing) and read the statistics stored on the drive…
You may find that the disk is in the process of failing but has been automatically compensating (likely by remapping disk blocks).
If your disk is in the process of failing, it may only have a few weeks at most before it fails completely, and then you’d be relying entirely on your RAID to protect you from data loss.
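
On openSUSE that would look roughly like this; the device name is only a placeholder for the drive being reported as faulty:

    # Install the SMART tools if they are not already there
    zypper install smartmontools

    # Overall health verdict and the full SMART attribute/error report
    smartctl -H /dev/sde
    smartctl -a /dev/sde

    # Kick off a long self-test, then read the results once it finishes
    smartctl -t long /dev/sde
    smartctl -l selftest /dev/sde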

Note that these kinds of physical errors won’t be reported or visible to the file system.

That said,
I have a 3TB Seagate SATA drive that first triggered an “About to fail” warning about 2 years ago.
I tracked the error and found that, although the warning had triggered, there didn’t seem to be any progression.
I recognize that the data on that disk is at risk and requires constant attention in case the problem becomes something unrecoverable.
It’s the only disk I’ve ever owned or managed that triggered this warning and didn’t become a major issue in short order; practically all the other disks that triggered this kind of error failed completely very shortly thereafter.

TSU